Discussion about this post

Herbie Bradley:

> This gets worse once you think about the 22 year old wunderkinds that the labs are looking to hire, and wonder if they’d be interested in more compliance, even at the margin

Over the years I've been friends with many strong researchers in LLMs and diffusion models, working across pretraining, post-training, infra, evals, safety, etc. Despite my obvious selection bias, nearly all of them believe in building AGI, but they also tend to believe it should be done with some responsibility and care, regardless of their speciality. So it's no surprise to me that many of them have ended up at Anthropic, coming from OpenAI or GDM or academia, even those who never paid attention to the AI safety community.

I think this is just normie AI academic culture, and they basically all have PhDs. So I'm generally sceptical that a fully e/acc lab has any real advantage in talent.

Ted Sanders:

One "contrarian" belief I hold is that recursive self improvement doesn't imply first to ASI wins the lightcone.

Even if you're momentarily ahead by 1,000,000x on the y-axis, you're still only a few months ahead on the x-axis. If your competitors keep toiling and hit recursive self-improvement a few months later, and there's some distribution in the exponents of self-improvement, then the actual winner will be the lab with the best exponent, not the first one to start self-improving.
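The arithmetic behind this can be made concrete with a toy model. Assume (purely illustratively; the rates and the 3-month lag are made-up numbers, not claims about real labs) that each lab's capability grows exponentially once it hits recursive self-improvement:

```python
import math

def capability(t, rate, start):
    """Toy model: capability grows as exp(rate * time since takeoff); zero before takeoff."""
    if t < start:
        return 0.0
    return math.exp(rate * (t - start))

# Hypothetical numbers: Lab A hits recursive self-improvement at t = 0 months
# with exponent 1.0/month; Lab B follows 3 months later with exponent 1.5/month.
r_a, r_b, lag = 1.0, 1.5, 3.0

# Analytic crossover: exp(r_a * t) = exp(r_b * (t - lag))
#   =>  t = lag * r_b / (r_b - r_a)
crossover = lag * r_b / (r_b - r_a)
print(crossover)  # 9.0 months: the later lab with the better exponent overtakes
```

With these made-up parameters, Lab A's million-fold head start lasts only until month 9, after which Lab B's steeper exponent dominates forever, which is the sense in which the first mover's lead need not be durable.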

Even recursive superintelligence may not be a sustainable competitive advantage.

(The key uncertainty here is the extent to which a winner can kick away the ladder or suck up the oxygen in a way that makes it harder for others to follow. E.g., if they get rich and buy up all the GPUs, their monopoly is secure. But if you're a lab and you release proof of superintelligence, the other labs will accelerate, not decelerate. You'd have to play quite dirty to keep them from catching up, and I don't see any lab doing that.)
