Discussion about this post

Herbie Bradley

> This gets worse once you think about the 22 year old wunderkinds that the labs are looking to hire, and wonder if they’d be interested in more compliance, even at the margin

Over the years I've been friends with many strong researchers in LLMs and diffusion models, working across pretraining, post-training, infra, evals, safety, etc. There's obviously selection bias here, but all of them broadly believe in building AGI, and they also tend to believe it should be done with some responsibility and care, regardless of their speciality. So it's no surprise to me that many of them have ended up at Anthropic, coming from OpenAI or GDM or academia, even those who never paid attention to the AI safety community.

I think this is just because normie AI academic culture is like this, and they basically all have PhDs. So generally I'm sceptical that a fully e/acc lab has any real advantage in talent.

Godfree Roberts

China is pursuing a "fast follow" strategy focused on applications anyway??

China was never interested in LLMs. Until recently it invested in embedding AI in supply chains and manufacturing processes, from which it is already making billions. LLMs were an afterthought from a company with spare Nvidia cards and bright kids.

China settled its compliance issue in 2022 by allowing labs free rein while regulating public-facing apps.
