Discussion about this post

Hollis Robbins (@Anecdotal):

For me, Butler always signifies Judith Butler, and I have to readjust every time.

Performative Bafflement:

I'm mentally paraphrasing your argument as "sure, our road to GAI is probably going to have a few economic crashes, self-driving car fiascos, and megadeaths for various reasons, but that's how we LEARN and put safeguards in. Look at the history of cars, or electricity, or steel refinement, and how many people died along the way to our pretty safe present day!"

And I grant that; it's pretty reasonable for the level of tool AI we see today and can anticipate in the next 3-5 years or so. But I think that's not actually addressing the real problem. The actual *existential* GAI risk isn't amenable to the "mess up, then put safeguards in after the fact" methodology, because it's all about AI gaining the intelligence to self-modify and self-improve its capabilities (whether via superior software and programming, adversarially taking over multiple server farms, or making lots of money with creative content or finance and then buying or designing better hardware).

If we wait until THAT screw-up, we aren't going to be able to put safeguards in after the fact, because a GAI would be smarter and more capable than any coordination-capable ensemble of humans. A GAI of that intelligence level could have digital fingers on all of Russia's nukes, for example, and could ensure MAD, total global banking and economic collapse, and more if we started bombing server farms. I mean, just think of China today as an AI: if it went rogue and adversarial to all other life, what could we actually, constructively do to prevent it and put safeguards in? All we can do is threaten or ensure the entire world's destruction, not any positive outcome. And a GAI should arguably be *more* capable than today's China.

