Or how I learned to stop worrying and love AI
"...but it’s so incredibly important to point out that this is a destruction myth. It’s apocalypticism crowdwriting its own biography."
YES! It's not simply the belief, but all the activity devoted to predicting when AGI will happen: the surveys, the studies with their Rube Goldberg reasoning over order-of-magnitude guesstimates. This is epistemic theatre, and millions of dollars are being poured into it. This is cargo cult behavior. There may not be a Jim Jones or a jungle compound, much less toxic fruit punch, but this is a high-tech millennial cult. And it's all being conducted in the name of rationality.
I'm not sure what the argument here is.
People have predicted bad things/apocalypses in the past, and they didn't happen, so AI is fine?
The core argument of those who are concerned about AI isn't that "something could go very wrong"; it's that:
1) Alignment is extremely difficult, even in principle (for which there are many, many extensive arguments, not least from MIRI)
2) We have no reliable method of telling when we cross the threshold past which it could be dangerous, at which point it's too late for troubleshooting
The essay above doesn't seem to offer any specific counterarguments to those concerns.
I'm not even personally an AGI guy (for what it's worth, my donations go to global poverty/health), but the arguments are much stronger than you present them, and they're worth addressing directly.
I think the best response to this genre of AI risk commentary comes from SSC’s “Maybe the Real Superintelligent AI is Extremely Smart Computers” (https://slatestarcodex.com/2018/01/15/maybe-the-real-superintelligent-ai-is-extremely-smart-computers/):
“In my own psychiatric practice, I am always very reluctant to assume a patient is projecting unless I know them very well. I’ve written more about the dangers of defense mechanism narratives here, but the short version is that amateur therapists inevitably end up using them to trivialize or psychologize a patient’s real concerns. I can’t tell you how many morons hear a patient say “I think my husband hates our kids”, give some kind of galaxy-brain level interpretation like “Maybe what’s really going on is you unconsciously hate your kids, but it’s more comfortable for you to imagine this of your husband”, and then get absolutely shocked when the husband turns out to be abusing the kids.
Accusing an entire region of California of projection is a novel psychoanalytic maneuver, and I’m not sure Chiang and Buzzfeed give it the caution it deserves. The problem isn’t that they don’t have a plausible-sounding argument. The problem is that this sort of hunting-for-resemblances is a known bug in the human brain. You can do it to anything, and it will always generate a plausible-sounding argument.”
It’s frustrating, as someone who’s on the periphery of AI safety work, to see people come in and dismiss this whole field of possibly *critically important* research with, “But did you know religions talk about the apocalypse?” And Rohit’s proposed obvious “solutions” look unworkable to most people who’ve spent time thinking seriously about the difficulties involved.
Tl;dr: analogizing AGI concerns to eschatology doesn’t tell us anything about whether AGI could actually kill people. If you want to know who’s right, you can’t just take a shortcut through the whole debate by comparing it to religion.
Hi Rohit, I really enjoyed reading your essay. I have always wanted to read more about AI risk, and this essay gives me the impetus to do so.