It’s hard to talk about AI without a good foundation exploring how it works, its history, and its future. So I wrote a book on the topic. Order it here!
How many people are we ok with dying to get a better world?
Ok, that’s somewhat macabre and kind of an abstract question, so let’s make it concrete with the question du jour: how many people are we ok with getting killed in exchange for advancements in AI?
This might seem like a heartless question, but since the leaders of the AI revolution, along with plenty of interested voices, are saying that this entire industry ought to be regulated, or even stopped in its tracks, we should get an idea of what they’re fearing.
I mean the downside is, at some point, that humanity loses control of the technology it's developing. … The urgency is to work and deploy it in a beneficial way. But at the same time, it can be very harmful if deployed wrongly. And we don't have all the answers there yet. And the technology is moving fast. So does that keep me up at night? Absolutely.
Sundar Pichai
According to those very same worriers, AI can be seen as revolutionary: anything from the most useful invention since fire, to the biggest thing since personal computers, to something with the power of (literally) multiple nuclear devices.
I've always thought of A.I. as the most profound technology humanity is working on… More profound than fire or electricity or anything that we've done in the past.
Sundar Pichai
Now, reading both the statements above, and considering what Google’s actually been doing, I would probably conclude that the actual level of worry is de minimis, and if anything the bigger worry is that Google won’t be the one to blaze the path.
But still, it’s a start. AI is a powerful technology and it is a dangerous technology. So maybe we can look at other technologies and try and learn something.
Since it’s really hard to know how many people personal computers actually harmed, much less the internet, we’re left grasping at actual physical calamities.
Let’s take nuclear first. Here’s a list, and the number you get from it is somewhere in the 4,000 range, mostly the deaths the WHO projects might have resulted from Chernobyl. Add nuclear weapons and this becomes around 200k people.
But there is an undisputed king of deaths from the source of our industrial progress: fossil fuels.
We consume around 137k TWh of fossil fuels a year as of 2022, and it’s growing. Just the subsidies for it amount to $7 Trillion1! It runs most of what we call civilisation, built it, and made almost everything we use possible2.

And it costs around 8 million deaths a year. That’s around 1 in 5 of all human deaths, the price of this, our method of building a better world. Even Covid, which killed 7m people worldwide, couldn’t beat this record3. That is the price we’re willing to pay, and have paid proportionally for many centuries, since deciding to build a better world for ourselves.
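To make the scale concrete, here is a back-of-the-envelope sketch using only the rough figures above; the inputs are this article’s own approximations, not authoritative estimates.

```python
# Back-of-the-envelope arithmetic using only the rough figures cited above.
# All inputs are approximations from the text, not authoritative data.

fossil_twh_per_year = 137_000        # global fossil fuel consumption, ~2022
fossil_deaths_per_year = 8_000_000   # deaths attributed to fossil fuels each year
fossil_subsidies_usd = 7e12          # headline subsidy figure (see footnote 1)

nuclear_deaths_incl_weapons = 200_000  # cumulative toll, weapons included

deaths_per_twh = fossil_deaths_per_year / fossil_twh_per_year
print(f"Fossil fuels: ~{deaths_per_twh:.0f} deaths per TWh, every year")

subsidy_per_mwh = fossil_subsidies_usd / (fossil_twh_per_year * 1e6)
print(f"Fossil subsidies: ~${subsidy_per_mwh:.0f} per MWh")

# Nuclear's entire historical toll, weapons included, is a small fraction of
# a single year of deaths attributed to fossil fuels.
share = nuclear_deaths_incl_weapons / fossil_deaths_per_year
print(f"Nuclear's cumulative toll vs one year of fossil deaths: {share:.1%}")
```

Swap in the $1.8 trillion direct-subsidy figure from footnote 1 and the subsidy per MWh falls to roughly $13; the deaths per TWh don’t move.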
So we have a question to answer. What damage is AI likely to cause, and what could its potential benefit be?
Is it going to cause terrorist attacks? If so, it can be analysed against other things that enable terrorist attacks, like nukes, and then we can get a sense of what AI is likely to cause and try to do the math. Nuclear power produces c.10% of the world’s electricity needs, around 2.5k TWh, with probably around $180 Billion of spend.
Is it going to cause massive unemployment or social shocks? I don’t know what the calculation of the negative impact of social media has been, or even if it’s been quantified properly, but it seems like a good place to start. In any case, what we know there also seems insufficient for us to do or learn anything from.
Is it going to wipe out all of humanity? ‘All’ is a much larger number than 8 million a year, and if true changes the calculus substantially. If that’s the problem, sure we should stop it. Though … how will we know if it will4?
AI is like jumping into a wormhole: One answer is that we can never know. This is the classic answer. We do not know what madness lies in the hearts of LLMs5. This is stated as fact, and if you believe it you also have to believe doing anything in the space is destructive for us. It can also only be true if we accidentally create superintelligence. Not just regular intelligence, or even ‘smarter than a dozen humans’ intelligence, but actual literal superintelligence.
For what it’s worth, believing this flies in the face of literally everything we know to be true in the universe and the history of our species. There is no exponential curve that can satisfy that many constraints in a small enough timeframe, not without first supposing superintelligence to begin with. And one man’s prior probability is insufficient to stop others from doing anything.

AI is like bioterrorism: A second answer is that yes, we can know, but we’ll have to look for it. That is the position of the majority of people calling for safety. That includes the governments (nobody thinks we’re all about to die any moment). That includes the regular public, who are scared of all four horsemen of the apocalypse, from war to famine to pestilence to death.
So, fine, let’s look for it. Have evals. Not governmentally mandated signoffs on models yet, because … why should we? We should get some indication that something truly bad has happened, or even is happening, or can predictably happen, before doing something.

AI is like nuclear weapons: A third answer is that of runaway negative impact. E.g. if using AI creates a runaway war, similar to what many worried about as they argued against Kissinger et al over nuclear exchange, then there is reason to tamp it down a bit. Now, you could look at the history of how we’ve traditionally seen this evolve since the first and only time nukes were used in a war, and draw the conclusion that maybe humans are actually pretty sensible when they’re at the helm.
Or, you could see this as taking an unreasonable risk over almost 7 decades and somehow surviving by luck. The evidence thus far on which way AI will go is in one direction, and the other can’t be falsified, but again, this falls into the wait-and-watch bucket. Or at the very least this requires a model with a clear chain of action on how AI is supposed to create runaway devastation. We did this for climate change and surely we can do something similar here.
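For what it’s worth, such a model doesn’t need to be elaborate. Here is a minimal sketch of what a “clear chain of action” estimate could look like: each link that has to hold for runaway devastation, with a conditional probability you can argue about and update. Every step and number below is a placeholder for illustration, not a claim.

```python
# A minimal sketch of a "chain of action" risk model: every link that must
# hold for runaway devastation, each with a guessed conditional probability.
# All steps and numbers are placeholders for illustration, not estimates.

chain = [
    ("AI reaches the relevant capability threshold",   0.30),
    ("It is deployed with access to critical systems", 0.20),
    ("Safeguards and human oversight all fail",        0.10),
    ("The failure escalates into runaway devastation", 0.05),
]

p_chain = 1.0
for step, p in chain:
    p_chain *= p
    print(f"{step}: p = {p:.2f}")

print(f"\nImplied probability of the full chain: {p_chain:.3%}")
# The point is not the output number but that each link is something we can
# measure, argue over, and falsify -- as was done for climate models.
```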
None of this helps with the fact that any equivalence we draw between AI and nuclear weapons or synthetic bio manufacturing is a much, much worse fit than the analogies we can draw between AI and the internet, the computer, or the mobile phone. The latter seems particularly apt today, but might not be particularly strong a year or three from now. But that doesn’t matter either, since we have to get to three years from now to understand how the analogy breaks down.
The same goes for those who think we can capture all the value within the light-cone. In a narrow sense, that’s a tautology that has to be true. However, this too should be backed by some estimation of the benefits we can actually receive. So far the evidence we have is what people want as they vote with their wallets, and while that’s useful, it’s not conclusive.
There’s a lesson. The lesson is that if you’re building in this arena, there’s no reason to feel guilty or stop unless you think what you’re building today, the immediate goal, is either irrepressibly powerful or irretrievably chaotic. And no matter how many folks are scared, that in itself will never be sufficient evidence. If you’re advocating change in this arena, or proposing to intervene in this process, you need to provide a cost-benefit analysis of some sort, or a model; otherwise you’re tilting at windmills.
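A bare-bones version of that cost-benefit analysis could be as simple as the sketch below: expected deaths caused versus expected deaths averted per year, under whatever assumptions the advocate is willing to defend. The scenario numbers are hypothetical placeholders; only the shape of the calculation is the point.

```python
# A bare-bones expected-value comparison. Every input is a hypothetical
# placeholder an advocate would need to justify; only the structure matters.

def expected_net_deaths(p_harm, deaths_if_harm, p_benefit, deaths_averted):
    """Expected deaths caused minus expected deaths averted, per year."""
    return p_harm * deaths_if_harm - p_benefit * deaths_averted

scenarios = {
    "pessimistic": expected_net_deaths(0.05, 10_000_000, 0.5, 100_000),
    "optimistic":  expected_net_deaths(0.001, 100_000, 0.9, 500_000),
}

for name, net in scenarios.items():
    verdict = "net harm" if net > 0 else "net benefit"
    print(f"{name}: {net:+,.0f} expected deaths per year ({verdict})")
```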
1. “Explicit subsidies (undercharging for supply costs) have more than doubled since 2020 but are still only 18 percent of the total subsidy, while nearly 60 percent is due to undercharging for global warming and local air pollution.” This would make the direct subsidies $1.8 Trillion, not $7. Which is smaller, but still, man, not that small!

2. Nuclear energy produces a 10th of the world’s electricity already, by the way.

3. Unless you count all the Covid-induced deaths via the excess deaths statistic, in which case it might be as big as 28m.

4. The rest are actual big, hairy problems. The other, milder, alternative is that AI is like the internet or AI is like the computer, which were momentous discoveries and reshaped society, but were huge net benefits and which we learnt to live with.

5. Or, before LLMs, other AI models which were thought to contain various forms of alien intelligence.
Subsidies for fossil fuels are not remotely close to $7 trillion a year. That number is arrived at by extremely bad accounting. Among others, Alex Epstein does a good job taking that number apart. I do agree with your overall approach, which is to stress that the costs of fossil fuels came with massively greater benefits. As coal and oil have been increasingly cleaned up (better scrubbers, etc.), the equation continues to shift.
My main threat model is outright superintelligence.
"For what it’s worth, believing this flies in the face of literally everything we know to be true in the universe and the history of our species. There is no exponential curve that can satisfy that many constraints in a small enough timeframe, not without first supposing superintelligence to begin with."
Humans spending 100,000 years running around with pointy sticks, and then building a rapidly growing high tech civilization is at least an existence proof of exponentialish behavior.
What timeframe do you consider "small enough"? Suppose we make human-level AI. That AI fairly quickly gets to be about as powerful as humans. We spend 100 years in a state where humans and AIs are roughly matched. Humans can't "just reprogram" the AIs any more than the AIs can "just do brain surgery" on the humans. Then the AIs get smarter, go fully superintelligent, and kill all humans. In this hypothetical, with an extremely long timeframe, humans are still wiped out in the end.
That said, I don't know what you think the constraints are, and why you think they couldn't be satisfied by a very fast and somewhat superhuman AI working on making itself faster and more superhuman.
Extreme cases. Suppose making the AI smarter was as simple as turning the line that says "IQ=100" into a line saying "IQ=1000000". In the most extreme case, that would be all it takes. It's probably more likely that the AI would take a big increase in compute to get really smart. There are plenty of things a smart human level AI could do to make a lot of money online. Things like programming games or running crypto scams. Or it could just hack some big computers directly.
Now how fast this happens would depend on various factors. Is it like an IQ 80 human or an IQ 120 one? How much compute does it take to run?

So how quickly AI goes to ASI will depend on various factors. Once it's above human level, we might well not be in much control.

I mean, even a fairly dumb AI can generally muddy the waters to stop humanity forming an effective plan against it. If we can't form a plan to stop superintelligence now, we won't be in a better position when a human-smart AI is trying to convince us that it's safe and friendly (when the AI is lying).