I.
If you think the world will end because of AI, how do you bet on that?
Tyler Cowen asked this a while back too: if you believe what you believe so strongly, why aren’t you acting on that belief in the market?
This question comes up again and again. In finance it’s a well-worn topic. But with technology it’s harder; that’s why Warren Buffett famously doesn’t (didn’t) invest in tech. With newer technology it’s harder still. And with emergent general-purpose technologies like AI, hardest of all.
There are two common answers to this, e.g., from Eliezer here or Emmett here.
1. We did act on that belief. We bought Nvidia. We got rich. So there!
2. We can’t act on that belief. It’s an unknown unknown, an unforeseeable event. Markets will forever remain irrational about those.
The first one is of course correct if your belief is that AI will become big and influential, which it needs to do if it is to become big, influential and deadly. But the “deadly” part is explicitly not being bet on, so it kind of misses the core point.
The second answer is the complicated one. Yes, there exist things we can’t bet on easily. The existence of ice-9. The chance of a gamma-ray burst destroying Earth. After all, you only put money in the futures you can predict.
But there’s an implicit assumption here that Tyler also writes about, that not only can you not predict the outcome, you can’t convince anyone else of the outcome either.
Because if you could convince more people that the world is ending, they would act on that belief. At the margin there is some point where the collective belief manifests in the markets.
Which means for you to not take any action in the market you have to not only believe that you are right, but also that you can’t convince anyone else that you are right.
The way to make money in the market is to be contrarian and right. Here you are contrarian but convinced nobody will ever find out you’re right.
Picture the 2x2 of whether you’re right and whether the market agrees with you: contrarian-and-right is Quadrant 2, consensus-and-right is Quadrant 4. Going from Quadrant 2 to Quadrant 4 is where you make a lot of money, as the market grows to agree with your (previously heterodox) view.
The second part there is important, because markets are made up of the collective will of the people trading in them. Animal spirits, as Keynes called them. Or Mr. Market in Ben Graham’s analogy. Both are anthropomorphised personifications of this collective will, made up not just of reality, but also of people’s perception of reality.
It’s worth asking why we think that others can’t be persuaded of this at all, until they all drop dead. Why wouldn’t you at least roll over long-dated puts, assuming you have any model of catastrophe other than almost-instantaneous “foom”?
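To put rough numbers on that, here’s a minimal sketch of what rolling crash protection costs, assuming textbook Black-Scholes pricing; the spot, volatility, rate and strike below are made-up inputs, not real quotes.

```python
# Toy estimate of the annual cost of rolling one-year index puts,
# under textbook Black-Scholes assumptions. All inputs are invented
# for illustration, not real market quotes.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_put(spot, strike, rate, vol, t):
    """Black-Scholes price of a European put."""
    d1 = (log(spot / strike) + (rate + vol**2 / 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-rate * t) * N(-d2) - spot * N(-d1)

spot, rate, vol = 100.0, 0.04, 0.20  # hypothetical market inputs
strike = 0.8 * spot                  # 20% out-of-the-money crash protection

premium = bs_put(spot, strike, rate, vol, t=1.0)
print(f"One-year 20%-OTM put: ~{premium:.2f} per 100 of spot "
      f"({premium / spot:.1%} of notional, paid every year you roll)")
```

On these toy numbers the bleed is under 1% of notional a year: cheap if doom arrives soon, ruinous if you’re early by a decade.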
II.
Now look, what we bet on, and how, depends on how we see the future. You need to have some point of view. For instance, if we do see indications that the technology is not particularly steerable, nor something we can easily defend against, then you should update towards the world coming around to your view that this tech ought to be severely curtailed.
The populace is already primed to believe that AI is net terrible, mostly for a variety of terrible reasons about energy usage and stolen valour, having been conditioned by several years of anger against social media.
So one way to think about this: if people are at all bound to be convinced, you should believe you can move towards Quadrant 4, and that your bets will become market consensus pretty fast. How fast? About as fast as the field of AI itself is moving.
Or you could just bet on increased volatility, which is bound to happen. On giant crashes in specific industries if this comes to pass. On increased interest rates.
But there’s a reason being short the market and waiting might not work. Markets are path dependent. For most instruments you might get wiped out as you wait for the markets to catch up with you1. As Keynes’ dictum goes, the market can remain irrational longer than you can remain solvent!
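A minimal sketch of that path dependency, with an invented price path: the short seller’s terminal view is right, but the margin call arrives first.

```python
# Toy illustration of path dependency for a short seller: the asset
# does end down 60%, but the rally along the way exhausts the margin
# first. The price path and capital are invented for illustration.
def short_pnl(path, capital):
    """Track equity on a 1-unit short opened at path[0]."""
    entry = path[0]
    for t, price in enumerate(path):
        equity = capital + (entry - price)  # a short gains as price falls
        if equity <= 0:
            return f"wiped out at t={t} (price {price})"
    return f"survived; final equity {equity}"

path = [100, 130, 170, 210, 60, 40]  # rallies 2.1x before crashing
print(short_pnl(path, capital=100))  # -> wiped out at t=3 (price 210)
```

Being right about the destination (40, down 60%) never pays out, because the detour through 210 ends the trade first.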
So if you don’t have a position on how things will evolve, not just where they will end up, then either you have to come up with one that you’re sufficiently tolerant of, or you will have to think beyond the financial market and the instruments it gives you.
If you had sufficient liquidity you might be able to convince the banks to create a new instrument for you, though this is expensive.
You could walk into a bank and tell them: “Hey, I have this belief that we’re all going to go foom soon enough, and people will realise it too late. The payoff distribution is highly uncertain. The path to get there is also uncertain. I only know that this will happen. I want you to make me an AI Collapse Linked Note (ACLN).”
And if you have a few million dollars they’ll happily take it off your hands and write you a lovely new financial product. It’s going to be complicated and expensive but as an end-of-the-world trade it’s possible.
Or maybe you want an open-ended catastrophe bond: continuous premiums paid in, and a payout triggered when an “event” occurs. Call it a Perennial AI Doom Swap (PADS).
You could even make this CDO-style, with tranched risks. You know, AI misuse risk as the equity tranche. Mezzanine can be bio-risk. And a senior tranche for societal collapse or whatever.
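A toy waterfall for how that might pay out; the attachment points, widths and risk labels below are arbitrary illustrations, not a real structure.

```python
# Toy loss waterfall for the hypothetical tranched doom product above.
# Tranche widths and labels are arbitrary, purely for illustration.
TRANCHES = [                          # (name, share of total notional)
    ("equity: misuse risk", 0.10),    # absorbs the first losses
    ("mezzanine: bio-risk", 0.30),
    ("senior: societal collapse", 0.60),
]

def allocate_losses(loss_fraction):
    """Spread a total loss (0..1 of notional) bottom-up across tranches."""
    remaining = loss_fraction
    hit_by_tranche = {}
    for name, width in TRANCHES:
        hit = min(remaining, width)
        hit_by_tranche[name] = hit / width  # fraction of the tranche lost
        remaining -= hit
    return hit_by_tranche

for loss in (0.05, 0.25, 0.80):
    print(loss, allocate_losses(loss))
```

Small catastrophes eat the equity tranche; in this toy, the senior “societal collapse” slice only attaches once total losses exceed 40% of notional.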
Or.
III.
I asked our current AGI this question and it ended with this:
So instead you could bet reputation and time that what you say will happen will happen, so that your star rises from the prediction. It’s a sort of low-cost call option on your reputation. Ideally you could try to be specific: “I think deploying X type of AI systems in Y industries will give rise to Z”2.
But maybe you believe that we cannot predict the trajectory at all. That the market for “AI will kill everybody” is a step change, and the difference between the day before everybody dies and the day everybody dies will be impossible for humans to detect in time. Or even the hour before and the hour of.
This, as you might have noticed, is unfalsifiable. It is a belief about a very particular pathway of technological development, one more akin to a Eureka moment than to the development of anything you’d think of as technology, including a pencil. And one we have absolutely no way of predicting: not its path, nor its results.
If you are in this camp then there are no bets available to you. The PADS above might work, but even that’s hard. What Zvi and Eliezer say is true. I thought they might have been wrong when I started, but I have changed my mind on this. If you are so unsure that you can tell neither what will happen nor when, then there is very little you can bet on.
The social and time-investment options remain available though. Scaring people remains a viable strategy. Public advocacy, investing time to find ways to change opinions, moving more people to Quadrant 4: all remain available. One might talk to more people, do podcasts, write blog posts, even write books3!
IV.
If we step back for a second to state the obvious, it’s really difficult to figure out what the future holds. And a major reason why betting on these beliefs is hard is that betting requires a market with a clear resolution, a mechanism to make that resolution come about, and a way to survive the path dependency, especially if you want to short.
It’s not just doom though; positive visions of what AI can do remain scarce too. The best the realists can muster is often “like today, but more efficient”. Or, if you’re provocative, “like today, but with more animism”. Or, if you’re really precocious, “like The Culture, you know, the Iain Banks novels, it’s great, have you read it?” This is made harder because you don’t know what you don’t know.
Unlike the question of what a world with very cheap energy looks like, a world with very cheap intelligence isn’t so easily imagined, because intelligence isn’t so easily defined.
But it's still important to have a view, if you're to make bets on it. A notable counterexample is this, from Dario Amodei, cofounder of Anthropic.
Dario’s essay starts here, and then teases out the conclusion across biology, energy, health and the economy.
Most of the arguments and discussions about AI start with suppositions like this.
But this is also exactly the problem.
What if you don’t agree that the models will be like having Nobel winners in a jar? Again, rather famously, Nobel winners didn’t get there through sheer dint of ‘g’ alone, but also through creativity, an ability to make unique connections, and the occasional knack for finding inspiration in dreams!
If you agree with the suppositions, then of course the conclusions in many cases are self-evident. Like “if you have an army of Nobel laureates at your disposal who are indefatigable and incredibly fast, then you might be able to do a century’s worth of technological progress in a decade.”
(The thing about extremely unpredictable, high-dimensional environments is that a “greedy” algorithm is probably more sensible than one that tries to figure out the theoretical optimum.)
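A toy model of that aside, with entirely invented dynamics: the “environment” is a target that drifts every step; a greedy agent re-decides each step, while a planner commits at t=0 to the path that was optimal then.

```python
# Toy model of the greedy-vs-optimal aside. The target drifts each
# step; "greedy" re-aims every step, "planner" follows the path that
# was optimal at t=0. All dynamics are invented for illustration.
import random

def run_trial(rng):
    target, greedy, planner = 0.0, -50.0, -50.0
    plan = [1.0] * 50 + [0.0] * 50   # optimal 100-step path as of t=0
    for step in plan:
        target += rng.gauss(0, 1)    # the world drifts unpredictably
        planner += step              # sticks to the stale plan
        greedy += max(-1.0, min(1.0, target - greedy))  # re-aims each step
    return abs(target - greedy), abs(target - planner)

rng = random.Random(0)
misses = [run_trial(rng) for _ in range(1000)]
print("avg final miss, greedy:  %.1f" % (sum(m[0] for m in misses) / 1000))
print("avg final miss, planner: %.1f" % (sum(m[1] for m in misses) / 1000))
```

The planner’s path was genuinely optimal against the world as it stood at t=0; it just isn’t the world the last step lands in.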
So maybe the question we started with isn’t a “tradeable” one. You could work really hard at creating a financial instrument that somehow satisfies the criteria you set up. Or you could farm karmic credits through prediction markets and punditry. But without specificity in the outcome and some sense of the path, there are no real beliefs you can bet on, only opinions.
I think this is where many public intellectuals in the space are, except without any level of specificity. No, putting a bet on Metaculus about when everyone would die is not sufficient, although it is a good start.
They won’t do as well as the previous group, because if there are zero externally visible problems, the public might well not update in your direction. Or you might get unrelated externally visible problems, in which case they might?
I think Astral Codex Ten had an article about this a while ago, although I can't find it now.
Suppose AGI is coming and it will lead to either 1) everyone dying suddenly, or 2) everyone becoming rich beyond anything they could imagine today.
In that world, you want to front-load personal wealth as much as possible. Don’t save for retirement; in fact, take out as many loans as possible, etc. You are either partying at the end of the world or fronting yourself some of your future riches.
This is a sentiment I have encountered before: people proudly saying they aren’t saving for retirement because AGI. Then again, coming up with thin rationalizations for not saving for retirement is a time-honored activity, so I am not sure how much of it can really be ascribed to principled AGI investing.
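To make that logic explicit as arithmetic, here is a crude expected-utility sketch; the probabilities, amounts and log utility are all made-up toys, not a model of anything real.

```python
# Crude expected-utility arithmetic for the front-loading argument
# above. Probabilities, amounts and the log utility are made-up toys.
from math import log

p_doom, p_rich = 0.5, 0.5   # assumed binary AGI outcomes
baseline = 100              # consumption you get today either way
windfall = 10_000           # post-AGI riches in the good branch
saved = 50                  # the amount you could save or spend now

def expected_utility(spend_now):
    keep = 0 if spend_now else saved
    u_today = log(baseline + (saved - keep))
    u_doom = 0.0                   # nothing carries over if everyone dies
    u_rich = log(windfall + keep)  # savings are a rounding error here
    return u_today + p_doom * u_doom + p_rich * u_rich

print("spend today:", round(expected_utility(True), 3))   # ~9.62
print("save today: ", round(expected_utility(False), 3))  # ~9.21
```

On these toy numbers spending wins, and wins by more as p_doom rises or the windfall grows; savings only matter in a branch this stylised binary world doesn’t have.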