Discussion about this post

Max More:

Subsidies for fossil fuels are not remotely close to $7 trillion a year. That number is arrived at by extremely bad accounting. Among others, Alex Epstein does a good job of taking that number apart. I do agree with your overall approach, which is to stress that the costs of fossil fuels came with massively greater benefits. As coal and oil have been increasingly cleaned up (better scrubbers, etc.), the equation continues to shift.

Donald:

My main threat model is outright superintelligence.

"For what it’s worth, believing this flies in the face of literally everything we know to be true in the universe and the history of our species. There is no exponential curve that can satisfy that many constraints in a small enough timeframe, not without first supposing superintelligence to begin with."

Humans spending 100,000 years running around with pointy sticks, and then building a rapidly growing high-tech civilization, is at least an existence proof of exponential-ish behavior.

What timeframe do you consider "small enough"? Suppose we make human-level AI. That AI fairly quickly gets to be about as powerful as humans. We spend 100 years in a state where humans and AIs are roughly matched: humans can't "just reprogram" the AIs any more than the AIs can "just do brain surgery" on the humans. Then the AIs get smarter, go fully superintelligent, and kill all humans. In this hypothetical with an extremely long timeframe, humans are still wiped out in the end.

That said, I don't know what you think the constraints are, or why you think they couldn't be satisfied by a very fast and somewhat superhuman AI working on making itself faster and more superhuman.

Consider the extreme cases. Suppose making the AI smarter were as simple as turning the line that says "IQ=100" into a line saying "IQ=1000000". In the most extreme case, that would be all it takes. It's probably more likely that getting really smart would take a big increase in compute. But there are plenty of things a smart human-level AI could do to make a lot of money online, like programming games or running crypto scams. Or it could just hack some big computers directly.

Now, how fast this happens would depend on various factors. Is it like an IQ 80 human or an IQ 120 one? How much compute does it take to run?

So how quickly AI goes to ASI will depend on those factors. But once it's above human level, we may well not have much control.

I mean, even a fairly dumb AI can generally muddy the waters enough to stop humanity from forming an effective plan against it. If we can't form a plan to stop superintelligence now, we won't be in a better position when a human-smart AI is trying to convince us that it's safe and friendly (while lying).

