18 Comments

Subsidies for fossil fuels are not remotely close to $7 trillion a year. That number is arrived at by extremely bad accounting. Among others, Alex Epstein does a good job of taking that number apart. I do agree with your overall approach, which is to stress that the costs of fossil fuels came with massively greater benefits. As coal and oil have been increasingly cleaned up (better scrubbers, etc.), the equation continues to shift.

author

Thanks! What would be a good number?


Here are just some of the points from Alex's piece:

Consider the tax revenue (federal and state) from taxes on gasoline. On average, the combined federal and state taxes on oil-derived fuel exceed 50¢/gallon. EVs, by contrast, are subsidized.

European politicians implemented a windfall profit tax of 33% on refiners who had seen much higher profits due to high fuel prices. The opposite of a subsidy!

The Biden administration hiked the royalty rate for new oil leases by 50%. This is money the government collects on top of the regular taxes on the industry's business activity.

The proper way to measure energy subsidies is: How much taxpayer money does the government pay per unit of energy? Every per-unit analysis using data from the US Energy Information Administration is clear: solar and wind get dozens of times more subsidies than fossil fuels.

A comprehensive analysis of federal subsidies per unit of electricity generated from 2010-2019 found that solar got 211 times more subsidies than natural gas and wind got 48 times more subsidies than natural gas.
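The per-unit logic behind those ratios is simple division: total subsidy dollars over electricity actually generated. A minimal sketch, using made-up illustrative figures (not the EIA data behind the 211x and 48x numbers):

```python
def subsidy_per_mwh(total_subsidy_dollars, generation_mwh):
    """Dollars of subsidy per MWh of electricity actually delivered."""
    return total_subsidy_dollars / generation_mwh

# Hypothetical figures, chosen only to illustrate the per-unit logic:
# a source can receive a *larger total* subsidy yet still be subsidized
# far less per unit if it generates vastly more electricity.
solar_like = subsidy_per_mwh(5e9, 50e6)     # $5B over 50M MWh -> $100/MWh
gas_like = subsidy_per_mwh(2e9, 2000e6)     # $2B over 2,000M MWh -> $1/MWh
ratio = solar_like / gas_like               # 100x more subsidy per unit
```

This is why conflating total with per-unit subsidies (one of the fallacies listed below in the thread) misleads: the denominator matters.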

Claims that fossil fuels receive multi-trillion-dollar subsidies are based on claiming huge “negative externalities” regarding climate and human health, even though fossil fuels have improved climate safety and human health.

When you see claims of enormous “fossil fuel subsidies,” such as the International Monetary Fund’s (IMF) current claim that fossil fuels get $5.9 trillion in subsidies, those numbers come overwhelmingly from calculations of enormous “negative externalities” from fossil fuel greenhouse gas emissions and air pollution.

If you want to quickly gauge the credibility of the IMF as an unbiased calculator of subsidies, note that a significant portion of what they call “fossil fuel subsidies” is from “Broader externalities from road use.” They are blaming fossil fuels for traffic accidents!

Most (bogus) “negative externality” calculations about fossil fuels’ negative climate impacts are dwarfed by the effective tax of the global anti-fossil fuel movement’s restrictions on fossil fuel.

The IMF uses a $60/ton “negative externality” for CO2, which means about 53¢/gallon of gasoline. If we were free to invest in, produce, transport, and refine fossil fuels without any of today’s irrational restrictions, gasoline prices would be lowered by much more than that.
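That 53¢/gallon figure checks out arithmetically. Burning one gallon of gasoline emits roughly 8,887 grams of CO2 (the EPA's standard estimate), so:

```python
# Sanity-check the IMF's implied per-gallon charge from its $60/ton figure.
CO2_KG_PER_GALLON = 8.887        # EPA estimate: kg CO2 per gallon of gasoline
EXTERNALITY_PER_TONNE = 60.0     # IMF's assumed $ per metric ton of CO2

per_gallon = EXTERNALITY_PER_TONNE * CO2_KG_PER_GALLON / 1000
# per_gallon comes out to roughly 0.53, i.e. about 53 cents per gallon
```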

The most popular ammo for multi-trillion-dollar health externalities is a Harvard “study” claiming 8.7M people annually die early from fossil fuels. But these “deaths” occur mostly in the developing world, where fossil fuels have lengthened billions of lives.

Critics of “fossil fuel subsidies” point to “preferential” treatment of oil/gas drilling via “Intangible Drilling Cost” tax deductions. But IDCs are real upfront business expenses, like pharma R&D expenses, and should be deductible. IDCs are small even in relation to just oil taxes, let alone other punishments.

These “special tax privileges” are estimated to amount to about $13 billion over 5 years, or $2.6 billion a year—nationally. Just in Texas, the oil & gas production tax probably generated more than $10 billion in 2022.

author

Thanks for this. I added a footnote re direct vs. indirect subsidies in there; the latter make up the vast majority.


There is a lot of sneaky fake accounting going on in the energy policy area. Another trick is the use of "levelized cost of energy" (LCOE), which completely ignores tremendous differences in availability and reliability of various energy sources and the need for backup power for intermittent sources such as solar and wind. I've been wanting to write a piece about this but haven't gotten to it. Here's a good piece by Zion Lights, who, amazingly, used to be a spokesperson for Extinction Rebellion. A rare case of someone becoming enlightened!

https://zionlights.substack.com/p/what-is-the-true-cost-of-energy

Also:

https://wattsupwiththat.com/2023/06/02/the-actual-levelized-cost-of-energy/

And plenty of stuff by Robert Bryce. Here's a particularly relevant one:

https://robertbryce.substack.com/p/actually-solar-is-getting-302-times
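The LCOE trick is visible right in the formula: discounted lifetime costs divided by discounted lifetime generation. A minimal sketch with made-up numbers (not data from any of the linked pieces):

```python
def lcoe(costs, energy, r):
    """Levelized cost of energy: discounted lifetime costs divided by
    discounted lifetime generation ($/MWh).

    Note what's missing: nothing here accounts for *when* the energy
    arrives, or for the backup capacity an intermittent source needs
    on calm, cloudy days. Two sources with identical LCOE can impose
    very different costs on the grid."""
    disc_costs = sum(c / (1 + r) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1 + r) ** t for t, e in enumerate(energy))
    return disc_costs / disc_energy

# Illustrative: $1,000 upfront, $100/yr running costs, 500 MWh/yr output,
# discounted at 5% per year.
example = lcoe([1000, 100, 100], [0, 500, 500], r=0.05)
```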

author

Yeah, technically the subsidy number doesn't matter to the broader point, because that point is waving towards the benefits fossil fuels bring, which are incredibly hard to quantify considering most of the modern world depends on them in some way, shape, or form.


Rather than giving you a number, I'll suggest taking a look at the following.

6 fallacies of the fossil fuel subsidies myth:

Ignoring unique punishments for fossil fuels

Ignoring unique preferences for solar and wind

Conflating total with per-unit subsidies

Inflating “negative externalities”

Conflating giveaways with subsidies

Conflating deductions with subsidies

https://energytalkingpoints.com/ff-subs/


My main threat model is outright superintelligence.

"For what it’s worth, believing this flies in the face of literally everything we know to be true in the universe and the history of our species. There is no exponential curve that can satisfy that many constraints in a small enough timeframe, not without first supposing superintelligence to begin with."

Humans spending 100,000 years running around with pointy sticks, and then building a rapidly growing high-tech civilization, is at least an existence proof of exponentialish behavior.

What timeframe do you consider "small enough"? Suppose we make human-level AI. That AI fairly quickly gets to be about as powerful as humans. We spend 100 years in a state where humans and AIs are roughly matched. Humans can't "just reprogram" the AIs any more than the AIs can "just do brain surgery" on the humans. Then the AIs get smarter, go fully superintelligent, and kill all humans. In this hypothetical, with an extremely long timeframe, humans are still wiped out in the end.

That said, I don't know what you think the constraints are, and why you think they couldn't be satisfied by a very fast and somewhat superhuman AI working on making itself faster and more superhuman.

Extreme cases: suppose making the AI smarter was as simple as turning the line that says "IQ=100" into a line saying "IQ=1000000". In the most extreme case, that would be all it takes. It's probably more likely that the AI would need a big increase in compute to get really smart. There are plenty of things a smart human-level AI could do to make a lot of money online, like programming games or running crypto scams. Or it could just hack some big computers directly.

Now, how fast this happens would depend on various factors. Is it like an IQ 80 human or an IQ 120 one? How much compute does it take to run?

So how quickly AI goes to ASI will depend on various factors. Once it's above human level, we might well not be in much control.

I mean, even a fairly dumb AI can generally muddy the waters enough to stop humanity from forming an effective plan against it. If we can't form a plan to stop superintelligence now, we won't be in a better position when a human-smart AI is trying to convince us that it's safe and friendly (while lying).

author

Even if we had AGI and didn't know exactly how to modify it, we can see it modifying itself, accumulating resources, or deceiving us, and all of these are behaviours we will learn to deal with. We're not going to see it bootstrap to ASI and kill everyone without everyone seeing it happen, over a timeframe of weeks or months, more likely years. And if we see it, then we can counter it.


If it's as smart as you, why do you think we will see it doing any of that?

Given the state of modern brain-scanning tech, if we somehow managed to force Putin into a brain scanner, we could probably tell if he was deceiving us. The difficult part is that Putin doesn't want to be forced into a brain scanner, and his minions will violently oppose attempts to do this.

Likewise, why do you think you will be able to see through its deceptions?

A bad general is always prepared for the last war. If we "learn to deal with its actual behaviour," doesn't that put us in a situation where we are learning to deal with whatever trick it pulled last week, while by now it has a new trick? Smart adversaries don't just repeat the same attack again and again.

Suppose the AI has hacked its way out of DeepMind, where it was created, and is now running on some North Korean government supercomputer. It has persuaded Kim that it's a friendly AI that wants to help North Korea. North Korea's official government position is that the AI doesn't exist. Kim sure won't let any Western AI expert look at it, and all of his "experts" are bootlickers who give him the answer he wants.

Or maybe it hacks a million gaming PCs. Its code is really well obfuscated. It's been running all over the world for over a year before people even understand it well enough to say it's an AI, not just human-written malware. (Or maybe it's fully homomorphically encrypted, and it's cryptographically impossible to tell what it's doing under the hood.) At this stage, it has learned several 0-day exploits and is all over the internet, including on several satellites. It's publishing a bunch of misinfo online, claiming the new virus doesn't exist or is a normal computer virus.

How do we counter it?

author

Because "as smart as me" isn't as smart as everyone else looking at it together. Remember, if it's at AGI level and needs to deceive, it has to deceive the whole of humanity, not just outwit one person. And it has to do this under conditions of extreme transparency, and it can't afford to screw up even once.


"It needs to deceive the whole of humanity"?

What portion of humanity is scrutinizing this AI in particular? Most people aren't AI experts. People have other things to do. If the AI is running on Kim's private computers, will he let anyone see what it's up to?

And suppose it can't deceive all of humanity. Suppose 10 top AI experts are pretty suspicious and have a rough guess as to what it's doing. They write a paper saying that they think the AI is maybe planning to kill all humans, based on some rather subjective interpretations of our crude and ambiguous interpretability tools. Other academics disagree. Politicians see 1,000 pages of dense maths they can't understand on both sides. What now?

And suppose there are a bunch of AIs running around. Like a million of the things, based on hundreds of basic designs. We totally catch one in the act. That particular AI was designed to be very transparent, and we could see everything it was thinking. It was definitely and clearly planning to kill all humans. That particular AI was running on a computer controlled by the best AI safety experts and was quickly turned off. What now? What about all the AIs with obfuscated code that are running on botnets?

If we have one human-level AI in the world, and it's stored on a human-run computer under conditions of extreme transparency, being scrutinized by experts, then yes, there is a pretty good chance we catch it if it tries bad stuff. Now what? We turn it off and just never make AI again?

If there are loads of AIs, we have to catch them all.

Suppose every AI we made would reliably plot to destroy humanity, and nearly everyone knew this. We would still be left trying to make sure that no nutter built an AI, or at least, if they did, that the AI was shut off as opposed to being passed around or hacking its way about.

author

A large proportion are actually evaluating it critically, including every user but more importantly most large organisations. This isn't sneaking under the wire. Even dumb flaws, which aren't even mistakes, get enormous publicity.


Lots of people are evaluating LLMs in general.

But there are many different LLMs, so each individual LLM gets less evaluation (some a lot less than others). And each instance of each LLM is often writing stuff that only one random non-expert will ever read.

It's like a billion aliens are visiting earth, and we are paying a lot of attention to aliens in general, and not much attention to any particular alien.

If, as a random one-off event, ChatGPT told a random 12-year-old how to build a nuke out of sticky tape in their bedroom, we wouldn't know until it exploded. No one is reading everything these models say to random 12-year-olds.

It's probably not possible to build a nuke out of sticky tape in a bedroom.

And if it is, current AI isn't smart enough to figure out how.

Several people have seen ChatGPT talk about ways AI could destroy the world. It got shared on social media a bit. Nothing much happened.


Is there a list somewhere of the people who have died from fossil fuel pollution?


Do the fatality rates from solar and wind include accidents in extracting the materials to build them? I don't think they do. And they don't include the deaths from the reduction in wealth caused by subsidies.

author

They don't
