6 Comments
Feb 1, 2022 · edited Feb 1, 2022 · Liked by Rohit Krishnan

Nice one - needed this sequel. The observation that you shouldn't focus on hit rate isn't novel, but two things add new complexity: past success at predicting "hard to predict things" doesn't guarantee future success, and quantifying what a "hard prediction" even is in the first place is very difficult. Not to mention that we're trying to maximize expected gain, not hit rate per se. Like Soros said, "It's not whether you're right or wrong that's important, but how much money you make when you're right and how much you lose when you're wrong."
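To put rough numbers on that Soros line, here's a minimal sketch in Python with invented payoffs - a low hit rate can still carry the higher expected value:

```python
# Minimal sketch, invented payoffs: the expected value (EV) of a bet is
# p(win) * gain - p(lose) * loss, so hit rate alone says very little.

def expected_value(p_win: float, gain: float, loss: float) -> float:
    """EV of a bet that wins `gain` with probability p_win, else loses `loss`."""
    return p_win * gain - (1 - p_win) * loss

# Right 70% of the time with symmetric payoffs: EV = +0.40
frequent_winner = expected_value(p_win=0.70, gain=1.0, loss=1.0)

# Right only 15% of the time, but the wins pay 20:1: EV = +2.15
rare_winner = expected_value(p_win=0.15, gain=20.0, loss=1.0)

print(f"High hit rate, symmetric payoff EV: {frequent_winner:+.2f}")
print(f"Low hit rate, asymmetric payoff EV: {rare_winner:+.2f}")
```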

Fwiw, I analyzed past predictions of doom by Burry (who definitely got more attention than Roubini because of Bale, I guess :p), and also the track record of Cramer... What I didn't do is compare the hit rates of doom predictions by multiple experts on the same topics - I'm not sure that would even be helpful in future scenarios, considering that true surprises may stump all the existing experts, but it's something to think about...

The comments in the rationality blogs make sense in hindsight as well, but alas, how do we know what to look for in times of crisis?


"In almost all cases it's better to fight the incentives to maximise hit rate and try to maximise EV instead. Even in cases you're going to sound silly, it seems worthwhile to be a Roubini rather than, say, Jim Cramer."

This seems to assume that in almost all cases there are extraordinarily asymmetric payoffs *in the right direction*. For instance, shorting the market has a theoretically unlimited downside, so in that case being wrong more often is bad. If you are a doctor, an engineer, or an airplane pilot, the same applies. So I'm not sure we can say that we should be wrong more often in almost all cases - only that we should first identify extraordinarily asymmetric *positive* payoffs and then accept being wrong more often.
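To illustrate with invented numbers (all payoffs hypothetical), here's a small Python sketch showing that the direction of the skew, not the hit rate, does the work:

```python
# Minimal sketch, invented payoffs: the asymmetry has to point the right way.
# With a capped gain and an outsized loss (a short that moves against you),
# even a high hit rate loses money; flip the skew and a low hit rate wins.

def expected_value(p_win: float, gain: float, loss: float) -> float:
    return p_win * gain - (1 - p_win) * loss

# Right 85% of the time, but the rare loss is 20x the gain: EV = -2.15
short_like = expected_value(p_win=0.85, gain=1.0, loss=20.0)

# Wrong 85% of the time, but the rare win is 20x the loss: EV = +2.15
long_shot = expected_value(p_win=0.15, gain=20.0, loss=1.0)

print(f"Negative skew, high hit rate EV: {short_like:+.2f}")
print(f"Positive skew, low hit rate EV:  {long_shot:+.2f}")
```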

Excellent article otherwise.
