How do you know whom to trust?
At first I trusted the expert opinions
Then I went to their published papers
Then I trusted the polls, they'll be right
And the markets, predictions all in sight
Then I decided to hear the experts debate
In an octagon of words, though oft irate
Finally I looked inside my mind again
In search of the truth there, a search not in vain
At first I trusted the expert opinions
Philip Tetlock ruined this illusion for me, and before I was even born, which was quite rude of him. But his study said:
For each of these predictions, Tetlock insisted that the experts specify which of two outcomes they expected and also assign a probability to their prediction. He did so in a way that confident predictions scored more points when correct, but also lost more points when mistaken. With those predictions in hand, he then sat back and waited for the events themselves to play out. Twenty years later, he published his results, and what he found was striking: Although the experts performed slightly better than random guessing, they did not perform as well as even a minimally sophisticated statistical model. Even more surprisingly, the experts did slightly better when operating outside their area of expertise than within it.
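Tetlock's scheme, where confident calls gain more when right and lose more when wrong, is the signature of a proper scoring rule. A minimal sketch using the Brier score (my illustration of the idea; Tetlock's exact formula may differ):

```python
def brier_score(prob: float, outcome: int) -> float:
    """Squared error between the forecast probability and the outcome
    (1 = it happened, 0 = it didn't). Lower is better. A confident wrong
    call (prob=0.9, outcome=0) is punished far more than a hedged wrong
    call (prob=0.6, outcome=0)."""
    return (prob - outcome) ** 2

# A confident correct forecast beats a hedged correct one...
assert brier_score(0.9, 1) < brier_score(0.6, 1)
# ...but a confident wrong forecast is much worse than a hedged wrong one.
assert brier_score(0.9, 0) > brier_score(0.6, 0)
```

This asymmetry is what makes the scoring hard to game: you can't just shout every prediction at maximum confidence and hope.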
Other examples - one, two, three, four, five. Turns out experts are all too human when it comes to forming correct opinions.
Also:
Morgan Stanley's Mary Meeker, who made $15 million in 1999 while telling people to buy Priceline when it was at $165 a share and Healtheon/WebMD when it reached $105 a share, went silent as they collapsed toward zero.
Michael Lewis is as accurate as he is devastating. That one sentence tells you what you need to know about the pronouncements of the supposedly smart people around the world. The difference is that in financial services circles the performance is easier to see.
Then I went to their published papers
And then, moving on from mere opinions, we get to the heart of the scientific method: the Mecca of peer review and published papers. And lo and behold, the replication crisis. It's been explored by Alvaro de Menard in beautiful detail, by Vox in slightly more optimistic detail, and by papers themselves purporting to make us believe or disbelieve other papers, like a speeding ticket on a Möbius strip.
What's frustrating is not that this crisis exists, since we're all human (see the first point), but that the proposed remedies seem unequal to it. Better understanding of statistics, better mentoring and better experiment designs are all good ideas, but they're not new ideas, and they're neither good enough nor new enough to get us past this crisis.
Then I trusted the polls, they'll be right
Coming to the fore in the fraught and unscientific world of politics, this seems to be widely believed. Nate Silver disagrees about the magnitude of this failure, but as far as I can see he doesn't disagree about the direction. Widely read media outlets, though, like the New Yorker and Politico above, and the Atlantic here, call it a catastrophe.
Let's move on to something slightly less contentious: economics. Mises (the institute, not the person) thinks our obsession with surveys is ruining economics. Looking at the forecasting performance, an analysis I found has this wonderful snippet:
To sum up, the three surveys considered are broadly comparable in terms of forecasting performance. Overall, an analysis of the accuracy of survey-based expectations suggests that, over the past decade, with the exception of 2009, professional forecasters systematically underestimated inflation.
To sum up: apart from the really large deviation, we did pretty well. I can only wish my job let me do this.
And the markets, predictions all in sight
Since they became a thing, prediction markets have gotten a bad rap. Prediction markets fail regularly. Nate Silver calls them competitive mansplaining, which is of course my favourite kind. They kept giving Trump a chance of winning, again and again, after court after court threw out his case. One of the FiveThirtyEight writeups says:
... betting markets mostly follow the polling averages
It's not just politics; they're also used for predicting who the next pope is going to be. Or whether you'll get Covid-19. Or whether you'll lose your job in a few months' time. But on these heterodox questions too, we find that the markets are generally not very efficient.
James Surowiecki wrote The Wisdom of Crowds, which touts the benefits of prediction markets, including their success in the 1988 presidential election. And there have been great papers on their potential. But IEM, PredictIt and others were wrong about Brexit and Trump and more!
This has been argued to be because of regulatory failures and lack of liquidity brought about by the government. Scott writes:
As usual, it’s the government’s fault: betting on prediction markets is technically gambling, which makes it mostly illegal (of course, you can still buy all the Gamestop stock you want).
So while there's hope that we will eventually get enough liquidity and insight and make these markets work, we're not there yet.
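For concreteness, the standard mechanics behind such markets can be sketched with Hanson's logarithmic market scoring rule (LMSR), where the market maker's quoted price doubles as the implied probability. A toy version (the parameter names are mine); notably, the liquidity parameter `b` is exactly the lever that the regulatory constraints above keep small:

```python
import math

def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Instantaneous price of a YES share under the LMSR market maker,
    which is also the market's implied probability of the event.
    b controls liquidity: the larger b is, the less each traded share
    moves the price."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

# With equal shares outstanding, the implied probability is 50%.
print(round(lmsr_price(0, 0), 2))   # 0.5
# Buying 50 YES shares pushes the implied probability up.
print(round(lmsr_price(50, 0), 2))  # 0.62
# In a thin market (small b), the same trade moves the price far more.
print(round(lmsr_price(50, 0, b=20), 2))  # 0.92
```

In a thin market a single motivated trader can swing the implied probability dramatically, which is one mechanical reason illiquid markets give strange answers.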
Then I decided to hear the experts debate
Debating is wonderful. Practically our whole system of government and society is based on the idea that adversarial argument helps us figure out the right thing to do. That's why our legal system never screws up. And why political debates are so essential to our making the correct decision.
You could even use it in the healthcare or education system. For instance, in this review of the use of debates in the health professions, the conclusion seems to be that debate can be a useful pedagogical tool for considering multiple perspectives on an issue.
Which means it's useful as a learning tool if we already know the answer, but perhaps not otherwise?
In the majority of studies (10 of 12 studies, 83.3%), debate has been deemed by the learners to be effective in facilitating the learning of new content and skills such as communication and critical thinking
Even when experts are the ones arguing in front of an expert audience, that audience still needs to be familiar with the topic, the data, and the logic in order to adjudicate. An interesting essay on the topic identifies three issues: differences in bias towards the affirmative vs the negative, differences in style that divide communities, and the encouragement of flip-flopping to win the debate rather than to cultivate practical wisdom.
One can hope. But I have epistemic learned helplessness in figuring out which is which.
In an octagon of words, though oft irate
Twitter wars! Believe it or not, someone has researched this. And there's this fantastic abstract, of a paper you should read:
Experts increasingly use social media to communicate with the wider public, prompted by the need to demonstrate impact and public engagement. While previous research on the use of social media by experts focused on single topics and performed sentiment analysis, we propose to extend the scope by investigating experts’ networks, topics and communicative styles. We perform social and semantic network as well language analysis of top tweeting scientists and economists. We find that economists tweet less, mention fewer people and have fewer Twitter conversations with members of the public than scientists. Scientists use a more informal and involved style and engage wider audiences through multimedia contents, while economists use more jargon, and tend to favour traditional written media. The results point to differences in experts’ communicative practices online, and we propose that disciplinary ways of ‘talking’ may pose obstacles to an effective public communication of expert knowledge.
Dylan Matthews from Vox thinks that arguing on Twitter is impossible. There are charts that show why debating on Twitter is pointless, for which I can only say: thank god someone made charts! Even when looking at something as esoteric as the implementation of a data retention directive by the EU in Norway, the answer seems to be mixed at best.
Finally I looked inside my mind again
Wherein you end up believing whatever the hell you wanted anyway.
In search of the truth there, a search not in vain