Against prediction markets

Within the EA sphere, prediction markets have often been championed as a promising tool for forecasting the future. Improved forecasting has been discussed many times as a cause area that could help humanity make better judgements and generally improve institutional decision making.

In this post, I will argue that prediction markets are overrated within EA, both for high-stakes forecasting and in more casual settings.

A prediction market is a market created for trading on the outcomes of events. The market price of a contract is supposed to indicate the probability of an event, so a contract can trade anywhere between 0 and 100%. The case for prediction markets rests on the fact that they are markets, and markets should be efficient.
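To make the pricing concrete, here is a minimal sketch (the prices and probabilities are made up, and the function names are my own) of how a binary contract's price maps to an implied probability, and where the incentive to correct prices comes from:

```python
def implied_probability(price: float, payout: float = 1.0) -> float:
    """Market-implied probability of a contract paying `payout` if the event occurs."""
    return price / payout

def expected_profit(price: float, believed_prob: float, payout: float = 1.0) -> float:
    """Expected profit per contract, given your own probability estimate."""
    return believed_prob * payout - price

# A contract paying $1 if the event happens trades at $0.63,
# so the market-implied probability is 63%.
market_prob = implied_probability(0.63)

# If you believe the true probability is 70%, buying has positive expectation.
# This expected profit is the incentive that is supposed to keep prices accurate.
edge = expected_profit(0.63, 0.70)
```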

I. Prediction markets didn’t do best in Tetlock’s experiments

Prof. Tetlock led the Good Judgement Project, a research collaboration that participated in a forecasting tournament run by IARPA (a US intelligence research agency). He experimented with various setups of participants to see which would lead to the best forecasts. The tournament focussed on geopolitical questions with time frames of less than a couple of years.

Tetlock then wrote up his findings in Superforecasting. His book is mostly about the individuals who consistently did exceptionally well at IARPA’s tournament, whom he calls Superforecasters, and about what made them special.

But Tetlock also writes about how well his various experimental setups did at the tournament; among the methods tried were prediction markets. Unfortunately, while prediction markets beat the ‘wisdom of crowds’ (the average of people’s guesses), as well as most individual participants and teams of participants, they were not the setup that did best.

There were individual superforecasters who did better than prediction markets, and teams of superforecasters working together did reliably better than prediction markets. Teams of ordinary forecasters also often beat prediction markets if their results were extremized, which means taking the team’s stated probability and nudging it closer to either zero or one hundred percent. The reason extremizing works so well is that information in teams of ordinary forecasters is often shared incompletely. If all participants shared their information fully with each other, each individual’s guess would become more confident, and thereby their average judgement more extreme; extremizing simulates this. Note that extremizing works much less well if people already hold similar information.
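One common form of extremizing in the forecasting literature transforms a probability p into p^a / (p^a + (1−p)^a) for some exponent a > 1. A minimal sketch (the exponent value here is illustrative, not the tournament's actual tuned parameter):

```python
def extremize(p: float, a: float = 2.5) -> float:
    """Push a probability p away from 0.5, toward 0 or 1.

    a > 1 sets how aggressively to extremize; a = 1 leaves p unchanged.
    """
    return p**a / (p**a + (1 - p)**a)

# A team forecast of 0.7 becomes a more confident ~0.89,
# while a maximally uncertain 0.5 stays at 0.5.
confident = extremize(0.7)
unchanged = extremize(0.5)
```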

However, Tetlock grants that the experiments with prediction markets could be improved (Superforecasting, page 207):

I can already hear the protests from my colleagues in finance that the only reason the superteams beat the prediction markets was that our markets lacked liquidity: real money wasn’t at stake and we didn’t have a critical mass of traders. They may be right. It is a testable idea, and worth testing. It’s also important to recognize that while superteams beat prediction markets, prediction markets did a pretty good job of forecasting complex global events.

I’d also be curious about a prediction market in which only superforecasters trade.

This brings me to the second part of my argument—namely, that the suboptimal conditions for the prediction market in the Good Judgement Project would often also apply in the real world when prediction markets get tested.

II. Prediction markets often won’t be efficient (like so many other markets in the real world)

It is important to recap here what market efficiency actually means: a market is efficient if, at any given time, prices fully reflect all available information about a particular stock and/or market.

The idea behind the efficiency of markets is that if there’s information that is not yet priced in, traders have an immediate financial incentive to correct the market price, which then leads to prices incorporating this information.

This mechanism working well rests on various conditions being met. In the quote above, Tetlock mentions two of them: the market has to be large and liquid.

Your small office prediction market won’t do well because there isn’t enough money at stake and there aren’t enough people trading, so correcting prices isn’t worth the opportunity cost. The more money you can make in your market by correcting prices, the more likely it is that the opportunity cost will be worth it. If you run a prediction market on when Mark will finish project X and you think the current estimate is hopelessly optimistic, is it really worth aggravating Mark to earn a few bucks? Of course not.

The same argument applies to transaction costs—they have to be low, otherwise correcting prices isn’t worth the cost. Your office presumably has other priorities than making sure its workers can trade easily.

That said, your office prediction market will still likely do better than taking the average of the guesses of your office workers.

There’s another problem peculiar to prediction markets that run predictions on the more distant future. If you notice in 2021 that a prediction on a future election is only a couple of percent off, it’s not worth investing your savings in correcting the price. You’re better off making money in index funds, which return quite a bit more than a couple of percent over three years!
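As a rough, illustrative calculation (all numbers are made up), correcting a 3% mispricing on a contract that only resolves in three years yields an annualized return well below a typical index fund:

```python
def annualized_return(total_return: float, years: float) -> float:
    """Convert a total return earned over `years` into an annualized rate."""
    return (1 + total_return) ** (1 / years) - 1

# Correcting a 3% mispricing that pays off in 3 years: roughly 1% per year,
# and your capital is locked up in the position the whole time.
mispricing_rate = annualized_return(0.03, 3)

# Compare with a (hypothetical) ~7% per year from an index fund.
index_fund_rate = 0.07
```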

But what about other, bigger markets making actually relevant predictions about the future? Surely those will be more efficient?

I’m skeptical. How stringently the conditions for market efficiency need to be met for a market to actually be efficient is an empirical question. How efficient a prediction market needs to be to give better forecasts than the alternatives is another one.

For example, despite high liquidity, political prediction markets aren’t that efficient. In the 2012 US presidential election, there was a big arbitrage opportunity between Intrade (a now-defunct prediction market) and other prediction markets betting on the outcome of the presidential election. A single trader on Intrade lost millions by continuously betting on Romney. The most likely explanation for this trader’s behaviour is that they were trying to distort published predictions to manipulate the election outcome, since published polls and prediction-market odds tend to influence actual voter behaviour.
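A stylized arbitrage calculation (the prices are made up, not the historical Intrade quotes) shows why such a gap should normally get traded away in an efficient market:

```python
def arbitrage_profit(price_yes_a: float, price_no_b: float, payout: float = 1.0) -> float:
    """Guaranteed profit from buying YES on market A and NO on market B.

    Exactly one of the two contracts pays out, so if the combined price is
    below the payout, the profit is locked in regardless of the outcome.
    Ignores fees and counterparty risk.
    """
    cost = price_yes_a + price_no_b
    return payout - cost if cost < payout else 0.0

# Stylized: market A prices "Obama wins" (YES) at $0.65 while market B
# prices "Romney wins" (equivalent to Obama-NO) at $0.25.
# Buying both locks in roughly $0.10 per contract pair, risk-free.
locked_in = arbitrage_profit(0.65, 0.25)
```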

From this trader’s perspective, investing millions to increase Romney’s chance of winning the election may have been a good investment. But the example illustrates the problems that can arise when decisions are made based on a prediction market’s prices.

That even relatively liquid markets like political betting aren’t that efficient shows how difficult it is to fulfill the conditions for market efficiency. I’m still optimistic about the prospect of sufficiently large, stock-market-like prediction markets in the future. But all in all, I think the merit of prediction markets is overblown.