Prediction Markets are Somewhat Overrated Within EA
Introduction
Prediction markets are a tool that has gained significant attention and traction in EA. Although I agree that they can be useful in some circumstances, and that a better alternative does not always exist, they nonetheless have flaws that I believe deserve more attention.
I don’t think most of this is new information, but as far as I know these issues have not been systematically discussed in a single place.
Why is this important to EA?
1. Being able to make accurate predictions is important to basically every EA cause area. Prediction markets have gotten a lot of attention as a tool to facilitate this, so if they are not actually effective, it may be necessary to look for other strategies.
2a. Some EA-aligned organizations, such as the FTX Future Fund, have placed emphasis on prediction markets as a potential EA project,[1] which could be a problem if prediction markets are less useful than widely believed.
2b. The Future Fund has also given multiple grants related to prediction markets.[2] (I did a cursory search of other major EA funders but found inconclusive information.) If prediction markets are less useful than widely believed in EA, it might be better to use that money elsewhere.
3. Overhyping prediction markets could theoretically be harmful to community epistemics. (I am the least confident in this point.)
Issue 1: Prediction markets become much less reliable in the long run
The one empirical study I found that directly addressed this question found that prediction markets are fairly well calibrated in the short term but not as well calibrated in the long term.[3] The study defined “long term” as “more than one month away,” but I expect (P = 0.85) that the problem would be at least as severe on the scale of years. Since many questions that are highly relevant to EA depend on the outcome of events years in the future,[4] this limits the usefulness of prediction markets to EA.
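To make “well calibrated” concrete, here is a minimal sketch of the kind of calibration check such a study performs: bucket resolved markets by price and compare the average price in each bucket to the actual resolution frequency. The function name and sample data are my own illustrations, not taken from the cited study.

```python
# Minimal calibration-check sketch. Input: (price, resolved_yes) pairs,
# where price is the market probability and resolved_yes is 0 or 1.
# All data below is illustrative, not real market data.

def calibration_buckets(predictions, n_buckets=10):
    """Group predictions by price and compare the mean price in each
    bucket to the actual resolution frequency there."""
    buckets = [[] for _ in range(n_buckets)]
    for price, outcome in predictions:
        idx = min(int(price * n_buckets), n_buckets - 1)
        buckets[idx].append((price, outcome))
    report = []
    for b in buckets:
        if b:
            mean_price = sum(p for p, _ in b) / len(b)
            freq = sum(o for _, o in b) / len(b)
            report.append((round(mean_price, 2), round(freq, 2), len(b)))
    return report

# A well-calibrated market has mean price ≈ resolution frequency in
# every bucket; the long-term finding is that this gap widens as the
# resolution date moves further out.
sample = [(0.8, 1), (0.8, 1), (0.8, 0), (0.2, 0), (0.2, 0), (0.2, 1)]
print(calibration_buckets(sample))  # → [(0.2, 0.33, 3), (0.8, 0.67, 3)]
```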
Issue 2: Prediction markets are bad at estimating the probability of very unlikely outcomes
There are many events where we might want to tell the difference between multiple fairly low probabilities. For example, we might want to answer the question, “Will there be another pandemic that kills more than 1 million people worldwide before 2030?” It matters a lot whether the probability of this happening is more like 5% or more like 0.05%, but the expected payout from correcting the price is tiny: even if someone thinks 5% is much too high, betting it down earns them only a few percent on capital that stays locked up until resolution, so there is not much incentive to do so. (In some cases, they may even lose money in real terms due to inflation!)
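A rough back-of-the-envelope sketch of this incentive problem (all numbers are illustrative assumptions): suppose a market prices the pandemic question at 5% but you believe the true probability is 0.05%. Betting NO only earns you the gap between those prices, and that gain is spread over years of locked-up capital.

```python
# Illustrative sketch: the return from correcting a mispriced
# low-probability market. Numbers are assumptions, not market data.

def expected_profit_no(market_prob, true_prob, stake):
    """Buy NO shares at price (1 - market_prob); each share pays $1
    if the event does not occur."""
    shares = stake / (1 - market_prob)
    return shares * (1 - true_prob) - stake

stake = 100.0
profit = expected_profit_no(market_prob=0.05, true_prob=0.0005, stake=stake)
print(round(profit, 2))  # → 5.21: about a 5% expected gain on capital

# If the market resolves in 8 years, the annualized return is tiny:
years = 8
annualized = (1 + profit / stake) ** (1 / years) - 1
print(round(annualized * 100, 2))  # → 0.64 (% per year), below typical inflation
```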
For some practical examples, take a look at the Manifold Free Money tag.[5] While some of the markets there do at least seem close to the true probability, some examples have significant discrepancies, such as “This market will resolve no,” which is currently trading at 5%.
A similar but distinct problem occurs in prediction markets based on unlikely conditionals. For example, let’s say I wanted to answer the question, “Conditional on Congress passing [some bill], how many degrees of warming will there be by 2100?” but the bill in question is very unlikely to pass. Because the market will most likely just return traders’ money, there is little incentive to correct the price even when it is inaccurate.
Issue 3: The incentive for being right on many important questions is often asymmetric
Questions related to existential risk are often important to our work. However, in many such cases, one side of the market will never be able to collect, even if they are correct. For example, if someone asks “Will an unaligned AGI kill all humans by 2050?” there is very little incentive to bet yes, even if you believe the market is underrating the probability of this happening. As a result, prediction markets will systematically tend to err on the side of humanity not going extinct.
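The asymmetry can be made explicit with a toy expected-value calculation (all probabilities and stakes are hypothetical): even if a YES bettor is right that the market badly underrates extinction risk, they can never collect, so their expected value is negative no matter what.

```python
# Toy sketch of the payout asymmetry in extinction markets.
# All probabilities and stakes are illustrative assumptions.

def yes_bettor_ev(market_prob, true_prob, stake, collectable_if_yes=False):
    """Expected value for a YES bettor when a YES resolution
    (extinction) means no one is alive to collect the payout."""
    shares = stake / market_prob          # YES shares cost market_prob each
    payout_if_yes = shares if collectable_if_yes else 0.0
    return true_prob * payout_if_yes - stake

# Market at 2%, bettor believes the true risk is 20%. If the payout
# were collectable, the bet would be hugely positive EV; since it is
# not, the bettor simply loses their stake in expectation:
print(round(yes_bettor_ev(0.02, 0.20, 100.0, collectable_if_yes=True), 2))  # → 900.0
print(round(yes_bettor_ev(0.02, 0.20, 100.0), 2))                           # → -100.0
```

This is why such markets systematically skew toward the “no extinction” side: the correction pressure only ever flows one way.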
A weaker form of this issue arises with long-term questions in general. If a prediction market asks about what will happen in 2100, most investors today will probably be dead by the time the market resolves, so there is not much incentive to bet in either direction. I suspect (with low confidence) that this is less serious, because unlike the extinction case it does not skew the market systematically in one direction.
Conclusion
Prediction markets can be a useful tool, but they have limitations, and it’s important to be aware of them and to not overstate their potential benefits. These issues include lack of long-term accuracy, overstated probabilities for unlikely events, and systematic incentive issues for certain topics.
Austin from Manifold here—thanks for this writeup! I do actually agree with your core thesis (something like “prediction markets get a lot of hype relative to how much value they’ve added”), though I think your specific points are not as convincing?
1 Re: long-run reliability, this is actually a thing we think a lot about. Manifold has introduced different loan schemes so that the cost of capital of investing in a long-term market is lower, but I could imagine better market structures or derivatives that correctly get people to bet on important long-term questions.
2 The existence of free money is worth noting, and points out some limits of prediction markets:
Markets which don’t get enough attention will be less accurate
Markets without enough liquidity (fewer traders, less $ traded) will be less accurate
Efficient Market Hypothesis isn’t unilaterally true—markets exist on a spectrum of efficiency, and simply setting up a “market” doesn’t magically make the prices/predictions good
That said, “hey look, these markets are clearly wrong” is painting prediction markets with an overly broad brush, and might lead you to miss out on markets that are actually valuable. By analogy, you wouldn’t hold up a random antivaxxer’s tweet as proof that all of Twitter is worthless; rather, you should recognize that the context and the people producing the tweet or market actually make a difference.
3 The asymmetric payout for being right in Doom scenarios has been discussed, eg in this bet between Yudkowsky and Caplan. I think this is simultaneously true and also not super relevant in practice, since the motivation (at least in Manifold markets) is often closer to “I want to make this number correct”, whether for altruistic info-providing reasons or for egotistical “show people I was right” reasons, than to a completely rational bankroll-maximizing cost-benefit analysis.
FWIW my strongest criticism of prediction markets might look something like “Prediction Markets are very general purpose tools, and there’s been a lot of excitement about them from a technocratic perspective, but much less success at integrating them into making better decisions or providing novel information, especially relative to the counterfactual of eg paying forecasters or just making random guesses”
Also re funding—obviously, super super biased here but I think something like “experimentation is good”, “the amount of EA money that’s been spent on prediction markets is quite low overall, in the single digit millions” and “it’s especially unclear where the money would be better spent”
Prediction pools (like Metaculus-style systems) are maybe the solution I’m most aware of in this space, and I think executing on these could also be quite valuable; if you have good proposals on how to get better forecasts about the future, I think a lot of people would happily fund those~
Wow, thank you, lots to unpack here and background information that I need to gain.