Austin from Manifold here, thanks for this writeup! I do actually agree with your core thesis (something like "prediction markets get a lot of hype relative to how much value they've added"), though I don't find your specific points as convincing.
1. Re: long-run reliability, this is something we think a lot about. Manifold has introduced various loan schemes that lower the cost of capital of investing in a long-term market, but I could imagine better market structures or derivatives that correctly incentivize people to bet on important long-term questions.
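To make the cost-of-capital point concrete, here's a toy sketch of how a daily-loan scheme frees up capital locked in a long-term market. The function name, the loan mechanics, and the 2%/day rate are all illustrative assumptions for this example, not Manifold's actual implementation or parameters:

```python
# Illustrative sketch (assumed mechanics, not Manifold's actual loan system):
# each day a loan returns a fixed fraction of your still-locked stake,
# so capital tied up in a long-term market is gradually freed.

def capital_still_locked(invested: float, daily_loan_rate: float, days: int) -> float:
    """Amount of the original stake still tied up after `days`,
    assuming each day's loan is a fixed fraction of the remaining locked amount."""
    locked = invested
    for _ in range(days):
        locked -= locked * daily_loan_rate
    return locked

# With a hypothetical 2%/day loan, most of a 1000-mana stake
# is freed within a few months of betting.
print(capital_still_locked(1000, 0.02, 90))
```

Under these assumptions, a bettor in a years-out market recovers most of their stake long before resolution, which is what makes betting on long-term questions less costly.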
2. The existence of free money is worth noting, and points to some limits of prediction markets:
- Markets which don't get enough attention will be less accurate.
- Markets without enough liquidity (fewer traders, less money traded) will be less accurate.
- The Efficient Market Hypothesis isn't universally true: markets exist on a spectrum of efficiency, and simply setting up a "market" doesn't magically make the prices/predictions good.
That said, "hey look, these markets are clearly wrong" paints prediction markets with an overly broad brush, and might lead you to miss out on markets that are actually valuable. By analogy, you wouldn't hold up a random antivaxxer's tweet as proof that all of Twitter is worthless; rather, you should recognize that the context and the people producing the tweet or market actually make a difference.
3. The asymmetric payout for being right in doom scenarios has been discussed, e.g. in this bet between Yudkowsky and Caplan. I think this is simultaneously true and also not super relevant in practice, since it turns out the motivation (at least on Manifold) is often closer to "I want to make this number correct, either for altruistic info-providing reasons or for egotistical 'show people I was right' reasons", rather than a fully rational, bankroll-maximizing cost-benefit analysis.
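The asymmetry can be made concrete with a toy expected-value calculation (my framing of the standard argument, not anyone's actual model): a YES share on a doom question pays out only in a world where the payout is useless to you, so a purely bankroll-maximizing trader won't buy YES even at prices far below their true belief:

```python
# Toy sketch of the asymmetric-payout problem for doom markets
# (illustrative assumptions: binary payout of 1 unit, and money
# being worth nothing to you in the doom world).

def ev_of_yes_bet(p_doom: float, price: float,
                  value_of_money_given_doom: float = 0.0) -> float:
    """Expected *usable* value of buying one YES share at `price`.
    The doom-world payout is discounted by how much money is worth
    in that world (0 = completely worthless)."""
    win = p_doom * 1.0 * value_of_money_given_doom   # payout you can't enjoy
    lose = (1 - p_doom) * 0.0                        # share expires worthless
    return win + lose - price

# Even someone who thinks doom is 40% likely loses EV buying YES at 5 cents:
print(ev_of_yes_bet(p_doom=0.4, price=0.05))  # -0.05
```

Which is exactly why the observed behavior (people betting to make the number correct, not to maximize bankroll) matters: the purely financial incentive points the wrong way here.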
FWIW, my strongest criticism of prediction markets might look something like: "prediction markets are very general-purpose tools, and there's been a lot of excitement about them from a technocratic perspective, but much less success at integrating them into better decision-making or eliciting novel information, especially relative to the counterfactual of, e.g., paying forecasters or just making random guesses."
Also, re: funding, obviously I'm super, super biased here, but I'd say something like "experimentation is good", "the amount of EA money that's been spent on prediction markets is quite low overall, in the single-digit millions", and "it's especially unclear where the money would be better spent".
Prediction pools (like Metaculus-style systems) are maybe the most promising solution I'm aware of in this space, and I think executing on these could also be quite valuable; if you have good proposals for getting better forecasts about the future, I think a lot of people would happily fund them!