You seem to be comparing prediction markets to perfection, not to the real mechanisms that we now use today instead. People proposing prediction markets are suggesting they’d work better than the status quo. They are usually not comparing them to something like GJP.
I agree with you prediction markets are in many cases better than the status quo. I’m not comparing prediction markets to perfection but to their alternatives (like extremizing team forecasts). I’m also only arguing that prediction markets are overrated within EA, not in the wider world. I’d assume they’re underrated outside of libertarian-friendly circles.
All in all, for which problems prediction markets do better than which alternatives is an empirical question, which I state in the post:
How stringently the conditions for market efficiency need to be met for a market to actually be efficient is an empirical question. How efficient a prediction market needs to be to give better forecasts than the alternatives is another one.
Do you disagree that in the specific examples I have given (an office prediction market about the timeline of a project, an election prediction market) having a prediction market is worse than the alternatives?
It would be good if you could give concrete examples where you expect prediction markets to be the best alternative.
Prediction markets are a neat concept, and are often regarded highly in the EA sphere. I think they are often not the best alternative for a given problem and are insufficiently compared to those alternatives within EA. Perhaps this is because they are such a neat concept: "let's just do a prediction market!" sounds a lot more exciting than discussing a problem in a team and extremizing the team's forecast, even though the prediction market would be a lot more work.
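For readers unfamiliar with "extremizing a team's forecast", here is a minimal sketch of the standard log-odds approach: pool the individual probabilities by averaging their log-odds, then scale the result away from 50%. The pooling method and the value of `alpha` here are illustrative assumptions, not specifics from this thread.

```python
import math

def extremize(probs, alpha=2.5):
    """Pool a team's probability forecasts and extremize the result.

    Averages the forecasters' log-odds (equivalent to a geometric mean
    of odds), then multiplies by alpha > 1 to push the pooled forecast
    away from 0.5. alpha = 2.5 is an illustrative choice; in practice
    it would be tuned on past forecasting performance.
    """
    # Convert each probability to log-odds and average them.
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = sum(log_odds) / len(log_odds)
    # Extremize by scaling the pooled log-odds, then map back to [0, 1].
    return 1 / (1 + math.exp(-alpha * pooled))

# A team leaning 70% gets pushed further from 0.5 after extremizing:
print(extremize([0.7, 0.65, 0.8]))
```

The intuition behind scaling is that each team member holds only part of the available evidence, so the pooled forecast is systematically underconfident relative to what the combined evidence supports.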
Without some concrete estimate of how highly prediction markets are currently rated, it's hard to say whether they are over- or underrated. They are almost never used, however, so it is hard to believe they are overused.
The office prediction markets you outline might well be useful. They aren’t obviously bad.
I see huge potential for creating larger markets to estimate altruism effectiveness. We don't have any such markets at the moment, or even much effort to create them, so I find it hard to believe there's too much effort there.
For example, it would be great to create markets estimating advertised outcomes from proposed World Bank projects. That might well pressure the Bank into adopting projects more likely to achieve those outcomes.
I don’t think prediction markets are overused by EAs, I think they are advocated for too much (both for internal lower stakes situations as well as for solving problems in the world) when they are not the best alternative for a given problem.
One problem with prediction markets is that they are a hassle to implement, which is why people don't actually implement them. But since they are often the first alternative to the status quo suggested within EA, better solutions for lower-stakes situations like office forecasts, which might have a chance of actually getting implemented, don't even get discussed.
I don't think an office prediction market would be bad or useless once you ignore opportunity costs, just worse than the alternatives. To be fair, I'm somewhat more optimistic about implementing office prediction markets in large workplaces like Google, but not in the small EA orgs we have. In those, they would more likely take up a bunch of work without actually improving the situation much.
How large do you think a market needs to be to be efficient enough to be better than, say, asking Tetlock for the names of the top 30 superforecasters and hiring them to assess the problem? Given that political betting, despite being pretty large, had such big trouble as described in the post, I’m afraid an efficient enough prediction market would take a lot of work to implement.
I agree with you the added incentive structure would be nice, which might well make up for a lack of efficiency.
But again, I'm still optimistic about sufficiently large, stock-market-like prediction markets.
Political betting had a problem relative to perfection, not relative to the actual other alternatives used; it did better than them according to accuracy studies.
Yes there are overheads to using prediction markets, but those are mainly for having any system at all. Once you have a system, the overhead to adding a new question is much lower. Since you don’t have EA prediction markets now, you face those initial costs.
For forecasting in most organizations, hiring the top 30 superforecasters would go badly, as they don't know enough about that organization to be useful. Far better to have just a handful of participants from that organization.
I assumed you didn’t mean an internal World Bank prediction market, sorry about that. As I said above, I’m more optimistic about large workplaces employing prediction markets. I don’t know how many staff the World Bank employs. Do you agree now that prediction markets are an inferior solution to forecasting problems in small organizations? If yes, what do you think is the minimum staff size of a workplace for a prediction market to be efficient enough to be better than e.g. extremized team forecasting?
Could you link to the accuracy studies you cite that show that prediction markets do better than polling on predicting election results? I don’t see any obvious big differences on a quick Google search. The next obvious alternative is asking whether people like Nate Silver did better than prediction markets.
In the GJP, individual superforecasters sometimes did better than prediction markets, but superforecaster teams consistently did better. Putting Nate Silver and his kin in a room then seems to have a good chance of outperforming prediction markets.
You also don't state your opinion on the Intrade incident. Since I cannot see that prediction markets are obviously a lot better than polls or pundits (they didn't call the 2016 surprises either), I find it questionable whether enabling blatant attempts at voter manipulation through prediction markets is worth it. That is a big price to pay even if prediction markets did a bit better than polls or pundits.
Robin’s position is that manipulators can actually improve the accuracy of prediction markets, by increasing the rewards to informed trading. On this view, the possibility of market manipulation is not in itself a consideration that favors non-market alternatives, such as polls or pundits.
Interesting! In that argument I am trading accuracy off against outside-world manipulation, since accuracy isn't actually the main end goal I care about (that would be "good done in the world", for which better forecasts of the future would be pretty useful).
Feel free to ignore if you don’t think this is sufficiently important, but I don’t understand the contrast you draw between accuracy and outside world manipulation. I thought manipulation of prediction markets was concerning precisely because it reduces their accuracy. Assuming you accept Robin’s point that manipulation increases accuracy on balance, what’s your residual concern?
I think markets that have at least 20 people trading on any given question will on average be at least as good as any alternative.
Your comments about superforecasters suggest that you think what matters is hiring the right people. What I think matters is the incentives the people are given. Most organizations produce bad forecasts because they have goals which distract people from the truth. The biggest gains from prediction markets are due to replacing bad incentives with incentives that are closely connected with accurate predictions.
There are multiple ways to produce good incentives, and for internal office predictions, there’s usually something simpler than prediction markets that works well enough.