I really appreciate that you break down explanatory factors in the way you do.
I’m happy that this was useful for you!
I have a hard time making a mental model of their relative importance compared to each other. Do you think that such an exercise is feasible, and if so, do any of you have a conception of the relative explanatory strength of any factor when considered against the others?
Good question. We also had some trouble with this, as it’s difficult to observe the reasons many corporate prediction markets have failed to catch on. That being said, my best guess is that it varies substantially based on the corporation:
For an average company, the most important factor might be some combination of (2) and (4): many employees wouldn’t be that interested in predicting, so the cost of getting enough predictions might be high, and there also just isn’t that much appetite to change things up.
For an average EA org, the most important factors might be a combination of (1) and (2): the tech is too immature and writing + acting on good questions takes too much time such that it’s hard to find the sweet spot where the benefit is worth the cost. In particular, many EA orgs are quite small so fixed costs of setting up and maintaining the market as well as writing impactful questions can be significant.
This Twitter poll by Ozzie and the discussion under it is also interesting data here; my read is that the mapping between Ozzie’s options and our requirements is:
They’re undervalued: None of our requirements are substantial enough issues.
They’re mediocre: Some combination of our requirements (1), (2), and (3) make prediction markets not worth the cost.
Politically disruptive: Our requirement (4).
Other
(3) won the poll by quite a bit, but note that it was retweeted by Hanson, which could skew the voting pool (h/t Ozzie for mentioning this elsewhere).
Also, do you think that it is likely that the true explanation has nothing to do with any of these? In that case, how likely?
The most likely possibility I can think of is the one Ozzie included in his poll: prediction markets are undervalued for a reason other than political fears, and all/most of the companies made a mistake by discontinuing them. I’d say 15% for this, given that the evidence is fairly strong but there could be correlated reasons companies are missing out on the benefits. In particular, they could be underestimating some of the positive effects Ozzie mentioned in his comment above.
As for an unlisted explanation being the main one, it feels like we covered most of the ground here and that the main explanation is at least related to something we mentioned, but unknown unknowns are always a thing; I’d say 10% here.
So that gives me a quick gut estimate of 25%; would be curious to get others’ takes.
Is the main reason for lack of adoption of prediction markets in the corporate setting unrelated to any of the requirements mentioned in this post?
Thank you for this. This is all very helpful, and your explanation of giving differential weights to factors for average orgs versus EA orgs seems very sensible. The 25% for unknown unknowns is probably right too. It doesn’t seem unlikely to me that most folks at average orgs would fail to understand the value of prediction markets even if they turned out to be valuable (since it would require work to prove it).
It would really surprise me if the ‘main reason’ why there is a lack of prediction markets had nothing to do with anything mentioned in the post. I think all unknown unknowns together might jointly explain 25% of why prediction markets aren’t adopted, but the chance of any single unknown factor being the primary reason is, I think, quite slim.