Yeah, I’m also not sure. The main issue I see is whether we can be confident that the loser is really worse without randomising (I don’t expect the price of the loser to accurately tell us how much worse it is).
Edit: turns out that this question has been partially addressed. They sort of say “no”, but I’m not convinced. In their derivation of incompatible incentives, they condition on the final price, but traders are actually going to be calculating an expectation over final prices. They discuss an example where, if the losing action is priced too low, there’s an incentive to manipulate the market to make that action win. However, the risk of such manipulation is also an incentive to correctly price the loser, even if you’re not planning on manipulation.
I think it definitely breaks if the consequences depend on the price as well as the choice (in which case, I think what goes wrong is that you can’t expect the market to converge to the right probability).
E.g. there is one box, and the market can open it (a) or not (b). The choice is 75% determined by the market prices and 25% determined by a coin flip. A “powerful computer” (jokes) has specified that the box will be filled with $1m if the market price favours b, and nothing otherwise.
So, whenever the market price favours b, a contracts are conditionally worth $1m (or whatever). However, b contracts are always seemingly worthless, and as soon as a contracts are priced above b, the a contracts are worthless too (since then the box is left empty). There might be an equilibrium where b gets bid up to $250k and a to $250k-ϵ, but this doesn’t reflect the conditional probability of outcomes, and in fact a is the better outcome in spite of its lower price.
I’m playing a bit loose with the payouts here, but I don’t think it matters.
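To make the incentive structure concrete, here is a minimal sketch of the box example. The settlement rules are my own simplification of the example above (contracts on the untaken action are refunded, and a contract on the taken action pays whatever value that action obtains), so treat the numbers as illustrative rather than a precise spec.

```python
# Minimal sketch of the box example above. Assumed settlement rules: contracts
# on the action that is NOT taken are refunded (so we can ignore them), and a
# contract on the action that IS taken pays whatever value that action obtains.
# The "powerful computer" fills the box with $1m iff the final price favours b.

BOX_VALUE = 1_000_000

def payout_if_taken(action: str, market_favours: str) -> float:
    """Payout of a contract on `action`, given that `action` is actually taken
    (via the 75% market rule or the 25% coin flip) and which action the final
    market price favours."""
    box_is_full = (market_favours == "b")      # box filled iff price favours b
    if action == "a":                          # a = open the box
        return BOX_VALUE if box_is_full else 0.0
    return 0.0                                 # b = leave the box shut

for favoured in ("a", "b"):
    for action in ("a", "b"):
        v = payout_if_taken(action, favoured)
        print(f"price favours {favoured}: a contract on {action} pays ${v:,.0f} if {action} is taken")

# When the price favours a, both contracts pay $0; when it favours b, the a
# contract pays $1m and the b contract pays $0. So a contracts are valuable
# exactly when the market does NOT favour them, and no price vector is
# self-consistent.
```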
OK, I tried to think of an intuitive example where using the market could cause heavy distortions in incentives. Maybe something like the following works?
Suppose that we are betting on whether a certain coin will come up heads if flipped. If the market is above 50%, the coin is flipped and the bets activate. If the market is below 50%, the coin is not flipped and all bets are returned.
I happen to know that the coin either ALWAYS comes up heads or ALWAYS comes up tails. I don’t know which of these is true, but I think there is a 60% chance the coin is all-heads and a 40% chance the coin is all-tails.
Furthermore, I know that the coin will be laser-scanned tomorrow and the scan published. This means that after tomorrow everyone will know whether the coin is all-heads or all-tails.
Ideally, I would have an incentive to buy if the market price is below 60% and sell if the market price is above 60% (to reveal my true probability).
But in reality, I would be happy to buy at any price up to 99%, because even at 99%, if the coin is revealed to be all-tails, the market price will collapse below 50%, the flip will be called off, and my bets will simply be returned.
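For concreteness, here is a quick expected-value check of that argument (a minimal sketch; the 60/40 credences and the assumption that an all-tails scan drags the price below 50% and voids the bets are taken straight from the example):

```python
# Expected profit per $1 heads contract, under the rules of the coin example:
# if the coin is all-heads the post-scan price stays above 50%, the coin is
# flipped, and the contract pays $1; if it is all-tails the price collapses
# below 50%, the flip is called off, and the bet is refunded (a wash).

P_ALL_HEADS = 0.6   # my credence that the coin always lands heads
P_ALL_TAILS = 0.4   # my credence that it always lands tails

def ev_of_buying_heads(price: float) -> float:
    win_if_heads = 1.0 - price   # pay `price`, collect $1
    wash_if_tails = 0.0          # bet returned, no gain or loss
    return P_ALL_HEADS * win_if_heads + P_ALL_TAILS * wash_if_tails

for price in (0.60, 0.90, 0.99):
    print(f"buy heads at {price:.2f}: EV = {ev_of_buying_heads(price):+.4f}")

# Every price below $1 has positive expected value, so nothing pushes the
# price back down toward my true 60% credence.
```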
If I’ve got that right, then having the market make decisions could be very harmful. (Let me know if this example isn’t clear.)
In this case, either the price finalises before the scan and no collapse happens, or it finalises after the scan and so the information from the scan is incorporated into the price at the time that it informs the decision. So as long as you aren’t jumping the gun and making decisions based on the non-final price, I don’t think this fails in a straightforward way.
But I’m really not sure whether or not it fails in a complicated way. Suppose that if the market is below 50%, the coin is still flipped but tails pays out instead (I think this is closer to the standard scheme), and suppose both heads and tails are priced at 99c before the scan. After a scan that shows “heads”, there’s not much point in buying more heads. However, if you shorted tails and you’re able to push the price of heads very low, you’re in a great spot: the market ends up deciding on tails, and you profit from having sold all those worthless tails contracts at 99c (even if you pay, say, 60c for some of them in order to keep the tails price above heads).

In fact, if you’re sure the market will exploit this opportunity in the end, there is expected value in shorting both contracts before the scan, and this is true at any price! Obviously we shouldn’t be 100% confident it will be exploited. However, if both heads and tails trade for 99c prior to the scan, then you lose essentially nothing by shorting both; you might therefore expect many other people to want to be short both as well, and so the chance of manipulation might be high.
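Here is a rough payoff sketch of that “short both sides before the scan” trade. The settlement rules are my reading of the scheme above, not a precise spec: whichever side the final price favours settles against the flip, and contracts on the other side are refunded.

```python
# Payoff from shorting one heads contract and one tails contract before the
# scan, under my assumed settlement rules: the side the final price favours
# settles against the coin flip; the other side's bets are refunded, so a
# short position there simply unwinds at cost.

def short_both_payoff(p_heads: float, p_tails: float,
                      scan_result: str, manipulated: bool) -> float:
    if manipulated:
        # manipulation pushes the decision onto the side the scan ruled out
        deciding_side = "tails" if scan_result == "heads" else "heads"
    else:
        deciding_side = scan_result            # honest market tracks the scan
    profit = 0.0
    for side, price in (("heads", p_heads), ("tails", p_tails)):
        if side != deciding_side:
            continue                           # refunded side: no profit or loss
        settles_at = 1.0 if side == scan_result else 0.0
        profit += price - settles_at           # short: collect price, owe settlement
    return profit

for scan in ("heads", "tails"):
    for manip in (False, True):
        print(f"scan={scan:5s} manipulated={manip!s:5s}: "
              f"profit = {short_both_payoff(0.99, 0.99, scan, manip):+.2f}")

# With reliable manipulation, shorting both at 99c clears ~99c whichever way
# the scan goes; without it, the short on the deciding side loses only ~1c.
```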
A wild guess: I think both prices being close to $1 might be a strong enough signal that a manipulation attempt would fail to outweigh the incentive to try one.
I was thinking about a scenario where the scan has not yet happened, but will happen before prices finalize. In that scenario, at a minimum, you are not incentivized to bid according to your true beliefs about what will happen. Maybe that incentive disappears before the market finalizes in this particular case, but it’s still pretty disturbing: to me it suggests that the basic idea of having the market make the choices is a dangerous one. Even if the incentive problem were to go away before finalization in general (which is unclear to me), it still means that earlier market prices won’t work properly for sharing information.