I’m surprised the author doesn’t offer “the market decides” as a solution to this. The original idea of decision markets is that the actions are taken on the basis of market prices, and under this structure causality seems like it might be handled just fine.
I don’t have a rigorous proof of this—proof is difficult because decision theories tend to have vague “I know it when I see it” definitions to begin with. However, we can at least see that the original author’s objections are answered. Suppose that the market prices express expectations E[Y|a] and E[Y|b] for some outcome Y and some pair of options {a,b}. The author worries that whether a or b is chosen might be informed by some other events or states of the world which, if they transpired or were known to hold, would modify E[Y|a] and E[Y|b]. But if the choice is determined by the closing price of the market, then there obviously cannot be any events or states of the world that inform the choice but not the closing price.
It’s not obvious to me that such markets can successfully integrate all of the available information by the time it closes. The closing price can, in general, reflect information about the world not reflected by the price before closing, and the price before closing is trying to anticipate any such developments. It seems like it usually ought to converge, but I can imagine there might be some way to bake self-reference into the market such that it does not converge. Also, once it becomes clear that one choice is preferred to another, there’s little incentive to trade the loser, but this might not be much of a problem in practice. If convergence is a problem, adding some randomisation to the choice might help.
Also, there’s always a way to implement “the market decides”. Instead of asking P(Emissions|treaty), ask P(Emissions|market advises treaty), and make the market advice = the closing prices. This obviously won’t be very helpful if no-one is likely to listen to the market, but again the point is to think about markets that people are likely to listen to.
Certainly, if your decision is a deterministic function of the final market price, then there’s no way that any hidden information can influence the decision except via the market price. However, what I worry about here is: Do investors in such a market still have the right incentives—will they produce the same prices as they would if the decision was guaranteed to be made randomly? That might be true—and I can’t easily come up with a counterexample—but it would be nice to have an argument. Do I correctly understand your second to last paragraph as meaning that you aren’t sure of this either?
Just a quick note: I wrote a post on issues with Futarchy a while back. (I haven’t read it in months, have changed my mind on a number of things since then — some of which would probably affect my arguments in that post, and don’t know how much of it I’d still endorse, but am sharing it in case it’s useful.)
Yeah, I’m also not sure. The main issue I see is whether we can be confident that the loser is really worse without randomising (I don’t expect the price of the loser to accurately tell us how much worse it is).
Edit: turns out that this question has been partially addressed. They sort of say “no”, but I’m not convinced. In their derivation of incompatible incentives, they condition on the final price, but traders are actually going to be calculating an expectation over final prices. They discuss an example where, if the losing action is priced too low, there’s an incentive to manipulate the market to make that action win. However, the risk of such manipulation is also an incentive to correctly price the loser, even if you’re not planning on manipulation.
I think it definitely breaks if the consequences depend on the price and the choice (in which case, I think what goes wrong is that you can’t expect the market to converge to the right probability).
E.g. there is one box, and the market can open it (a) or not (b). The choice is 75% determined by the market prices and 25% determined by a coin flip. A “powerful computer” (jokes) has specified that the box will be filled with $1m if the market price favours b, and nothing otherwise.
So, whenever the market price favours b, a contracts are conditionally worth $1m (or whatever). However, b contracts are always seemingly worthless, and as soon as a contracts are worth more than b they’re also worthless. There might be an equilibrium where b gets bid up to $250k and a to $250k-ϵ, but this doesn’t reflect the conditional probability of outcomes, and in fact a is the better outcome in spite of its lower price.
I’m playing a bit loose with the payouts here, but I don’t think it matters.
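Keeping the payouts equally loose, the instability can be sketched numerically. Everything below is a hypothetical toy: the prize amount, the refund rule for contracts on the untaken action, and the "price favours" flag are my assumptions for illustration, not part of any real market design.

```python
PRIZE = 1_000_000  # the computer fills the box with this iff the closing price favours b

def conditional_values(price_favours):
    """Payout of each contract conditional on its action being taken
    (contracts on the untaken action are assumed to be refunded)."""
    box_filled = (price_favours == "b")
    value_a = PRIZE if box_filled else 0  # open the box, keep the contents
    value_b = 0                           # leave it closed: nothing gained
    return value_a, value_b

for favours in ("a", "b"):
    va, vb = conditional_values(favours)
    print(f"price favours {favours}: a is worth {va}, b is worth {vb}")
```

Whichever side the price favours, the induced values contradict the prices: if a is priced higher the box is empty and a is worth nothing, while if b is priced higher then a, the lower-priced contract, is the one conditionally worth $1m. No closing price agrees with the conditional values it itself creates, which is the sense in which the market can’t converge to the right probabilities here.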
OK I tried to think of an intuitive example where using the market could cause heavy distortions in incentives. Maybe something like the following works?
Suppose that we are betting on whether a certain coin will come up heads if flipped. If the market is above 50%, the coin is flipped and bets activate. If the market is below 50%, the coin is not flipped and bets are returned.
I happen to know that the coin either ALWAYS comes up heads or ALWAYS comes up tails. I don’t know which of these is true, but I think there is a 60% chance the coin is all-heads and a 40% chance the coin is all-tails.
Furthermore, I know that the coin will tomorrow be laser scanned and the laser scan published. This means that after tomorrow everyone will realize the coin is either all-heads or all-tails.
Ideally, I would have an incentive to buy if the market price is below 60% and sell if the market price is above 60% (to reveal my true probability).
But in reality, I would be happy to buy at a price up to 99%. Because: even at 99%, if the coin is revealed to be all-tails, the market price will collapse below 50% and my bet will simply be returned, so I only ever pay out of pocket in the all-heads world, where I win.
If I’ve got that right, then having the market make decisions could be very harmful. (Let me know if this example isn’t clear.)
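The arithmetic behind "happy to buy at 99%" can be checked directly. This is a toy calculation under the stated beliefs (60/40) and the stated rule that bets are returned when the market closes below 50%; the assumption doing the work is that an all-tails scan sends the price below 50%, triggering the refund.

```python
def expected_profit(buy_price, p_all_heads=0.6):
    """Expected profit of buying one heads contract (pays $1 on heads)."""
    p_all_tails = 1.0 - p_all_heads
    # All-heads world: the scan keeps the market above 50%, the coin is
    # flipped, it lands heads, and the contract pays $1.
    profit_heads_world = 1.0 - buy_price
    # All-tails world: the market collapses below 50% after the scan,
    # bets are returned, and the profit is exactly $0.
    profit_tails_world = 0.0
    return p_all_heads * profit_heads_world + p_all_tails * profit_tails_world

for price in (0.60, 0.90, 0.99):
    print(f"buy at {price:.2f}: EV = {expected_profit(price):+.4f}")
```

The expected profit is 0.6 × (1 − price), which is positive at every price below $1, so the trader is indeed incentivised to bid the market far above their true 60% belief.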
In this case, either the price finalises before the scan and no collapse happens, or it finalises after the scan and so the information from the scan is incorporated into the price at the time that it informs the decision. So as long as you aren’t jumping the gun and making decisions based on the non-final price, I don’t think this fails in a straightforward way.
But I’m really not sure whether or not it fails in a complicated way. Suppose if the market is below 50%, the coin is still flipped but tails pays out instead (I think this is closer to the standard scheme). Suppose both heads and tails are priced at 99c before the scan. After a scan that shows “heads”, there’s not much point in buying more heads. However, if you shorted tails and you’re able to push the price of heads very low, you’re in a great spot. The market ends up being on tails, and you profit from selling all those worthless tails contracts at 99c (even if you pay, say, 60c for them in order to keep the price above heads). In fact, if you’re sure the market will exploit this opportunity in the end, there is expected value in shorting both contracts before the scan—and this is true at any price! Obviously we shouldn’t be 100% confident it will be exploited. However, if both heads and tails trade for 99c prior to the scan then you lose essentially nothing by shorting both, and you therefore might expect many other people to also want to be short both and so the chance of manipulation might be high.
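A back-of-the-envelope ledger for the manipulation, using the same illustrative 99c and 60c figures, and assuming the manipulated market does end below 50% so that the flip (which lands heads, per the scan) zeroes every tails contract:

```python
def manipulation_profit(n_shorted_at_99c, n_bought_at_60c):
    """Rough manipulator P&L, post-scan, in the all-heads world."""
    # The scan revealed all-heads, so whenever the coin is flipped a tails
    # contract settles at $0. Shorting tails at 99c collects the premium;
    # tails bought at 60c (to hold its price above heads) expire worthless.
    revenue = 0.99 * n_shorted_at_99c
    cost = 0.60 * n_bought_at_60c
    return revenue - cost

print(manipulation_profit(1000, 500))  # premiums collected minus the cost of the supporting buys
```

As long as the supporting buys are a modest fraction of the short position, the manipulation is profitable at these illustrative prices, which is the incentive problem the comment is pointing at.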
A wild guess: I think both prices close to $1 might be a strong enough signal of the failure of a manipulation attempt to outweigh the incentive to try.
I was thinking about a scenario where the scan has not yet happened, but the scan will happen before prices finalize. In that scenario, at a minimum, you are not incentivized to bid according to your true beliefs of what will happen. Maybe that incentive disappears before the market finalizes in this particular case, but it’s still pretty disturbing—to me it suggests that the basic idea of having the market make the choices is a dangerous one. Even if the incentives problem were to go away before finalization in general (which is unclear to me) it still means that earlier market prices won’t work properly for sharing information.
In this case it would be best to use the language of counterfactuals (aka potential outcomes) instead of conditional expectations. In practice, the market would estimate E[Ya] and E[Yb] for the two random variables Ya and Yb, and you would choose the option with the highest estimated expected value. There is no need to put conditional probability into the mix at all, and it’s probably best not to, as there is no obvious probability to assign to the “events” a and b.
Phrasing it in terms of potential outcomes could definitely help the understanding of people who use that approach to talk about causal questions (which is a lot of people!). I’m not sure it helps anyone else, though. Under the standard account, the price of a prediction market is a probability estimate, modulo the assumption that utility = money (which is independent of the present concerns). So we’d need to offer an argument that conditional probability = marginal probability of potential outcomes.
Potential outcomes are IMO in the same boat as decision theories—their interpretation depends on a vague “I know it when I see it” type of notion. However we deal with that, I expect the story ends up sounding quite similar to my original comment—the critical step is that the choice does not depend on anything but the closing price.
a and b definitely are events, though! We could create a separate market on how the decision market resolves, and it will resolve unambiguously.
Potential outcomes are very clearly and rigorously defined as collections of separate random variables, there is no “I know it when I see it” involved. In this case you choose between two options, and there is no conditional probability involved unless you actually need it for estimation purposes.
Let’s put it a different way. You have the option of flipping two coins, either a blue coin or a red coin. You estimate the probability of heads as P(blue)=0.6 and P(red)=0.5. You base your choice of which coin to toss on which probability is larger. There is actually no need to use scary-sounding terms like counterfactuals or potential outcomes at all; you’re just choosing between random outcomes.
We could create a separate market on how the decision market resolves, and it will resolve unambiguously.
That sounds like an unnecessarily convoluted solution to a question we do not need to solve!
However we deal with that, I expect the story ends up sounding quite similar to my original comment—the critical step is that the choice does not depend on anything but the closing price.
Yes, I agree. And that’s why I believe we shouldn’t use conditional probabilities at all, as it makes this kind of confusion possible.
The definition of potential outcomes you refer to does not allow us to answer the question of whether they are estimated by the market in question.
The essence of all the decision theoretic paradoxes is that everyone agrees that we need some function options → distributions over consequences to make decisions, and no one knows how exactly to explain what that function is.
Here’s the context I’m thinking about. Say you have two options Ya and Yb. They have different true expected values E(Ya) and E(Yb). The market estimates their expectations as ^E(Ya) and ^E(Yb). And you (or the decider) choose the option with highest estimated expectation. (I was unclear about estimation vs. true values in my previous comment.)
Does this have something to do with your remarks here?
Also, there’s always a way to implement “the market decides”. Instead of asking P(Emissions|treaty), ask P(Emissions|market advises treaty), and make the market advice = the closing prices. This obviously won’t be very helpful if no-one is likely to listen to the market, but again the point is to think about markets that people are likely to listen to.
I believe we agree on the following: we evaluate the desirability of each available option by appealing to some map F:X→Δ(Y) from options X to distributions over consequences of interest Y.
We also both suggest that maybe F should be equal to the map x↦Q(x) where Q(x) is the closing price of the decision market conditional on x.
You say the price map is equal to the map x↦E(Yx), I say it is equal to x↦E(Y|x) where the expectation is with respect to some predictive subjective probability.
The reason why I make this claim is due to work like Chen 2009 that finds, under certain conditions, that prediction market prices reflect predictive subjective probabilities, and so I identify the prices with predictive subjective probabilities. I don’t think any similar work exists for potential outcomes.
The main question is: is the price map Q really the right function F? This is a famously controversial question, and causal decision theorists say: you shouldn’t always use subjective conditional probabilities to decide what to do (see Newcomb etc.) On the basis of results like Chen’s, I surmise that causal decision theorists at least don’t necessarily agree that the closing prices of the decision market define the right kind of function, because they are subjective conditional probabilities (but the devil might be in the details).
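For readers who want a concrete instance of the CDT worry, here is a minimal, purely illustrative simulation (not from the thread) in which a hidden state drives both the action a naive agent takes and the outcome, so the conditional expectation E(Y|a) and the interventional one E(Ya) come apart; this is the gap causal decision theorists worry about.

```python
import random

random.seed(0)

def sample_world():
    """Hidden state u influences both which action a naive agent picks
    and the outcome, so conditioning on the action smuggles in
    information about u."""
    u = random.random() < 0.5          # hidden state, 50/50
    x = "a" if u else "b"              # the naive agent's action tracks u
    y = 100 if u else 0                # outcome depends only on u, not on x
    return u, x, y

samples = [sample_world() for _ in range(100_000)]

# Conditional expectation: average Y among worlds where a was in fact chosen.
e_y_given_a = (sum(y for _, x, y in samples if x == "a")
               / sum(1 for _, x, _ in samples if x == "a"))
# Interventional expectation: Y is unchanged if we force a regardless of u,
# so E(Ya) is just the unconditional mean of Y.
e_y_do_a = sum(y for _, _, y in samples) / len(samples)

print(e_y_given_a)  # ~100: conditioning picks out exactly the u=True worlds
print(e_y_do_a)     # ~50:  intervening leaves the 50/50 mix over u intact
```

Forcing the action does nothing here (Y depends only on u), yet the conditional expectation makes a look worth 100; a decision market quoting E(Y|a) in this world would mislead the decider unless the mechanism somehow screens off u, which is exactly what the closing-price argument above is trying to establish.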
Now, let’s try to solve the problem with potential outcomes. Potential outcomes have two faces. On the one hand, Ya is a random variable equal to Y in the event X=a (this is called consistency). But there are many such variables—notably, Y itself. The other face of potential outcomes is that Ya should be interpreted as representing a counterfactual variable in the event X≠a. What potential outcomes don’t come with is a precise theory of counterfactual variables. This is the reason for my “I know it when I see it” comment.
Here’s how you could argue that E(Y|x)=E(Yx): first, suppose it’s a decision market with randomisation, so the choice X is jointly determined by the price and some physical random signal R. Assume YX ⊥ R; this is our “theory of counterfactual variables”. By determinism, we also have YX ⊥ X | R, Q, where Q is the closing price of the pair of markets. By contraction, YX ⊥ X | Q, and the result follows from consistency (apologies if this is overly brief). Then we also say F is the function x↦Yx and we conclude that indeed F(x)=E(Yx)=E(Y|x)=Q(x).
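Spelling the contraction step out (my reconstruction, with the independence assumption strengthened to hold conditionally on Q, which is what the graphoid contraction axiom needs):

```latex
\begin{align*}
  & Y_X \perp R \mid Q
    && \text{(randomisation; strengthened assumption)}\\
  & Y_X \perp X \mid R, Q
    && \text{($X$ is a deterministic function of $(R, Q)$)}\\
  \Rightarrow\; & Y_X \perp (R, X) \mid Q
    && \text{(contraction)}\\
  \Rightarrow\; & Y_X \perp X \mid Q
    && \text{(decomposition)}\\
  \Rightarrow\; & E(Y \mid X{=}x,\, Q) = E(Y_x \mid X{=}x,\, Q) = E(Y_x \mid Q)
    && \text{(consistency, then independence)}
\end{align*}
```

If the unconditional assumption YX ⊥ R is all we have, the first premise needs a further argument (for example, that R is generated independently of everything after Q closes), so this is a sketch rather than a complete proof.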
This is nicer than I expected, but I figure you could go through basically the same reasoning, but with F directly. Assume F ⊥ R and P(F(a)=E(Y|a)|a)=1 (and similarly for b). Then by similar reasoning we get P(F(a)=E(Y|a)|Q)=1 (noting that, by assumption, Q(a)=E(Y|a)).
You can bet not on probabilities but on utility, see e.g. the futarchy specification by Hanson (Lizka’s summary and notes).
Sorry, but I don’t understand what you mean.
I’ll get back to you