I was glad to see the mention of the correlation=/=causality issue. To clarify, is the following similar to what you had in mind: if you are trying to analyze the effects of some kind of sanction or other foreign policy measure, simply asking “If the USFG does X (e.g., imposes some limited sanctions), what is the probability of Z (e.g., the sanctioned country ceases its activity)?” doesn’t necessarily tell you the effects of X if traders are thinking “If the USFG has the political will to do X, it’s likely they will also do Y (e.g., impose heavier sanctions)—and Y is what will actually cause Z.” Alternatively, it might be: “If the USFG resorts to doing X, it means policymakers likely don’t have the political will to also do Y—and so doing X is a sign that they won’t do Y, which is what will actually cause Z.”
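To make that first reading concrete, here is a minimal Monte Carlo sketch in Python (the setup, the numbers, and the simulate helper are all made up for this comment): X itself does nothing, but the political will that produces X also tends to produce Y, and only Y causes Z, so the conditional quantity a market would price ends up well above the causal effect of X.

```python
import random

# Toy model of the confounded-sanctions story; every number here is invented.
random.seed(0)
N = 100_000

def simulate(force_x=None):
    """One toy world: X (limited sanctions) has no causal effect on Z; only Y does."""
    will = random.random() < 0.3                 # political will for tougher action
    x = will if force_x is None else force_x     # limited sanctions
    y = will and (random.random() < 0.9)         # heavy sanctions, driven by the same will
    z = random.random() < (0.6 if y else 0.1)    # target backs down mostly because of Y
    return x, z

# What a conditional market estimates: P(Z | X was done)
samples = [simulate() for _ in range(N)]
p_z_given_x = sum(z for x, z in samples if x) / max(1, sum(x for x, _ in samples))

# The causal quantities: intervene on X directly (Y still depends only on political will)
p_z_do_x = sum(simulate(force_x=True)[1] for _ in range(N)) / N
p_z_do_not_x = sum(simulate(force_x=False)[1] for _ in range(N)) / N

print(f"P(Z | X observed) ~ {p_z_given_x:.2f}")   # ~0.55: X signals the will that also brings Y
print(f"P(Z | do(X))      ~ {p_z_do_x:.2f}")      # ~0.24: roughly the same as...
print(f"P(Z | do(not X))  ~ {p_z_do_not_x:.2f}")  # ~0.24: ...not doing X at all
```

The second reading (X signals a lack of will to do Y) just flips the sign of the correlation between X and Y; the gap between the conditional and causal numbers shows up either way.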
That being said, I noticed you also said, “You cannot simply buy a bunch of shares to get a policy accepted.”
While this may be a valid point in theory, it rests on a crucial assumption and I think it deserves a bit more skepticism than you give it. In reality, there can be market conditions where people are unsure whether a policy’s shares are being bought because of (a) insider trading (or any other form of smart trading), (b) dumb trading/trading mistakes (e.g., fat-finger trades), (c) price manipulation (e.g., pump-and-dump schemes or other attempts to induce dumb trading and profit from it), (d) price distortion (e.g., to do the thing you describe), etc. If a trader in the market is unsure whether the buying reflects something like (a) rather than any of the other options, then they may not be willing to risk millions of dollars to “correct” prices that might already be correct. This problem may be exacerbated if the markets are thin (although obvious attempts at manipulation, e.g., when there is no possibility of insider information, will probably actually improve these markets).
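To illustrate the dilemma facing a would-be “corrector,” here is a back-of-the-envelope sketch (the prices, credences, and the expected_edge_per_share helper are all invented): the more credence you put on the price move being informed, the thinner (and eventually negative) your expected edge from betting against it, while the cost of being wrong at scale stays the same.

```python
# A policy contract has been bid up from 0.50 to 0.70. If the buying was informed
# (insider or otherwise smart), ~0.70+ is about right and "correcting" it loses money
# in expectation; if it was noise or manipulation, 0.50 was about right and shorting pays.

def expected_edge_per_share(p_informed: float,
                            price: float = 0.70,
                            fair_if_informed: float = 0.75,
                            fair_if_noise: float = 0.50) -> float:
    """Expected profit per share from shorting at `price`, given one's credence that the move was informed."""
    expected_fair = p_informed * fair_if_informed + (1 - p_informed) * fair_if_noise
    return price - expected_fair

for p in (0.0, 0.3, 0.6, 0.9):
    print(f"P(move was informed) = {p:.1f} -> expected edge per share: {expected_edge_per_share(p):+.3f}")
```

With these made-up numbers the edge runs from +0.20 down through roughly +0.05 to slightly negative, which is why ambiguity about (a) vs. (b)–(d) can leave a distorted price uncorrected, especially in a thin market.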
Ultimately, I went through a very brief hype and disillusionment cycle with the idea of futarchy (although I am still very much a proponent of crowdsourced forecasting such as prediction markets), and that is one of the reasons. I definitely think there are areas where prediction markets for policy could theoretically be tried/beneficial, but I think any such attempt would have to be very carefully implemented.
I was glad to see the mention of the correlation=/=causality issue. To clarify, is the following similar to what you had in mind:...
I can’t speak for Lizka, but I do think the generalization of what you said (that whether decisionmakers make specific choices tells us hidden information about their motivations and/or abilities) is an important subset of the potential issues. However, there are others. A more general version is that decisionmakers’ choices may give us information about world-states that either they have access to and forecasters don’t, or (if this is a prediction about a future decision, and most predictions are about the future) that neither forecasters nor decisionmakers currently have access to, but that decisionmakers are expected to gain access to after the forecast is made but before they make the decision.
An example of the latter actually happened live in a (private) covid forecasting tournament last year.
I might be butchering details a little, but basically we were asked whether and by how much severe lockdowns in the future would result in reduced deaths. After some consideration, a reasonable fraction of forecasters, myself pretty loud among them, concluded that given the information available to us at the time, lockdowns were most likely correlated with increased deaths, since decisionmakers in that country would come to know things about the trajectory of covid that neither they nor we currently had access to, and the decisionmakers would most likely only issue lockdowns if it looked like the number of deaths would be sufficiently high.
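To make that dynamic concrete, here is a toy simulation (the numbers are invented and this is not the actual tournament question): lockdowns causally halve deaths here, but they are only issued when the not-yet-visible trajectory looks bad, so deaths end up higher conditional on a lockdown.

```python
import random

random.seed(0)
N = 100_000
locked, unlocked = [], []

for _ in range(N):
    severity = random.random()            # future trajectory; decisionmakers will see it, forecasters don't
    lockdown = severity > 0.7             # they only lock down when things look bad enough
    deaths = 1_000 * severity * (0.5 if lockdown else 1.0)   # the lockdown itself halves deaths
    (locked if lockdown else unlocked).append(deaths)

print(f"Mean deaths | lockdown:    {sum(locked) / len(locked):.0f}")     # ~425
print(f"Mean deaths | no lockdown: {sum(unlocked) / len(unlocked):.0f}") # ~350
```

Even though the lockdown is causally beneficial in this toy world, the conditional forecast correctly (but misleadingly, for anyone reading it causally) shows more deaths in the lockdown branch.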
This was a ~unpaid tournament where only pride and a desire to do good were on the line, so we were pretty open with our reasoning. I can imagine a much stronger desire (and incentive) to be circumspect about our reasons in a market setting.
Note that this is probably only relevant to advisory markets, and not futarchy.
lockdowns were most likely correlated with increased deaths [since...] decisionmakers would most likely only issue lockdowns if it looked like the number of deaths would be sufficiently high
That is a really interesting illustration of the general causality =/= conditionality issue I mention in the post (and which Harrison elaborates on), thank you!
I agree that the generalization (the fact that a decision is made reveals currently unavailable information) is the key point here, and Harrison’s interpretation seems like a reasonable and strong manifestation of the issue.
On buying a bunch of shares to get a policy accepted:
I agree that there would be scenarios in which manipulation by the wealthy is possible (and likely would happen), and you describe them well (thank you!). I mainly wanted to clarify or push back against a misconception I personally had when I initially read the paper, which was that this system basically grants decision-power entirely to those who are rich and motivated enough. The system is less silly than I initially thought, because the manipulation that is possible is much harder and less straightforward than what one might naively think (if one is new to markets, as I was).