I was glad to see the mention of the correlation =/= causality issue. To clarify, is the following similar to what you had in mind:...
I can’t speak for Lizka, but I do think the generalization of what you said (that whether decisionmakers make specific choices reveals hidden information about their motivations and/or abilities) is an important subset of the potential issues. However, there are other issues. A more general version is that decisionmakers’ choices may give us information about world-states that either (a) they have access to and forecasters don’t, or (b) if the prediction is about a future decision, and most predictions are about the future, neither forecasters nor decisionmakers currently have access to, but decisionmakers are expected to gain access to after the forecast is made and before the decision is made.
An example of the latter actually happened live in a (private) covid forecasting tournament last year.
I might be butchering details a little, but basically we were asked whether, and by how much, severe lockdowns in the future would result in reduced deaths. After some consideration, a reasonable fraction of forecasters, myself loudly among them, concluded that given the information available to us at the time, lockdowns were most likely correlated with increased deaths: decisionmakers in that country would come to know things about the trajectory of covid that neither they nor we currently had access to, and they would most likely only issue lockdowns if the projected number of deaths looked sufficiently high.
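The selection effect here can be made concrete with a toy simulation (all numbers below are made up for illustration, not taken from the tournament): even when lockdowns causally reduce deaths, conditioning on the decision to lock down selects for the bad world-states that triggered it, so the naive conditional forecast points the "wrong" way.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model: decisionmakers see the latent severity before deciding;
    forecasters do not. Lockdowns causally cut deaths, yet correlate with
    more deaths because they only happen in severe world-states."""
    deaths_with_lockdown = []
    deaths_without_lockdown = []
    for _ in range(n):
        severity = random.random()        # latent covid trajectory, unknown at forecast time
        lockdown = severity > 0.7         # decisionmakers act only when things look bad
        causal_factor = 0.6 if lockdown else 1.0  # lockdown causally reduces deaths by 40%
        deaths = severity * causal_factor
        if lockdown:
            deaths_with_lockdown.append(deaths)
        else:
            deaths_without_lockdown.append(deaths)
    mean_ld = sum(deaths_with_lockdown) / len(deaths_with_lockdown)
    mean_no_ld = sum(deaths_without_lockdown) / len(deaths_without_lockdown)
    return mean_ld, mean_no_ld

mean_ld, mean_no_ld = simulate()
# Despite the causal benefit, average deaths are higher conditional on lockdown.
print(mean_ld > mean_no_ld)  # True
```

Here the forecasters answering "will deaths be lower conditional on a lockdown?" correctly answer no, even though the lockdown itself helps, because the decision leaks information about severity.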
This was a ~unpaid tournament where only pride and a desire to do good were on the line, so we were pretty open with our reasoning. I can imagine a much stronger desire (and incentive) to be circumspect about one's reasons in a market setting.
Note that this is probably only relevant to advisory markets, and not futarchy.
lockdowns were most likely correlated with increased deaths [since...] decisionmakers would most likely only issue lockdowns if the projected number of deaths looked sufficiently high
That is a really interesting illustration of the general causality =/= conditionality issue I mention in the post (and which Harrison elaborates on), thank you!
I agree that the generalization (the fact that a decision is made reveals currently unavailable information) is the key point here, and Harrison's interpretation seems like a reasonable and strong manifestation of the issue.