Prediction market does not imply causation

Link post

There have been some discussions about prediction markets on the EA Forum, and in general, prediction markets seem pretty popular in EA circles. So I thought people here might find this blog post by Dynomight interesting; I think it articulates an important issue we face when trying to interpret conditional prediction markets (the fact that conditionality does not necessarily imply causality), as well as some potential solutions. The post was written for a general audience, and, as noted at the top of the original post (not this link post), people more familiar with conditional prediction markets might want to skip to section 3 or even to section 6.

(Please note that I haven’t read the post carefully.)

Here are some excerpts (shared with permission):

Examples of conditionality not implying causality

2.

People worry about prediction markets for lots of reasons. Maybe someone will manipulate prices for political reasons. Maybe fees will distort prices. Maybe you’ll go Dr. Evil and bet that emissions will go up and then go emit a gazillion tons of CO₂ to ensure that you win. Valid concerns, but let’s ignore them and assume markets output “true” probabilities.

Now, what would explain the odds of emissions going up being higher with the treaty than without? The obvious explanation is that the market thinks the treaty will cause emissions to go up:

Treaty becomes law
↓
Emissions go up

Totally plausible. But maybe the market thinks something else. Maybe the treaty does nothing, but voters believe it does something, so emissions going up would cause the treaty to be signed:

Emissions go up
↓
Climate does scary things
↓
People freak out
↓
People demand treaty
↓
Treaty becomes law

In this chain of events, the treaty acts as a kind of “emissions have gone up” award. Even though signing the treaty has no effect on emissions, the fact that it became law increases the odds that emissions have increased. You could still get the same probabilities as in a world where the treaty caused increased emissions.
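To see how this can happen numerically, here is a quick illustration with made-up numbers (mine, not the post's). Suppose the treaty has zero effect on emissions, but rising emissions make the treaty more likely:

P[emissions up] = 50%, P[treaty | emissions up] = 90%, P[treaty | emissions not up] = 30%.

Then by Bayes' rule,

P[emissions up | treaty] = (0.9 × 0.5) / (0.9 × 0.5 + 0.3 × 0.5) = 75%,

P[emissions up | no treaty] = (0.1 × 0.5) / (0.1 × 0.5 + 0.7 × 0.5) = 12.5%.

The conditional market prices would show a large gap even though the treaty does nothing.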

3.

Here’s a market that actually exists (albeit with internet points instead of money): “Conditional on NATO declaring a No-Fly Zone anywhere in Ukraine, will a nuclear weapon be launched in combat in 2022?”

This market currently says

P[launch | declare] = 18%,

P[launch | don’t declare] = 5.4%.

Technically there is no market for P[launch | don’t declare], but you can find an implied price using (1) the market for P[launch], (2) the market for P[declare], and (3) the ᴘᴏᴡᴇʀ ᴏꜰ ᴍᴀᴛʜ. [...]
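(The ᴘᴏᴡᴇʀ ᴏꜰ ᴍᴀᴛʜ here is just the law of total probability; spelling it out is my addition, not an excerpt. Since

P[launch] = P[launch | declare] · P[declare] + P[launch | don’t declare] · (1 − P[declare]),

rearranging gives

P[launch | don’t declare] = (P[launch] − P[launch | declare] · P[declare]) / (1 − P[declare]),

and every quantity on the right-hand side has a market price.)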

So launch is 3.3x more likely given declare than given don’t declare. The obvious way of looking at this would be that NATO declaring a no-fly zone would increase the odds of a nuclear launch:

NATO declares no-fly zone
↓
NATO and Russian planes clash over Ukraine
↓
Conflict escalates
↓
Nuclear weapon launched

That’s probably the right interpretation. But not necessarily. For example, do we really know the mettle of NATO leaders? It could be that declaring a no-fly zone has no direct impact on the odds of a launch, but the fact that NATO declares one reveals that NATO leaders have aggressive temperaments and are thus more likely to take other aggressive actions (note the first arrow points up):

NATO declares no-fly zone
↑
NATO leaders are aggressive
↓
NATO sends NATO tanks to Ukraine
↓
NATO and Russian tanks clash in Ukraine
↓
Nuclear weapon launched

This could also explain the current probabilities.

[...]

A mid-post summary of the argument (up to that point)

So far, this article has made this argument:

  1. You can use conditional prediction markets to get the probability of outcome B given different actions A.

  2. But just because the conditional probability of B differs across values of A doesn’t mean that doing A changes the probability of B.

  3. For that to be true, you need a particular causal structure for the variables being studied: no causal path from B to A, and no variable C with causal paths to both A and B.

  4. You can guarantee the right causal structure by randomizing the choice of A. If you do that, then conditional prediction market prices do imply causation.

Basically: If you run a prediction market to predict correlations, you get correlations. If you run a prediction market to predict the outcome of a randomized trial, you get causality. But to incentivize people to predict the outcomes of a randomized trial you have to actually run a randomized trial, and this is costly.
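To make the correlations-versus-causality point concrete, here is a minimal simulation sketch (my own illustration, not code from the post; the probabilities are arbitrary). A hidden variable C drives both the action A and the outcome B, while A itself has no effect on B:

```python
import random

random.seed(0)

def simulate(randomize_a, n=100_000):
    """Estimate P(B=1 | A=1) and P(B=1 | A=0) in a world where
    A has no causal effect on B, but a hidden C drives both."""
    counts = {0: [0, 0], 1: [0, 0]}  # A-value -> [trials, successes]
    for _ in range(n):
        c = random.random() < 0.5                      # hidden confounder
        if randomize_a:
            a = random.random() < 0.5                  # A assigned by coin flip
        else:
            a = random.random() < (0.8 if c else 0.2)  # C pushes A up
        b = random.random() < (0.7 if c else 0.1)      # C, not A, drives B
        counts[int(a)][0] += 1
        counts[int(a)][1] += int(b)
    return {a: round(s / t, 3) for a, (t, s) in counts.items()}

print("observational:", simulate(randomize_a=False))  # conditioning only
print("randomized:   ", simulate(randomize_a=True))   # randomized trial
```

In the observational run, the two conditional frequencies differ sharply (about 0.58 vs. 0.22), mimicking a market that prices in the confounder; once A is assigned by coin flip, both converge to about 0.40, correctly revealing that A does nothing.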

Some potential solutions to the problem

  1. “Get the arrows right.” Carefully choose which markets to run so that the causal structure is OK: no reverse causality, no confounders, and a safe conclusion (explained in the post).

  2. “Commit to randomization”: “randomize decisions sometimes, at random” (explained in the post). (There’s also a sketch of a proposal for getting lots of information about the world at the cost of running a few very expensive RCTs.)

  3. “Bet re-weighting” (explained in the post)

  4. “Natural experiments” (explained in the post)

  5. “The arrow of time” (explained in the post, resolves reverse causality)

  6. “Controlled conditional prediction markets”: adding control variables for all the relevant possible confounders (explained in the post).
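(For what it’s worth, item 6 amounts to the standard back-door adjustment from causal inference; this gloss is mine, not the post’s. If C captures all the confounders, then

P[B | do(A = a)] = Σ_c P[B | A = a, C = c] · P[C = c],

so markets pricing P[B | A, C] for each value of C can in principle be combined into a causal estimate, provided no confounder is missed.)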