Newcomb’s Paradox Explained

There are two main decision theories, causal and evidential. They agree in ordinary cases but come apart in weird ones like Newcomb’s paradox, which is why the paradox teases out our competing intuitions about how to make decisions.

Source: Hilary Greaves on 80k podcast

Setup

There are two boxes in front of you: a transparent one that you can see contains £1,000, and an opaque one that contains either a million pounds or nothing. Your choice is to take either both boxes or just the opaque box.

The catch is that a very good predictor has already predicted your decision and has acted on that prediction as follows:

  • If they predict that you’re going to take both boxes, they put nothing in the opaque box.

  • If they predict you’re just going to take the opaque box, they put £1,000,000 in it.

So, what should you do?

There are two theories on how to approach this:

Causal decision theory

This notices that the predictor has made their prediction and then fucked off, so there’s no mechanism for your choice to interact with their prediction or to cause anything. Your options are therefore £1,000 plus possibly a million, or just the possibility of a million. Whatever the predictor did, taking both boxes leaves you £1,000 better off, so causal decision theorists choose both boxes.
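
To make the dominance argument concrete, here’s a minimal sketch in Python; the payouts come from the setup above, but the dictionary layout and names are mine.

```python
# A sketch of the causal theorist’s dominance argument, assuming the
# predictor’s action is fixed before you choose.
PAYOUTS = {
    # (opaque box filled?, your choice): payout in pounds
    (True, "both"): 1_001_000,
    (True, "opaque"): 1_000_000,
    (False, "both"): 1_000,
    (False, "opaque"): 0,
}

# Whatever the predictor already did, taking both boxes pays £1,000 more.
for filled in (True, False):
    assert PAYOUTS[(filled, "both")] == PAYOUTS[(filled, "opaque")] + 1_000
```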

Evidential decision theory

While your decision won’t cause anything, it’s evidence of what the predictor predicted, and so it’s evidence of what’s in the opaque box. You should choose just the opaque box: the predictor would anticipate this thought process, predict that you’ll pick just the opaque box, and put a million quid in it. If you try to be sneaky, reasoning that the predictor will expect you to pick just the opaque box while you actually take both, the predictor will anticipate that too and leave the opaque box empty.

In other words, if it’s overwhelmingly likely that the predictor will predict correctly, then if you choose just the opaque box, it’s overwhelmingly likely the predictor would predict this, so it’s overwhelmingly likely you’ll get the million. If you choose both boxes it’s overwhelmingly likely the predictor will predict this and make the opaque box empty, so it’s overwhelmingly likely you’ll just get the thousand pounds.
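
To make that arithmetic explicit, here’s a minimal sketch; the predictor accuracy p = 0.99 is my own illustrative number, not from the source.

```python
# Expected payouts under evidential reasoning, where the predictor
# is correct with probability p (an assumed parameter).
def expected_payout(choice: str, p: float = 0.99) -> float:
    if choice == "opaque":
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * 1_000_000
    # With probability p the predictor foresaw two-boxing and left it empty;
    # the transparent £1,000 is yours either way.
    return 1_000 + (1 - p) * 1_000_000

print(expected_payout("opaque"))  # 990000.0
print(expected_payout("both"))    # 11000.0
```

On these numbers, one-boxing comes out ahead whenever p is above roughly 0.5005.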

Another example: smoking lesions

In this example, the causal decision theorist’s intuition is much more obvious. Imagine that a smoking lesion causes two things: cancer and the disposition to smoke. (In this world, smoking doesn’t cause cancer, and smoking is pleasant.) The question is, in this world, should I smoke? Wanting to smoke is evidence of the lesion, but smoking itself doesn’t cause anything bad, and abstaining won’t remove a lesion I already have, so I should smoke (if I enjoy smoking).
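
To see the difference between conditioning and intervening, here’s a toy simulation; every number in it is a made-up illustration, not from the source.

```python
# Toy smoking-lesion world: a hidden lesion causes both cancer and the
# urge to smoke; smoking itself causes nothing.
import random

random.seed(0)
people = []
for _ in range(100_000):
    lesion = random.random() < 0.2
    cancer = lesion and random.random() < 0.8
    smokes = random.random() < (0.9 if lesion else 0.1)
    people.append((lesion, cancer, smokes))

def cancer_rate(group):
    group = list(group)
    return sum(cancer for _, cancer, _ in group) / len(group)

# Conditioning: smoking is *evidence* of the lesion, so smokers have more cancer...
print(cancer_rate(p for p in people if p[2]))      # ~0.55
print(cancer_rate(p for p in people if not p[2]))  # ~0.02
# ...but cancer was set by the lesion alone, so intervening on your own
# smoking changes nothing. That’s the causal theorist’s point.
```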

My intuition is evidential in the first case but causal in the second, so if anyone can explain the difference between the cases, that would be great. Thanks!