Newcomb’s Paradox Explained
There are two decision theories, causal and evidential, which often agree in normal cases but disagree in weird ones such as Newcomb’s paradox. The paradox teases out our competing intuitions about how to make decisions.
Source: Hilary Greaves on 80k podcast
Setup
There are two boxes in front of you: a transparent one that you can see contains £1,000, and an opaque box that contains either a million pounds or nothing. Your choice is to take either both boxes or just the opaque box.
The catch is that a very good predictor has already predicted your decision and has acted on their prediction as follows (there’s a small sketch of the payoffs after the two cases):
If they predict that you’re going to take both boxes, they put nothing in the opaque box.
If they predict you’re just going to take the opaque box, they put 1 million pounds in it.
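To make the payoffs concrete, here is a minimal sketch in Python; the constants and the function name are just illustrative, not part of the original problem statement.

```python
# Payoff (in pounds) for each combination of your choice and the predictor's prediction.
# The opaque box contains £1,000,000 only if the predictor predicted "one box".
TRANSPARENT = 1_000
MILLION = 1_000_000

def payoff(your_choice: str, prediction: str) -> int:
    """Both arguments are either 'one box' or 'both boxes'."""
    opaque = MILLION if prediction == "one box" else 0
    return opaque if your_choice == "one box" else opaque + TRANSPARENT

for prediction in ("one box", "both boxes"):
    for choice in ("one box", "both boxes"):
        print(f"predicted {prediction!r:<13} you take {choice!r:<13} -> £{payoff(choice, prediction):,}")
```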
So, what should you do?
There are two theories on how to approach this:
Causal decision theory
This notices that the predictor has already made their prediction and then fucked off, so there’s no mechanism for your choice to interact with their prediction or to cause anything. Your options are therefore: £1,000 plus possibly a million; or just the possibility of a million. You should clearly take the former, so causal decision theorists would choose both boxes.
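A minimal sketch of the dominance reasoning, assuming (as causal decision theory does) that the opaque box’s contents are fixed before you choose:

```python
# Causal dominance: whatever the opaque box already contains, taking both boxes
# pays £1,000 more than taking only the opaque box.
TRANSPARENT = 1_000

for opaque_contents in (0, 1_000_000):
    one_box = opaque_contents
    both_boxes = opaque_contents + TRANSPARENT
    print(f"opaque box holds £{opaque_contents:,}: one box -> £{one_box:,}, both boxes -> £{both_boxes:,}")
```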
Evidential decision theory
While your decision won’t cause anything, it is evidence of what the predictor predicted, and so it’s evidence of what’s in the opaque box. You should choose just the opaque box: the predictor would anticipate this thought process, predict that you’ll pick just the opaque box, and put a million quid in it. If you try to be sneaky, reasoning that the predictor will predict you’ll pick just the opaque box while you actually take both, the predictor will anticipate that too and leave the opaque box empty.
In other words, if it’s overwhelmingly likely that the predictor will predict correctly, then if you choose just the opaque box, it’s overwhelmingly likely the predictor will have predicted this, so it’s overwhelmingly likely you’ll get the million. If you choose both boxes, it’s overwhelmingly likely the predictor will have predicted this and left the opaque box empty, so it’s overwhelmingly likely you’ll just get the thousand pounds.
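A minimal sketch of the evidential calculation; the 0.99 accuracy figure is an assumption for illustration, since the setup only says the predictor is very good:

```python
# Evidential expected values: treat your own choice as evidence of the prediction.
ACCURACY = 0.99          # assumed predictor accuracy, purely for illustration
TRANSPARENT = 1_000
MILLION = 1_000_000

# If you take only the opaque box, the predictor very likely foresaw that and filled it.
ev_one_box = ACCURACY * MILLION

# If you take both boxes, the predictor very likely foresaw that and left it empty.
ev_both_boxes = TRANSPARENT + (1 - ACCURACY) * MILLION

print(f"EV(one box)    = £{ev_one_box:,.0f}")     # £990,000
print(f"EV(both boxes) = £{ev_both_boxes:,.0f}")  # £11,000
```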
Another example: smoking lesions
In this example, the causal decision theorist’s intuition is much more obvious. Imagine that the presence of a smoking lesion causes two things: cancer and the disposition to smoke. (In this world, smoking doesn’t cause cancer, and smoking is pleasant.) The question is: in this world, should I smoke? Wanting to smoke is evidence that I have the lesion, but smoking itself doesn’t cause cancer, so I should smoke (if I enjoy smoking).
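A minimal sketch of how the two theories treat the lesion case; all the probabilities below are invented purely for illustration and aren’t part of the original example:

```python
# Smoking lesion: the lesion causes both cancer and the desire to smoke;
# smoking itself does not cause cancer. All numbers below are invented for illustration.
P_LESION = 0.1
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_GIVEN_NO_LESION = 0.01
P_SMOKE_GIVEN_LESION = 0.8
P_SMOKE_GIVEN_NO_LESION = 0.2

# Evidential reading: observing that I smoke is evidence that I have the lesion,
# which raises my probability of cancer.
p_smoke = P_SMOKE_GIVEN_LESION * P_LESION + P_SMOKE_GIVEN_NO_LESION * (1 - P_LESION)
p_lesion_given_smoke = P_SMOKE_GIVEN_LESION * P_LESION / p_smoke
p_cancer_given_smoke = (p_lesion_given_smoke * P_CANCER_GIVEN_LESION
                        + (1 - p_lesion_given_smoke) * P_CANCER_GIVEN_NO_LESION)

# Causal reading: deciding to smoke doesn't change whether I have the lesion,
# so my probability of cancer stays at the prior.
p_cancer_prior = P_LESION * P_CANCER_GIVEN_LESION + (1 - P_LESION) * P_CANCER_GIVEN_NO_LESION

print(f"P(cancer) after observing that I smoke: {p_cancer_given_smoke:.3f}")  # goes up
print(f"P(cancer) if I simply decide to smoke:  {p_cancer_prior:.3f}")        # unchanged
```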
My intuition is evidential in the first case but causal in the second, so if anyone can explain the difference between the cases, that would be great. Thanks!
There are a lot more than two decision theories. Most are designed to do at least as well as both causal and evidential decision theory in Newcomb-like problems, and in even more exotic setups.
The basic idea in all of them is that, instead of choosing the best decision at any particular decision point, they choose the best decision-making algorithm across possible world states.
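A minimal sketch of that idea applied to Newcomb’s problem, again with an assumed predictor accuracy and the assumption that the predictor effectively predicts whichever policy you are running:

```python
# Evaluate whole policies rather than point decisions: assume the predictor
# predicts whichever policy you are running, with an illustrative accuracy.
ACCURACY = 0.99
TRANSPARENT = 1_000
MILLION = 1_000_000

def expected_payoff(policy: str) -> float:
    """policy is 'one box' or 'both boxes'."""
    p_full = ACCURACY if policy == "one box" else 1 - ACCURACY  # P(opaque box is full)
    base = p_full * MILLION
    return base if policy == "one box" else base + TRANSPARENT

policies = ("one box", "both boxes")
best = max(policies, key=expected_payoff)
print({p: expected_payoff(p) for p in policies}, "-> run the", best, "policy")
```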
I think this ‘paradox’ is chronically misunderstood. Many people claim that the player can choose whether or not to take the transparent box after the predictor makes his prediction, but this is not how humans actually seem to make decisions and it directly contradicts the setup of the question—so I claim that your ‘causal’ solution is just wrong.
In order for the predictor to be able to make accurate predictions, players’ decisions must be deducible at the time the prediction is made. Depending on your mental model of free will (or the lack thereof), this might seem completely plausible or utterly absurd. But for the paradox’s setup to make sense, the player must have, in some sense, made his decision before the prediction is made: he is either someone who is going to take both boxes or someone who is just going to take the opaque box.
Simply put, if you’re going to reason about this using causality, then you have to explain the causality of the predictor’s predictions. And once you explain this, it becomes clear that the causal approach agrees with the evidential approach: you should take only the opaque box. It will feel like you’re making the decision while looking at the boxes, but you actually made the decision long before (if at all).
‘But for the paradox’s setup to make sense, the player must have, in some sense, made his decision before the prediction is made.’ No, there just has to be something that occurs earlier which guarantees what decision the player makes during the game. But if determinism is true, that thing could be the first event in the universe’s history, which would definitely not be a decision of the player. I think maybe you’re thinking that, if that’s the case, the setup doesn’t make sense, because the player can’t choose otherwise and therefore their decision can’t be evaluated as rational or anything else. But it’s a very substantive philosophical assumption that if your decisions are guaranteed by the past before the decision, they can’t be evaluated for rationality (or morality or whatever) at the time they occur. Roughly, that amounts to rejecting compatibilism about free will, which is the standard philosophical view*.
*https://survey2020.philpeople.org/survey/results/4838 Roughly 57-59% of English-speaking philosophers endorse it.
I don’t know what point you’re trying to make, because your response was rambling, poorly formatted and incoherent.
Are you just agreeing with me that the ‘paradox’ is solved and also nitpicking by claiming that it’s possible that humans don’t make decisions at all? If not, then I think you’re very confused.
Basically, if your final decision was knowable to the predictor before he made his prediction, then it doesn’t make sense, after his prediction is locked in, to say, “The predictor has already made his prediction, so the decision I make now can’t affect his prediction.” The predictor knew what your final decision was going to be.
I’m not making any bold claims about free will; I’m just pointing out that the ‘causal’ arguments for taking both boxes are contradicting the setup of the question.
I don’t understand what point your making either. Probably this won’t be productive to continue.
So you can’t read or type properly; I recommend avoiding online forums.
This comment is not civil or productive (and neither are other comments Manbearpanda has posted in this thread), and it clearly violates Forum discussion norms that we expect everyone to respect. This is not a good way to have disagreements.
I’m bringing this up to the moderation team to decide if we want to take further action.
lol! I haven’t said anything inappropriate or incorrect.
You seem to value arrogant people’s fragile egos over correct solutions to problems, which is obviously a terrible idea.
We’re banning Manbearpanda for 1 month. Their recent responses on this post have been condescending, overconfident, and uncivil. Responding to a comment by referring to the discussion as “‘Effective’ ‘Altruists’ circle jerking around yet another wrong answer” is not a good way to get to the truth behind a disagreement. I also don’t think it was reasonable or helpful to dismiss a comment in the discussion as “rambling, poorly formatted and incoherent.” And the last comment seemed to totally devolve into an ad hominem and passive-aggressive attack.
Please note that bans affect the person behind the account, not the account itself.
If Manbearpanda comes back, we’ll hold them to a high standard, according to our norms.
This doesn’t seem correct. It’s possible to make a better than random guess about what a person will decide in the future, even if the person has not yet made their decision.
This is not mysterious in ordinary contexts. I can make a plan to meet a friend and justifiably have very high confidence that they’ll show up at the agreed time. But that doesn’t rule out that they might in fact choose to cancel at the last minute.
You haven’t understood. Your analogy fails because your friend isn’t incentivised to select against you and try to make you guess incorrectly.
Obviously, from the predictor’s perspective, there can be some explicable variance and some inexplicable variance, and it’s plausible to claim that some of the inexplicable variance comes from decisions that have not yet been made. But the question states that the predictor has an exceedingly good track record, so the vast, vast majority of the variance can be explained.
You can claim that the predictor thinks you’re 99.98% likely to take both boxes while you know you’re actually only 99.96% likely to. But that doesn’t help you make non-negligible money in the game, and you’re just missing the point of the ‘paradox’.
What I said was correct. It holds up in the stochastic case where the predictor is nearly certain of your decision, though it’s simpler to think about the deterministic case where the predictor is certain.
I’m disappointed by ‘Effective’ ‘Altruists’ circle jerking around yet another wrong answer. :’)