I think this ‘paradox’ is chronically misunderstood. Many people claim that the player can choose whether or not to take the transparent box after the predictor makes his prediction, but this is not how humans actually seem to make decisions and it directly contradicts the setup of the question—so I claim that your ‘causal’ solution is just wrong.
In order for the predictor to be able to make accurate predictions, players’ decisions must be deducible at the time the prediction is made. Depending on your mental model of free will (or the lack thereof), this might seem completely plausible or utterly absurd. But for the paradox’s setup to make sense, the player must have, in some sense, made his decision before the prediction is made: he is either someone who is going to take both boxes or someone who is just going to take the opaque box.
Simply put, if you’re going to reason about this using causality, then you have to explain the causality of the predictor’s predictions. And once you explain this, it becomes clear that the causal approach agrees with the evidential approach: you should take only the opaque box. It will feel like you’re making the decision while looking at the boxes, but you actually made the decision long before (if at all).
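For concreteness, here is a minimal sketch of the expected-value comparison behind this recommendation, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 possibly in the opaque box) and collapsing the predictor's 'exceedingly good track record' into a single hypothetical accuracy parameter p:

```python
# Minimal expected-value sketch for Newcomb's problem.
# Assumptions: standard payoffs, and a single parameter p = probability
# that the prediction matches the player's actual choice.

OPAQUE, TRANSPARENT = 1_000_000, 1_000

def ev_one_box(p):
    # The opaque box is full iff the predictor foresaw one-boxing.
    return p * OPAQUE

def ev_two_box(p):
    # The opaque box is full only when the predictor got it wrong.
    return (1 - p) * OPAQUE + TRANSPARENT

p = 0.99  # a stand-in for "exceedingly good track record"
print(ev_one_box(p), ev_two_box(p))  # 990000.0 11000.0
```

On these assumptions, one-boxing dominates as soon as p exceeds roughly 0.5005, far below any accuracy the setup stipulates.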
‘But for the paradox’s setup to make sense, the player must have, in some sense, made his decision before the prediction is made’ No, there just has to be something that occurs earlier which guarantees what decision the player makes during the game. If determinism is true, that thing could be the first event in the universe’s history, which would certainly not be a decision of the player’s. I think maybe you’re thinking that, in that case, ‘the setup doesn’t make sense’ because the player can’t choose otherwise, and therefore their decision can’t be evaluated as rational or anything else. But it’s a very substantive philosophical assumption that if your decisions are guaranteed by the past, they can’t be evaluated for rationality (or morality, or anything else) at the time they occur. Roughly, that amounts to rejecting compatibilism about free will, which is the standard philosophical view*.
*https://survey2020.philpeople.org/survey/results/4838 Roughly 59% of English-speaking philosophers accept or lean towards it.
I don’t know what point you’re trying to make, because your response was rambling, poorly formatted and incoherent.
Are you just agreeing with me that the ‘paradox’ is solved and also nitpicking by claiming that it’s possible that humans don’t make decisions at all? If not, then I think you’re very confused.
Basically, if your final decision was knowable to the predictor before he made his prediction, then it doesn’t make sense, after his prediction is locked in, to say, “The predictor has already made his prediction, so the decision I make now can’t affect his prediction.” The predictor knew what your final decision was going to be.
I’m not making any bold claims about free will; I’m just pointing out that the ‘causal’ arguments for taking both boxes contradict the setup of the question.
I don’t understand what point you’re making either. Probably this won’t be productive to continue.
So you can’t read or type properly; I recommend avoiding online forums.
This comment is not civil or productive (and neither are others Manbearpanda has posted in this thread) and clearly violates Forum discussion norms that we expect everyone to respect. This is not a good way to have disagreements.
I’m bringing this up to the moderation team to decide if we want to take further action.
lol! I haven’t said anything inappropriate or incorrect.
You seem to value arrogant people’s fragile egos over correct solutions to problems, which is obviously a terrible idea.
We’re banning Manbearpanda for 1 month. Their recent responses on this post have been condescending, overconfident, and uncivil. Responding to a comment by referring to the discussion as “‘Effective’ ‘Altruists’ circle jerking around yet another wrong answer” is not a good way to get to the truth behind a disagreement. I also don’t think it was reasonable or helpful to dismiss a comment in the discussion as “rambling, poorly formatted and incoherent.” And the last comment seemed to totally devolve into an ad hominem and passive-aggressive attack.
Please note that bans affect the person behind the account, not the account itself.
If Manbearpanda comes back, we’ll hold them to a high standard, according to our norms.
‘But for the paradox’s setup to make sense, the player must have, in some sense, made his decision before the prediction is made’

This doesn’t seem correct. It’s possible to make a better-than-random guess about what a person will decide in the future, even if the person has not yet made their decision.
This is not mysterious in ordinary contexts. I can make a plan to meet with a friend and justifiably have very high confidence that they’ll show up at the agreed time. But that doesn’t preclude that they might in fact choose to cancel at the last minute.
You haven’t understood. Your analogy fails because your friend isn’t incentivised to select against you and try to make you guess incorrectly.
Obviously, from the predictor’s perspective, there can be some explicable variance and some inexplicable variance, and it’s plausible to claim that some of the inexplicable variance comes from decisions that have not yet been made. But the question states that the predictor has an exceedingly good track record, so the vast, vast majority of the variance can be explained.
You can claim that the predictor thinks you’re 99.98% likely to take both boxes, but you know that you’re actually only 99.96% likely to. But that doesn’t help you make non-negligible money in the game, and you’re just missing the point of the ‘paradox’.
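A back-of-the-envelope check of this, using the hypothetical 99.98%/99.96% numbers above and assuming the predictor has (acting on his prediction of two-boxing) left the opaque box empty, so only the transparent box's $1,000 is at stake:

```python
# How much is the 0.02% disagreement between the predictor's credence
# and your own actually worth? Assumes the opaque box is already empty,
# so one-boxing here wins $0 and two-boxing wins the transparent $1,000.

P_TRANSPARENT = 1_000  # standard transparent-box payoff

def expected_take(p_two_box):
    """Expected winnings given an empty opaque box."""
    return p_two_box * P_TRANSPARENT

gap = expected_take(0.9998) - expected_take(0.9996)
print(f"${gap:.2f}")  # the disagreement is worth about $0.20
```

Twenty cents, against the $1,000,000 that rides on which kind of player you are.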
What I said was correct. It holds up in the stochastic case where the predictor is nearly certain of your decision, though it’s simpler to think about the deterministic case where the predictor is certain.
I’m disappointed by ‘Effective’ ‘Altruists’ circle jerking around yet another wrong answer. :’)