Newcomb’s problem, honesty, evidence, and hidden agendas
Thought experiments are usually intended to stimulate thinking, rather than be true to life. Newcomb’s problem seems important to me in that it leads to a certain response to a certain kind of manipulation, if it is taken too literally. But let’s assume we’re all too mature for that.
In Newcomb’s problem, a person is given a context and a suggestion: that their behavior has been predicted beforehand, and that the person with that predictive knowledge is telling them about it. There are hypothetical situations in which that knowledge is correct, but Newcomb’s problem doesn’t appear to be one of them.
But to address the particulars I will focus on testing the scientist’s honesty and accuracy. Let’s recap quickly:
the scientist claims to make a prediction, and that the prediction determines which of two behavioral options you will take: you take both boxes from the scientist, or you take the opaque one only.
the scientist claims to decide whether to put $1,000,000 in an opaque box before interacting with a person (you) who enters the scientist’s tent, using a brain scan machine posted at the tent entrance. The brain scan machine gives the scientist a signal about what you’re likely to do, and the scientist either puts a million in the opaque box or not. In addition, there’s a clear box in the tent containing $1000.
you can’t see what’s in the opaque box the whole time you’re in the tent. You can see the $1000 the entire time.
if the scientist believes what they claim, then the scientist thinks that interaction with you will have no effect on what you do once you walk in the tent. It was decided when you walked through the door. In other words, in the scientist’s mind, no matter what the scientist or you would otherwise do, only one of two outcomes will occur: you will take both boxes or just the opaque box.
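For concreteness, the payoff structure the scientist describes can be sketched as a small expected-value calculation. This is only a sketch under the assumption (which the rest of this post goes on to doubt) that the predictor is right with some probability `p`; the accuracy numbers below are illustrative, not part of the thought experiment.

```python
# Expected payoffs in the standard Newcomb setup, assuming the scientist's
# predictor is right with probability p (an assumption for illustration).

MILLION = 1_000_000   # contents of the opaque box, if filled
THOUSAND = 1_000      # contents of the clear box

def expected_payoff(choice: str, p: float) -> float:
    """Expected dollars for 'one-box' or 'two-box' given predictor accuracy p."""
    if choice == "one-box":
        # Predictor right (prob p): opaque box was filled.
        # Predictor wrong (prob 1 - p): opaque box is empty, you get nothing.
        return p * MILLION
    if choice == "two-box":
        # Predictor right (prob p): opaque box is empty, you get only $1000.
        # Predictor wrong (prob 1 - p): opaque box was filled, you get both.
        return p * THOUSAND + (1 - p) * (MILLION + THOUSAND)
    raise ValueError(f"unknown choice: {choice!r}")

# With a highly accurate predictor, one-boxing dominates in expectation;
# with a coin-flip predictor, two-boxing does.
print(expected_payoff("one-box", 0.99))   # ≈ 990,000
print(expected_payoff("two-box", 0.99))   # ≈ 11,000
print(expected_payoff("two-box", 0.50))   # ≈ 501,000
```

This is the standard tension the thought experiment is built on: the one-box column only looks good if you take the claimed accuracy `p` at face value, which is exactly what the rest of this post refuses to do.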
So here’s what I think. There are far more situations in life where someone tells you a limited set of your options from a larger set than there are situations in which someone tells you your full set of options. The scientist claimed only two outcomes would occur (put differently, that you would do one of two things). The scientist supposedly has this brain scan technology that tells them what your two options are, and the scientist is confident that the technology works. Your willingness to believe the scientist at all depends on the scientist’s claims being believed in their entirety. That includes the scientist’s claims about the reliability of the machine. Once some claims are shown false, you have reason to question the rest. At that point, the thought experiment’s setup fails. Let’s test the scientist’s claims.
So, don’t take either box. Instead, walk out of the tent. If you make it out without taking any boxes, then you know that the scientist was wrong or lying about what you would do. You did not take any boxes. You just left both boxes on the table. Now, think this over. If the scientist was sincere, then there’s a mad scientist with $1,001,000 in the tent you just walked out of who either thought you would follow their instructions or thought that they had predicted you so well that they could just tell you what you would do. If the scientist was not sincere, then there’s a lying and manipulative scientist in the tent with $1,000 and an opaque mystery box that they’re hoping you’ll take from them.
BTW: If someone offers me free money, even $1,000, to take a mystery package from them, I decline.
But, you say, “I think it’s understood that you could walk out of the tent, or start a conversation, maybe even ask the scientist about the opaque box’s contents, or do other things instead.” However, if that’s so, why couldn’t you just take the $1000, say thanks, and leave rather than take the opaque box with you? What constrained your freedom of choice?
Was it the mad scientist? Did the mad scientist zipper the tent entrance behind you and booby-trap the boxes so you either take both boxes or just the opaque one? Is the scientist going to threaten you if you don’t take either box? If so, then you’ve got a mad scientist who’s not only interested in predicting what you do, but also interested in controlling what you do, by constraining it as much as they can. And that’s not the thought experiment at all. No, the thought experiment is about the scientist predicting you, not controlling you, right? And you’re an ethical person, because otherwise you would shake the scientist down for the million still in the tent, so we’ll ignore that option.
However, in case the thought experiment is about the scientist controlling you, well, I would leave the tent immediately and be grateful that the scientist didn’t choose to keep you there longer. That is, leave if you can. Basically, it seems that if you do anything too creative in response to the scientist, you could be in for a fight. I would go with trying to leave.
But let’s assume you don’t believe that the scientist is controlling you in any way; controlling you seems like a different thought experiment. Let’s just go with you walking out of the tent without any boxes.
Catch your breath, think over what happened, and don’t go back in the tent and try to interact with the scientist anymore. Remember, anyone willing to do that sort of thing to strangers like you is plausibly a desperate criminal wanting you to take a mysterious package from them. Or a distraught (and plausibly delusional) scientist who you just proved has a worthless brain scan machine that they wasted millions of dollars testing.
EDIT: ok, so in case it’s not obvious, you disproved that the scientist’s brain scanner works. It predicted two behavioral outcomes, and you chose a third from several, including:
trying to take the $1000 out of the clear box and leaving the opaque box behind
shaking down the scientist for the million presumably in the tent somewhere, if it’s not all in the two boxes
starting a conversation with the scientist, maybe to make a case that you really need a million dollars no matter what kind of decision-maker you are
leaving the tent asap
and plausibly others
By disproving that the brain scanner works reliably, you falsified a key claim of the scientist’s: “my brain scanner will predict whether you take both boxes or only one.” Other claims from the scientist, like “I always put a million in the opaque box if my brain scanner tells me to” and “So far, my brain scanner has always been right,” are now suspect. That means that the scientist’s behavior and the entire thought experiment can be seen differently, perhaps as a scam, or as evidence of a mad scientist’s delusional belief in a worthless machine.
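The step from “one claim falsified” to “the rest are suspect” can be sketched as a toy Bayes update. The probabilities here are illustrative assumptions of mine, not anything from the thought experiment:

```python
# Toy Bayes update: how much should observing something that the scientist's
# story says should (almost) never happen lower your credence that the
# scientist is honest and accurate? All numbers are illustrative.

def posterior_honest(prior: float, p_obs_if_honest: float, p_obs_if_not: float) -> float:
    """P(scientist honest | observation), by Bayes' rule."""
    num = p_obs_if_honest * prior
    return num / (num + p_obs_if_not * (1 - prior))

# Observation: you walked out with no boxes, contradicting the claimed
# two-outcome prediction. If the story were true, that's nearly impossible
# (say 1%); if the scientist is lying or deluded, it's unsurprising (say 50%).
p = posterior_honest(prior=0.5, p_obs_if_honest=0.01, p_obs_if_not=0.5)
print(p)  # ~0.0196: one falsified claim collapses trust in the rest
```

The exact numbers don’t matter; the point is that one clean falsification drags down your credence in every other claim the scientist made, including the claims about the machine’s track record.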
You could reply:
“What if the brain scanning machine only works for those situations where you take both boxes or only the opaque box and then just leave?”: Well, that would mean that loads of people could come in the tent, do all kinds of things, like ransack it, or take the clear box, or just leave the tent while taking nothing, and the machine gives the scientist a bogus signal for all of those cases. The machine has, then, been wrong, and frequently.
“What if the brain scanner gives no signal if you won’t do one of the two things that the scientist expects?”: Interesting, but then why is the scientist telling you their whole spiel (“here are two boxes, I scanned your brain when you came through the door, blah blah blah...”) after finding out that you won’t just take one of the two options that the scientist offers? After all, as a rational actor you can still do all the things you want to do after listening to the scientist’s spiel.
“Maybe the scientist changes their spiel, adds a caveat that you must follow their instructions in order for the predictions to work.” OK, then. Let’s come back to that.
“What if there are guards in the tent, and you’re warned that you must take either the opaque box or both boxes or the guards will fatally harm you?”: Well, once again, it’s clear that the scientist is interested in controlling and limiting your behavior after you enter the tent, which means that the brain scanner machine is far from reliable at predicting your behavior in general.
“Hah! But you will choose the opaque box or both boxes, under duress. This proves that some people are one-boxers and others are two-boxers. I got you!”: Well, some people would follow the scientist’s instructions, you’re right. Other people would have a panic attack, or ask the scientist which choice the scientist would prefer, or just run for their lives from the tent, or even offer the guards a chance to split the scientist’s money if the guards change sides. Pretty soon, that brain scanning machine is looking a lot less relevant to what the tent’s visitors do than the guards and the scientist are. From what I understand, attempting to give someone calm and reassuring instructions while also threatening their lives (“Look, just take the $1000 and the opaque box, everything will be fine”) doesn’t tend to work very well.
“Wait a minute. What if the scientist has a brain scanning device that predicts hundreds of different behaviors you could do by scanning you as you walk in the tent, and …”: Let me stop you there. If the scientist needs that kind of predictive power, and develops it, it’s to know _what to do_ when you walk in the tent, not just to know what you will do when you walk in the tent. And just because the scientist knows what you will do if you’re confronted with a situation doesn’t mean that the scientist has a useful response to what you will do. At this point, whose decision-making is really under the microscope, the tent’s visitor’s or the scientist’s?
“Let’s back this up. All we’re really thinking about is someone who willingly participates in the scientist’s game, trusts the scientist, and follows the scientist’s instructions. Aren’t you just distorting the experiment’s context?” If someone claims to be able to predict your behavior, and the only way for their predictions to ever seem accurate is for you to play along with the options they provide, then don’t you see that dishonesty is already present? You are the one being dishonest, or you both are. You’re playing along with the mad scientist, or the mad scientist isn’t mad at all, but has some ulterior motive for wanting you to take an opaque box with you, or otherwise participate in their bizarre game. The predictions aren’t really about what you would do if confronted with two boxes in such a situation. The predictions are make-believe that you play with someone with boxes in a tent, and only if you’re that kind of person. Not everyone is.
“No, you just said that the visitor to the tent is ‘playing along’. But the thought experiment is about someone who trusts the scientist, and playing along is not trusting the scientist.” Yes, exactly the kind of thing that I’ve been cautioning you about. Don’t be one of those people. There are people who trust you and select among the options you give them for whatever reason you offer, no matter how contrary to existing evidence (e.g., of their own free will) the option selection is. Their decision strategies do not include acting on good evidence or understanding causality very well. And such people would likely leave with just the opaque box, and, if the scientist is to be believed, will be rewarded for it with a million dollars. However, they fall for every magic trick, and do not gather evidence carefully.
“No, no, it’s not a magic trick. The thought experiment says that the scientist is really checking the brain scanning machine and putting the money in the opaque box, or not, according to what the machine says, and then making the same claims to every visitor about how the whole experiment works, and asking the visitors to participate according to the scientist’s simple instructions. All along you’ve been distorting this every which way. The machine could fail, but we know it succeeds. It succeeds with everybody, and the point of the thought experiment is just to think through what you ought to do in that situation, to get the most money, if you agree to the scientist’s terms. The only way to prove the scientist is wrong as a single visitor is to do everything right, leave with the opaque box only, but then find nothing inside. But we know that never happens.” I see. Yeah. OK! I think you’ve changed the experiment a little, though. Before, it was just: walk in, and get predicted. Now, it’s: walk in and choose to cooperate, and the scientist is telling the truth, and the brain scanning machine appears to work, and then get predicted. And you can’t just play along; a visitor has to believe the scientist, and for good reason, in order for people to draw any conclusions about what the experiment means.
“What? No, you don’t have to believe the scientist. You can play along, get some money, just choose one or two boxes. That’s what everyone should do, and the experiment shows it.” Some people would do that. We might as well flip a coin, or just pretend that we have reason to believe the scientist’s claim for causal reasons, and make up a causal reason. How about something like, “Hey, that million in the opaque box is like Schrödinger’s cat.” Maybe we make up a causal reason in hindsight, after we find that million in the opaque box and leave the clear box behind. However, “rational” people would only follow the instructions if they believed the evidence warranted it, and then those “rational” people would explore the reasons why. As far as I know, this thought experiment is supposed to show that evidential and causal decision theory can conflict, but in fact, I think it only shows that causal decisions can be revised based on new evidence. For example, brain scanner prediction, mind control, subtle influence by the scientist, money teleportation, time travel by someone observing you and taking the money back in time, or an unlikely string of random predictive successes by a totally useless brain scanner are all potential explanations of why the scientist’s machine would appear to work, if you decided to test whether it works by taking the opaque box.
“So what? Then the thought experiment only applies to people who follow instructions and trust the scientist and have good reason to trust the scientist’s claims, if you accept the idea that it’s supposed to distinguish evidential and causal decision theory. All your discussion of it managed to do was convince me that the thought experiment is well-designed, but also plausible. I think brain scanners like that, that work specifically in a context where you choose to follow instructions, are plausible. If they were built, then setting something like this up in real life would be easy.” Yeah, and expensive. Plenty of people would take the opaque box only. I think this makes me want to revise the definition of “plausible” a little bit, for myself. I would just leave the tent. Julia Galef also thinks that such devices as brain scanners are plausible, or she claimed that, in her old video. So you’re in good company.
And thanks!