I no longer endorse this, see reply below:
I don't think this does away with the problem, because for decision-making purposes it is still relevant whether a random event is extinction-causing or not (thinking of the Supervolcano vs Martians case in the paper). I didn't see this addressed in the paper. Here's a scenario that hopefully illustrates the issue:
A game is set up where a ball will be drawn from a jar. If it comes out red then "extinction" occurs and the player loses immediately. If it comes out green then "survival" occurs and the player continues to the next round. This is repeated (with the ball replaced every time) for an unknown number of rounds, with the player unable to do anything.
Eventually, the game master decides to stop (for their own unknowable reasons), and offers the player two options:
1. Play one more round of drawing a ball from the jar, risking extinction if it comes out red
2. Take a fixed 10% chance of extinction
If they get through this round then they win the game.
The game is played in two formats:
Jack is offered the game as described above, where he can lose before getting to the decision point
Jill is offered a game where the rounds before the decision point don't count: she can observe the colour of the ball but doesn't risk extinction. Only on the final round does she risk extinction (a minimal code sketch of the two formats follows below).
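To be explicit about the mechanics, here is a minimal sketch of how the two formats differ before the decision point (my own illustrative code; the 20% probability and 20 rounds are placeholders, not values from the game description):

```python
import random

def jacks_pre_decision_rounds(p_red, n_rounds):
    """Jack's format: every pre-decision round carries the extinction risk."""
    for _ in range(n_rounds):
        if random.random() < p_red:
            return None        # drew red: extinction, no decision is ever made
    return 0                   # reached the decision point having seen 0 reds

def jills_pre_decision_rounds(p_red, n_rounds):
    """Jill's format: pre-decision rounds are observation only."""
    return sum(random.random() < p_red for _ in range(n_rounds))

print(jacks_pre_decision_rounds(0.2, 20))   # usually None; 0 whenever Jack survives
print(jills_pre_decision_rounds(0.2, 20))   # typically around 4 reds out of 20
```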
Let's say they both start with a prior that P(red) is 15%, and that the actual P(red) is 20%. Should they adopt different strategies?
The answer is yes:
For Jack, he will only reach the decision point if he observes 0 red balls. Assuming a large number of rounds are played, if he naively applies Bayesian reasoning he will conclude that P(red) is very close to 0 and choose option 1 (another round of drawing a ball). This is clearly irrational, because it will always result in option 1 being chosen regardless of the true probability and of his prior[1]. A better strategy is to stick with his prior if it is at all informative.
For Jill, she will reach the decision point regardless of whether she sees a red ball. Assuming a large number of practice rounds are played, in almost all worlds naive Bayesian reasoning will tell her that P(red) is close to 20%, and she should pick option 2. In this case the decision is sensitive to the true probability, and she only loses out in the small proportion of worlds where she observes an unusually low number of red balls, so the naive Bayesian strategy seems rational (both updates are sketched in code below).
The point is that the population of Jacks who get the opportunity to make the decision is selected to be only those who receive evidence implying a low probability, and this systematically biases the decision in a way that is predictable beforehand (such that knowing this selection effect exists can change your optimal decision).
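To make the "naive Bayesian reasoning" concrete, here is a sketch of the update each of them would make. The Beta(1.5, 8.5) prior (mean 0.15) and the 50 rounds are my own illustrative stand-ins for "a prior that P(red) is 15%" and "a large number of rounds":

```python
# Naive Bayesian update for Jack and Jill under a Beta(1.5, 8.5) prior
# (mean 0.15) after 50 pre-decision rounds. Both numbers are illustrative.
prior_a, prior_b = 1.5, 8.5
n_rounds = 50

# Jack only reaches the decision point having seen 0 reds in n_rounds draws.
jack_posterior_mean = (prior_a + 0) / (prior_a + prior_b + n_rounds)

# Jill typically sees about 20% reds in her practice rounds (true P(red) = 0.2).
jill_reds = round(0.2 * n_rounds)
jill_posterior_mean = (prior_a + jill_reds) / (prior_a + prior_b + n_rounds)

print(f"Jack's naive posterior mean for P(red): {jack_posterior_mean:.3f}")  # ~0.025
print(f"Jill's naive posterior mean for P(red): {jill_posterior_mean:.3f}")  # ~0.192
```

With these numbers Jack's naive posterior puts the risk of another draw well below the fixed 10%, while Jill's puts it well above, which is the divergence described above.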
I think this is essentially the same objection raised by quila below, and it is in the same vein as Jonas Moss's comment on Toby's post (I'm not 100% sure of this; I'm more confident that the above objection is basically right than that it's the same as these two others).
It's quite possible I'm missing something in the paper, since I didn't read it in that much detail and other people seem convinced by it. But I didn't see anything that would make a difference for this basic case of an error in decision-making being caused by the anthropic shadow (and in particular I didn't see how observing a larger number of rounds makes a difference).
A way to see that this is common-sense irrational is to suppose it's a coin flip instead of a ball being drawn: it's very hard to imagine how you could physically bias a coin to 99% heads, so you would have a very strong prior against that. In this case, if you saw 30 heads in a row (and you could see that it wasn't a two-headed coin), it would still seem stupid to take the risk of getting tails on the next round.
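To put rough numbers on the coin version (a sketch using a simplified two-hypothesis prior, with prior odds I have picked purely for illustration):

```python
# Naive update after seeing 30 heads in a row, comparing a "99%-heads coin"
# hypothesis against a fair coin. The two-hypothesis framing and the
# million-to-one prior odds are illustrative simplifications.
likelihood_ratio = 0.99**30 / 0.5**30        # roughly 8e8 in favour of the biased coin
prior_odds_against_bias = 1e6                # a "very strong" prior against a 99%-heads coin
posterior_odds_for_bias = likelihood_ratio / prior_odds_against_bias
p_biased = posterior_odds_for_bias / (1 + posterior_odds_for_bias)
print(f"Likelihood ratio favouring the biased coin: {likelihood_ratio:.2e}")
print(f"Posterior probability the coin is 99%-heads: {p_biased:.3f}")       # ~0.999
```

Even a million-to-one prior against the biased coin gets swamped by 30 heads under the naive update, which is exactly the conclusion that seems common-sense stupid to act on in Jack's position.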
Under typical decision theory, your decisions are a product of your beliefs and the utilities that you assign to different outcomes. In order to argue that Jack and Jill ought to be making different decisions here, it seems that you must either:
1. Dispute the paper's claim that Jack and Jill ought to assign the same probabilities in the above type of situation.
2. Argue that Jack and Jill ought to make their decisions differently despite having identical preferences about the next round and identical beliefs about the likelihood that a ball will turn out to be red.
Are you advancing one of these claims? If (1), I think you're directly disagreeing with the paper for reasons that don't just come down to how to approach decision-making. If (2), could you say more about why you propose Jack and Jill make different decisions despite having identical beliefs and preferences?
I thought about it more, and I am now convinced that the paper is right (at least in the specific example I proposed).
The thing I didn't get at first is that, given a certain prior over P(extinction) and a number of iterations survived, there are "more surviving worlds" in which the actual P(extinction) is low relative to your initial prior, and that this is exactly accounted for by the Bayes factor.
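One way to check this concretely (a sketch of my own, using a simple two-value prior rather than anything from the paper): simulate many worlds with P(extinction) drawn from the prior, keep only the surviving ones, and compare the make-up of the survivors with the Bayesian posterior given survival.

```python
import random

# Two-hypothesis prior over P(extinction); values chosen for illustration only.
hypotheses = {0.1: 0.5, 0.3: 0.5}   # P(extinction) value -> prior probability
n_rounds = 10
n_worlds = 200_000

# Simulate worlds and count survivors under each hypothesis.
survivors = {p: 0 for p in hypotheses}
for _ in range(n_worlds):
    p = random.choices(list(hypotheses), weights=list(hypotheses.values()))[0]
    if all(random.random() >= p for _ in range(n_rounds)):
        survivors[p] += 1

total_survivors = sum(survivors.values())
# Bayesian posterior given survival: prior * (1 - p)^n_rounds, normalised.
unnormalised = {p: prior * (1 - p) ** n_rounds for p, prior in hypotheses.items()}
norm = sum(unnormalised.values())
for p in hypotheses:
    print(f"P(extinction)={p}: survivor fraction {survivors[p] / total_survivors:.3f}, "
          f"Bayesian posterior {unnormalised[p] / norm:.3f}")
```

The over-representation of low-P(extinction) worlds among the survivors matches the Bayes factor exactly, rather than being an extra effect on top of it.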
I also wrote a script that simulates the example I proposed, and am convinced that the naive Bayes approach does in fact give the best strategy in Jack's case too (I haven't proved that there isn't a counterexample, but was convinced by fiddling with the parameters around the boundary of cases where always-option-1 dominates vs always-option-2).
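A sketch along those lines (not the original script; the Beta prior, the round count, the posterior-mean decision rule standing in for "naive Bayes", and the choice to draw the true P(red) fresh from the prior in each simulated world are all illustrative assumptions):

```python
import random

# Compare three strategies for Jack by Monte Carlo over many simulated worlds.
# Assumptions: true P(red) is drawn from a Beta(1.5, 8.5) prior (mean 0.15) in
# each world, there are 50 forced rounds before the decision point, and the
# "naive Bayes" rule draws again whenever its posterior mean for P(red) is
# below the fixed 10% alternative.
PRIOR_A, PRIOR_B = 1.5, 8.5
N_ROUNDS = 50
N_WORLDS = 100_000
FIXED_RISK = 0.10

def run_world(strategy):
    """Play one world of Jack's game; return True if Jack wins."""
    p_red = random.betavariate(PRIOR_A, PRIOR_B)
    for _ in range(N_ROUNDS):
        if random.random() < p_red:
            return False  # drew red: extinction before the decision point
    # By construction Jack has observed 0 reds if he reaches the decision point.
    posterior_mean = PRIOR_A / (PRIOR_A + PRIOR_B + N_ROUNDS)
    if strategy == "always_draw":
        risk = p_red
    elif strategy == "always_fixed":
        risk = FIXED_RISK
    else:  # "naive_bayes": draw again iff the posterior mean looks safer than 10%
        risk = p_red if posterior_mean < FIXED_RISK else FIXED_RISK
    return random.random() >= risk

for strategy in ("always_draw", "always_fixed", "naive_bayes"):
    wins = sum(run_world(strategy) for _ in range(N_WORLDS))
    print(f"{strategy}: overall win rate {wins / N_WORLDS:.4f}")
```

With these particular numbers the naive rule always chooses another draw, so it coincides with always-option-1; shifting the prior, the number of rounds, or the fixed 10% risk moves the boundary between the two pure strategies, which is the region worth fiddling with.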
Thanks, this has actually updated me a lot :)