Thank you for your comment! I agree with you that the difference between the bird's-eye view and the worm's-eye view is very important, and certainly has the potential to explain why the extinction case is not the same as the blue/green sky case. It is this distinction that I was referring to in the post when asking whether the "anthropicness" of the extinction case could explain why the two arguments should be treated differently.
But I'm not sure I agree that you are handling the worm's-eye case in the correct way. I could be wrong, but I think the explanation you have outlined in your comment is effectively equivalent to my "Possible Solution #1" in the post. That is, because it is impossible to observe non-existence, we should treat existence as a certainty, and condition on it.
My problem with this solution is the one I explained in that section of the post. I think the strongest objection comes from considering the anthropic explanation of fine tuning. Do you agree with the following statement?
"The fine tuning of the cosmological constants for the existence of life is (Bayesian) evidence of a multiverse."
My impression is that this statement is generally accepted by people who engage in anthropic reasoning, but you can't explain it if you treat existence as a certainty. If existence is never surprising, then the fine tuning of cosmological constants for life cannot be evidence for anything.
There is also the Russian roulette thought experiment, which I think drives home that you should be able to consider the unlikeliness of your existence and make inferences based on it.
I wouldn't say you treat existence as certainty, as you could certainly be dead, but you have to condition on it when you're alive. You have to condition on it since you will never find yourself outside the space of existence (or blue skies!) in anthropic problems. And that's the purpose/meaning of conditioning: you restrict your probability space to the subset of basic events you can possibly see.
Then again, there might be nothing very special about existence here. Let's revisit the green sky problem from a slightly different point of view. Instead of living in the world with a blue or a green sky, imagine yourself living outside of that whole universe. I promise to give you a sample of a world, with registered catastrophes and all, but I will not show you a world with a green sky (i.e., I will sample worlds until the sky turns out blue). In this case, the math is clear: you should condition on the sky being blue. Is there a relevant difference between the existence scenario and this scenario?
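To make this concrete, here is a minimal sketch of the sampler I have in mind (a toy model I'm assuming purely for illustration: a two-point prior over each world's catastrophe rate, with a catastrophe turning the sky green). It shows that rejection-sampling until the sky is blue gives exactly the Bayes-conditional distribution:

```python
import random

# Toy model (assumed for illustration): each world has an unknown
# catastrophe rate of 0.1 or 0.9 with equal prior probability, and a
# catastrophe turns the sky green. The outside sampler redraws worlds
# until one has a blue sky, then shows it to you.
def sample_shown_world(rng):
    while True:
        rate = rng.choice([0.1, 0.9])
        catastrophe = rng.random() < rate
        if not catastrophe:  # sky stayed blue: show this world
            return rate
        # sky turned green: reject it and draw a fresh world

rng = random.Random(0)
samples = [sample_shown_world(rng) for _ in range(100_000)]
empirical = sum(r == 0.1 for r in samples) / len(samples)

# Bayes: P(rate = 0.1 | blue sky) = 0.5 * 0.9 / (0.5 * 0.9 + 0.5 * 0.1) = 0.9
print(f"P(rate = 0.1 | shown a world) = {empirical:.3f}")  # ~0.900
```

The shown worlds over-represent the low catastrophe rate exactly as conditioning on a blue sky says they should.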
Maybe there is? You are not guaranteed to see a world at all in the "existence" scenario, as you will not exist if the world turns out to be a green-sky world, but you are guaranteed an observation in the "outside view" scenario. Does this matter, though? I don't think it does, as you can't do an analysis either way if you're dead, but I might be wrong. Maybe this is where our disagreement lies?
I don't find the objection of the Russian roulette persuasive at all. Intuition shouldn't be trusted in probability, as e.g. the Monty Hall problem tells us, and least of all in confusing anthropic problems. We should focus on getting the definitions, concepts, and math right without stopping to think about how intuitive different solutions are. (By the way, I don't even find the Russian roulette experiment weird or counterintuitive. I find it intuitive and obvious. Strange? Maybe not. Philosophical intuitions aren't as widely shared as one would believe.)
> "The fine tuning of the cosmological constants for the existence of life is (Bayesian) evidence of a multiverse."
> My impression is that this statement is generally accepted by people who engage in anthropic reasoning, but you can't explain it if you treat existence as a certainty. If existence is never surprising, then the fine tuning of cosmological constants for life cannot be evidence for anything.
I don't know if that's true, though it might be. I suppose the problem about fine-tuning could be sufficiently different from this one to warrant its own analysis.
I think that's a good summary of where our disagreement lies. I think that your "sample worlds until the sky turns out blue" methodology for generating a sample is very different to the existence/non-existence case, especially if there is actually only one world! If there are many worlds, it's more similar, and this is why I think anthropic shadow has more of a chance of working in that case (that was my "Possible Solution #2").
I find it very interesting that your intuition on the Russian roulette is the other way round to mine. So if there are two guns, one with 1/1000 probability of firing, and one with 999/1000 probability of firing, and you pick one at random and it doesn't fire, you think that you have no information about which gun you picked? Because you'd be dead otherwise?
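For what it's worth, the update I have in mind is just Bayes' rule. Here is a sketch, assuming an equal prior over the two guns:

```python
# Posterior probability that you picked the safer gun, given that your
# first pull didn't fire. Assumes an equal (1/2) prior over the two guns.
prior = 0.5
survive_safe, survive_risky = 999/1000, 1/1000  # chance each gun doesn't fire

posterior_safe = (prior * survive_safe) / (
    prior * survive_safe + prior * survive_risky
)
print(f"{posterior_safe:.3f}")  # 0.999
```

Surviving a single pull should leave you 99.9% confident that you are holding the safer gun.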
I agree that we don't get very far by just stating our different intuitions, so let me try to convince you of my point of view in a different way:
Suppose that you really do have no information after firing a gun once and surviving. Then, if told to play the game again, you should be indifferent between sticking with the same gun or switching to the other gun. Let's say you settle on the switching strategy (maybe I offer you some trivial incentive to do so). I, on the other hand, would strongly favour sticking with the same gun. This is because, having survived once, I think I have extremely strong evidence that the gun I picked is the less risky one.
Now let's take a bird's-eye view, and imagine an outside observer watching the game, betting on which one of us is more likely to survive through two rounds. Obviously they would favour me over you. My odds of survival are approximately 50% (it more or less just depends on whether I pick the safe gun first or not). Your odds of survival are approximately 1 in 1000 (you are guaranteed to have one shot with the dangerous gun).
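As a quick sanity check of those numbers, here is a sketch under the same assumed setup (random initial pick, two pulls, firing probabilities 1/1000 and 999/1000):

```python
# Two-round survival probability for the "stick" and "switch" strategies,
# averaged over which gun you happen to pick first.
p_fire = {"safe": 1/1000, "risky": 999/1000}

def survive_both(first, second):
    return (1 - p_fire[first]) * (1 - p_fire[second])

stick = 0.5 * (survive_both("safe", "safe") + survive_both("risky", "risky"))
switch = 0.5 * (survive_both("safe", "risky") + survive_both("risky", "safe"))
print(f"stick:  {stick:.4f}")   # ~0.4990, roughly 50%
print(f"switch: {switch:.4f}")  # ~0.0010, roughly 1 in 1000
```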
This doesn't prove that your approach to formulating probabilities is wrong, but if ultimately we are interested in using probabilities to inform our decisions, I think this suggests that my approach is better.
On the fine tuning: if it is different, I would like to understand why. I'd love to know what general procedure we're supposed to use to analyse anthropic problems. At the moment I struggle to see how it could both include the anthropic shadow effect and also have the fine tuning of cosmological constants be taken as evidence for a multiverse.
Here's a rough sketch of how we could, potentially, think about anthropic problems. Let Pt be a sequence of true, bird's-eye view probability measures and Qt your own measures, trying to mimic Pt as closely as possible. These measures aren't defined on the same sigma-algebra. The sequence of true measures is defined on some original sigma-algebra Σ, but your measure is defined only on the trace sigma-algebra {A ∩ {ω : the sky is blue at time t} : A ∈ Σ}.
Now, the best-known probability measure defined on this set is the conditional probability Qt(A) = Pt(A ∣ the sky is blue at time t).
This is, in a sense, the probability measure that most closely mimics Pt. On the other hand, the measure that mimics Pt most closely, hands down, is the plain restriction Qt(A) = Pt(A). This measure has a problem, though: max Qt(A) < 1, hence it isn't a probability measure anymore.
I think the main reason why I intuitively want to condition on the color of the sky is that I want to work with proper probability measures, not just measures bounded by 0 and 1. (That's why I'm talking about, e.g., being "uncomfortable pretending we could have observed non-existence".) But your end goal is to have the best measure on the data you can actually observe, taking into account possibilities you can't observe. This naturally leads us to Qt(A) = Pt(A) instead of Qt(A) = Pt(A ∣ the sky is blue at time t).
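Spelling out the two candidates in my notation (writing Bt for the event that the sky is blue at time t; this is just a restatement of the above):

```latex
% Two candidate measures on the trace sigma-algebra \{ A \cap B_t : A \in \Sigma \}.
% The conditional measure: a genuine probability measure, with total mass 1.
Q_t(A \cap B_t) = \frac{P_t(A \cap B_t)}{P_t(B_t)}
% The plain restriction: mimics P_t exactly, but its total mass is P_t(B_t) < 1.
\tilde{Q}_t(A \cap B_t) = P_t(A \cap B_t)
```

The whole question is which of these two we should carry into the analysis.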
I think that makes sense!
There is another, independent aspect to anthropic reasoning too, which is how you assign probabilities to "indexical" facts. This is the part of anthropic reasoning I always thought was more contentious. For example, if two people are created, one with red hair and one with blue hair, and you are one of these people, what is the probability that you have red hair (before you look in the mirror)? We are supposed to use the "Self-Sampling Assumption" here, and say the answer is 1/2, but if you naively apply that rule too widely then you can end up with conclusions like the Doomsday Argument or the Adam and Eve paradox.
I think that a complete account of anthropic reasoning would need to cover this as well, but I think what you've outlined is a good summary of how we should treat cases where we are only able to observe certain outcomes because we do not exist in others.
The roulette example might get to the heart of the problem with the worm's-eye view! From the worm's-eye view, the sky will always be blue, so P(sky color = green) = 0, making it impossible to deal with problems where the sky might turn green in the future.
In the roulette example, we're effectively dealing with an expected utility problem where we condition on existence when learning about the probability, but not when we act. That looks incoherent to me; we can't condition and uncondition on an event willy-nilly: either we live in a world where an event must be true, or we don't. So yeah, it seems like you're right, and we're effectively treating existence as a certainty when looking at the problem from the worm's-eye view.
As I see it, this strongly suggests we should take the bird's-eye view, as you proposed, and not the worm's-eye view. Or something else entirely; I'm still uncomfortable pretending we could have observed non-existence.