Here’s a rough sketch of how we could, potentially, think about anthropic problems. Let Pt be a sequence of true, bird’s-eye-view probability measures and Qt your own measures, trying to mimic Pt as closely as possible. These measures aren’t defined on the same sigma-algebra. The sequence of true measures is defined on some original sigma-algebra Σ, but your measure is defined only on the trace sigma-algebra {A ∩ Bt : A ∈ Σ}, where Bt = {ω : the sky is blue at time t}.
Now, the best-known probability measure defined on this sigma-algebra is the conditional probability
Qt(A) = Pt(A ∣ the sky is blue at time t).
This is, in a sense, the probability measure that most closely mimics Pt. On the other hand, the measure that mimics Pt most closely overall is simply the restriction Qt(A) = Pt(A), hands down. This measure has a problem, though: maxA Qt(A) = Pt(the sky is blue at time t) < 1, so it is no longer a probability measure, only a sub-probability measure.
I think the main reason I intuitively want to condition on the color of the sky is that I want to work with proper probability measures, not just measures bounded by 0 and 1. (That’s why I’m talking about, e.g., being “uncomfortable pretending we could have observed non-existence”.) But your end goal is to have the best measure on the data you can actually observe, while taking into account possibilities you can’t observe. This naturally leads us to Qt(A) = Pt(A) instead of Qt(A) = Pt(A ∣ the sky is blue at time t).
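The contrast above can be made concrete on a toy finite sample space (the outcome names and probabilities below are my own illustrative choices, not anything from the discussion): the restriction of Pt to the observable events tracks Pt exactly but has total mass Pt(sky is blue) < 1, while conditioning renormalizes it back to a proper probability measure.

```python
from fractions import Fraction

# Toy "true" measure P over three outcomes; the flag records whether
# the sky is blue in that outcome (i.e., whether we exist to observe it).
P = {
    "omega1": Fraction(1, 2),   # sky blue
    "omega2": Fraction(1, 4),   # sky blue
    "omega3": Fraction(1, 4),   # sky not blue -- unobservable
}
blue = {"omega1", "omega2"}     # the event B_t: "the sky is blue at time t"

def restricted(A):
    """Q(A) = P(A): the restriction of P to observable events (unnormalized)."""
    return sum(P[w] for w in A if w in blue)

def conditional(A):
    """Q(A) = P(A | sky is blue): the restriction renormalized to mass 1."""
    return restricted(A) / restricted(blue)

# The restriction's maximum value is P(blue) < 1, so it is only a
# sub-probability measure; conditioning restores total mass 1.
print(restricted(blue))    # 3/4
print(conditional(blue))   # 1
```

The restriction agrees with Pt on every observable event, which is why it "mimics Pt most closely"; the price is that its total mass is 3/4 rather than 1.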
There is another, independent aspect to anthropic reasoning too: how you assign probabilities to ‘indexical’ facts. This is the part of anthropic reasoning I always thought was more contentious. For example, if two people are created, one with red hair and one with blue hair, and you are one of these people, what is the probability that you have red hair (before you look in the mirror)? We are supposed to use the ‘Self-Sampling Assumption’ here and say the answer is 1⁄2, but if you naively apply that rule too widely you can end up with conclusions like the Doomsday Argument or the Adam-and-Eve paradox.
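For the two-person case, the Self-Sampling Assumption amounts to treating "you" as a uniform random draw from the observers who actually exist. A minimal Monte Carlo sketch of that rule (the function name and parameters are my own):

```python
import random

def ssa_probability_red(trials=100_000, seed=0):
    """Estimate P(you have red hair) under the Self-Sampling Assumption:
    'you' are a uniform random sample from the two people who were created."""
    rng = random.Random(seed)
    red_count = 0
    for _ in range(trials):
        you = rng.choice(["red", "blue"])  # uniform over the two observers
        if you == "red":
            red_count += 1
    return red_count / trials

p = ssa_probability_red()
# By SSA the estimate should be close to 1/2.
```

The paradoxes arise not from this two-person case but from applying the same uniform-sampling rule across reference classes of very different sizes, which is exactly the step the Doomsday Argument leans on.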
I think that a complete account of anthropic reasoning would need to cover this as well, but I think what you’ve outlined is a good summary of how we should treat cases where we are only able to observe certain outcomes because we do not exist in others.
I think that makes sense!