Thank you for your answer!
I think I agree that there is a difference between the extinction example and the coin example, to do with observer selection bias, and it seems important. I'm still not sure how to articulate this difference properly, though, or why it should change the conclusion. It is true that you have perfect knowledge of Q, N, and the final state marker in the coin example, but you have the same kind of knowledge in the (idealized) extinction scenario that I described as well. In the extinction case I supposed that we knew Q, N, and the fact that we haven't yet gone extinct (which is the analogue of a blue marker).
The real difference, I suppose, is that in the extinction scenario we could never have seen the analogue of the red marker, because we would never have existed if that had been the outcome. But why does this change anything?
I think you're right that we could modify the coin example to make it closer to the extinction example, by introducing amnesia, or even just saying that you are killed if both coins ever land heads together. But to sum up why I started talking about a coin example with no observer selection effects present:
In the absence of a complete, consistent formalism for dealing with observer effects, the argument of the "anthropic shadow" paper still appears to carry some force when it says that the naive estimates of observers will be underestimates on average, and that therefore, as observers, we should revise our naive estimates upward by an appropriate amount. However, an argument with identical structure gives the wrong answer in the coin example, where everything is understood and we can clearly see what the right answer actually is. The naive estimates of people who see blue will be underestimates on average, but that does not mean, in this case, that if we see blue we should revise our naive estimates up: here the naive estimate is already the correct Bayesian one. This should cast doubt on arguments of this form, including the anthropic shadow argument, unless we can properly explain why they apply in one case but not the other, and that is what I am uncertain how to do.
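To make that structural point concrete, here is a quick simulation sketch in Python. The setup is only a stand-in with made-up details (an unknown per-round event probability, which I call P_TRUE, plus a second coin with known bias Q that turns the marker red if both ever come up together), not necessarily the exact example from my earlier comment. It shows two things: among runs that end blue, the naive frequency estimate really is biased low on average; and yet the blue marker contributes a likelihood factor that does not depend on the unknown parameter, so the Bayesian posterior is the same whether or not we condition on it, and no upward revision is warranted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the coin example (details made up for illustration): in each of
# N rounds an "event" happens with unknown probability P_TRUE (coin 1); on any
# round with an event, a second coin with known bias Q can turn the run red.
# Observers see the full record of coin 1 plus the final marker.
N = 50
P_TRUE = 0.2   # the parameter the observer is trying to estimate
Q = 0.3        # known probability that an event turns the marker red

# Part 1: among runs that end blue, the naive estimate (#events / N)
# underestimates P_TRUE on average.
n_sims = 200_000
events = rng.random((n_sims, N)) < P_TRUE
red_flips = rng.random((n_sims, N)) < Q
blue = ~(events & red_flips).any(axis=1)   # blue iff never both in one round
naive = events.sum(axis=1) / N
print("true P:", P_TRUE)
print("mean naive estimate over all runs:  ", naive.mean())
print("mean naive estimate given blue only:", naive[blue].mean())  # biased low

# Part 2: the blue marker adds nothing to the Bayesian posterior once the
# record is known, because P(blue | record) = (1 - Q)^k depends only on the
# event count k and the known Q, not on P.
k = int(events[blue][0].sum())              # event count from one blue run
p_grid = np.linspace(0.001, 0.999, 999)     # flat prior over P on a grid
like_record = p_grid**k * (1 - p_grid)**(N - k)
post_record = like_record / like_record.sum()
like_with_blue = like_record * (1 - Q)**k   # extra factor is constant in P
post_with_blue = like_with_blue / like_with_blue.sum()
print("posterior means identical:",
      np.allclose(post_record @ p_grid, post_with_blue @ p_grid))
```

In this sketch, the downward bias of the naive estimate among blue runs comes entirely from which runs get selected, not from any error in the inference each blue observer makes: P(blue | record) = (1 - Q)^k involves only the known Q and the event count already contained in the record, so it drops out of the posterior.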
Thank you for sharing the Nature paper. I will check it out!