I’m no expert in the field, but this problem really bothers me, too—so perhaps you should read my remarks as additional questions.
So the first part of my question is:
“Anthropic shadow” is an observation bias / selection effect concerning the data-generating process. I don’t see such a bias in your red/blue example, where (correct me if I’m wrong) you have perfect info on Q and N as well as on the final state of the marker. For this to be analogous to anthropic bias regarding x-risks, you would need to add a new feature—like someone erasing your memory and records with probability P* whenever Coin#1 lands heads.
(My “personal” toy model of anthropic shadow problems is someone trying to estimate the probability of heads for the next coin toss, after a sequence TTTT…, knowing that, whenever the coin lands heads, the memory of previous tosses is erased. It’s tempting to just apply Laplace’s Rule of Succession here, but that would mean that knowing about the amnesia mechanism gives you no information.
I don’t think that’s an exact representation of our anthropic bias over x-risks, but it does highlight a problem that is easy to underestimate.)
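One way to make that toy model concrete is a quick Monte Carlo sketch. Everything specific here is my own assumption, not part of the original setup: a uniform prior on the heads probability p, a fixed total number of tosses T, and an agent who knows T but only remembers the tosses since the last heads. Among simulated agents who currently remember exactly four tails, the average true p comes out well above the naive Laplace estimate:

```python
import random

# Monte Carlo sketch of the amnesia toy model. The specifics are my
# assumptions: the heads probability p is drawn from a uniform prior,
# T tosses actually happen, and the agent knows T but only remembers
# the tosses since the last heads (the memory wipe).
random.seed(1)
T = 20        # total tosses performed
K = 4         # the agent currently remembers exactly K tails (the "TTTT" case)
TRIALS = 200_000

# Naive Laplace estimate from the remembered record of 0 heads, K tails.
laplace = (0 + 1) / (K + 2)

true_ps = []
for _ in range(TRIALS):
    p = random.random()
    tosses = [random.random() < p for _ in range(T)]  # True = heads
    # The agent's memory is the run of tails at the end of the sequence.
    run = 0
    for t in reversed(tosses):
        if t:
            break
        run += 1
    if run == K:  # condition on remembering exactly K tails
        true_ps.append(p)

avg_p = sum(true_ps) / len(true_ps)
print(f"Laplace estimate from remembered TTTT: {laplace:.3f}")
print(f"average true p among such agents:      {avg_p:.3f}")
```

Under these assumptions the exact posterior mean is 2/(K+3) ≈ 0.286 rather than the Laplace value 1/6 ≈ 0.167, because remembering exactly K tails implies the toss just before them was heads, so the likelihood is p(1−p)^K, not (1−p)^K. In this version, then, knowing about the amnesia mechanism really does carry information; it pushes the estimate up.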
And the second part is: How can the anthropic shadow argument be phrased in a fully bayesian way?
I guess that’s the jackpot, right? I don’t know. But one of the best attacks on this problem I’ve seen so far is the Snyder-Beattie, Ord & Bonsall Nature paper.
Thank you for your answer! I think I agree that there is a difference between the extinction example and the coin example, to do with observer bias, and it seems important. I’m still not sure how to articulate this difference properly, though, or why it should make the conclusion different. It is true that you have perfect knowledge of Q, N, and the final state of the marker in the coin example, but you have the same knowledge in the (idealized) extinction scenario that I described. In the extinction case I supposed that we knew Q, N, and the fact that we haven’t yet gone extinct (which is the analogue of a blue marker).
The real difference I suppose is that in the extinction scenario we could never have seen the analogue of the red marker, because we would never have existed if that had been the outcome. But why does this change anything?
I think you’re right that we could modify the coin example to make it closer to the extinction example, by introducing amnesia, or even just saying that you are killed if both coins ever land heads together. But to sum up why I started talking about a coin example with no observer selection effects present:
In the absence of a complete, consistent formalism for dealing with observer effects, the argument of the ‘anthropic shadow’ paper still appears to carry some force, when it says that the naive estimates of observers will be underestimates on average, and that therefore, as observers, we should revise our naive estimates up by an appropriate amount. However, an argument with identical structure gives the wrong answer in the coin example, where everything is understood and we can clearly see what the right answer actually is. The naive estimates of people who see blue will be underestimates on average, but that does not mean, in this case, that if we see blue we should revise our naive estimates up. In this case the naive estimate is the correct Bayesian one. This should cast doubt on arguments of this form, including the anthropic shadow argument, unless we can properly explain why they apply in one case but not the other, and that is what I am uncertain how to do.
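A small simulation illustrates the point. The setup below is my guess at a minimal version of the coin example (a uniform prior over a per-round “catastrophe” probability q, N rounds, and a marker that ends red iff a catastrophe ever occurred), so treat the specifics as assumptions. Among runs that end blue, the average true q matches the ordinary posterior mean given zero catastrophes, so the naive Bayesian estimate is already calibrated and needs no upward revision:

```python
import random

# Monte Carlo sketch of a guessed minimal version of the coin example:
# the per-round "catastrophe" probability q is drawn from a uniform prior,
# there are N rounds, and the marker ends red if a catastrophe ever occurs,
# blue otherwise. These specifics are my assumptions, not the exact setup.
random.seed(0)
N = 10
TRIALS = 200_000

# The "naive" Bayesian estimate on seeing blue: the posterior mean of q
# given zero catastrophes in N rounds, which for a uniform prior is 1/(N+2).
naive_estimate = 1 / (N + 2)

blue_qs = []
for _ in range(TRIALS):
    q = random.random()                                  # draw q from the prior
    blue = all(random.random() >= q for _ in range(N))   # no catastrophe occurred
    if blue:
        blue_qs.append(q)

avg_true_q = sum(blue_qs) / len(blue_qs)
print(f"naive estimate on seeing blue : {naive_estimate:.3f}")
print(f"average true q among blue runs: {avg_true_q:.3f}")
```

Both numbers land near 1/12 ≈ 0.083: blue-observers who report the ordinary posterior are correct as a group, even though for any fixed q above 1/12 every blue-observer underestimates it, which is exactly the tension described above.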
Thank you for sharing the Nature paper. I will check it out!