In the tile case, the observers who see a blue tile are underestimating on average. If you see a blue tile, you then know that you belong to that group, who are underestimating on average. But that still should not change your estimate. That's weird and unintuitive, but true in the coin/tile case (unless I've got the maths badly wrong somewhere).
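Here's a quick sanity check of that maths. The exact coin/tile rules aren't restated here, so this is a minimal sketch under one assumed concretization: the heads probability p is drawn uniformly at random, the coin is flipped N times, and the tile is painted blue iff no flip came up heads. The parameter values are arbitrary illustrations.

```python
import random

N = 5            # flips per observer (arbitrary choice)
TRIALS = 200_000

gaps, true_ps = [], []
for _ in range(TRIALS):
    p = random.random()                       # unknown heads probability
    flips = [random.random() < p for _ in range(N)]
    if not any(flips):                        # blue tile iff zero heads (assumed rule)
        freq = sum(flips) / N                 # past-frequency estimate (always 0 here)
        gaps.append(p - freq)                 # how far the estimate falls short
        true_ps.append(p)

# Blue-tile observers underestimate on average: the mean gap is positive.
print("mean shortfall of the frequency estimate:", sum(gaps) / len(gaps))

# But the ordinary Bayesian posterior mean given the same history
# (0 heads in N flips, uniform prior) is 1/(N+2), and it matches the
# actual average of p among blue-tile observers; no further correction.
print("posterior mean given history:", 1 / (N + 2))
print("actual mean of p among blue-tile observers:",
      sum(true_ps) / len(true_ps))
```

Both claims come out in the simulation: the blue-tile group is biased on average, and yet the Bayesian estimate that ignores the tile is already calibrated.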
I get that there is a difference in the anthropic case. If you kill everyone with a red tile, then you’re right, the observers on average will be biased, because it’s only the observers with a blue tile who are left, and their estimates were biased to begin with. But what I don’t understand is, why is finding out that you are alive any different to finding out that your tile is blue? Shouldn’t the update be the same?
No, because it's possible to observe either a blue tile or a red tile.
You either observe things (alive) or don't observe things (not alive).
In the first situation, the observer knows that multiple facts about the world could have been observed. Not so in the second case.
I can see that this is a difference between the two cases. What I'm struggling to understand is why that leads to a different answer.
My understanding of the steps of the anthropic shadow argument (possibly flawed or incomplete) is something like this:
You are an observer → We should expect observers to underestimate the frequency of catastrophic events on average, if they use the frequency of catastrophic events in their past → You should revise your estimate of the frequency of catastrophic events upwards.
But in the coin/tile case you could make an exactly analogous argument:
You see a blue tile → We should expect people who see a blue tile to underestimate the frequency of heads on average, if they use the frequency of heads in their past → You should revise your estimate of the frequency of heads upwards.
But in the coin/tile case, this argument is wrong, even though it appears intuitively plausible. If you do the full Bayesian analysis, that argument leads you to the wrong answer. Why should we trust an argument of identical structure in the anthropic case?
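The cancellation that the full Bayesian analysis turns on can be checked numerically. This is a sketch under the same assumed concretization as above, generalised to any tile rule that depends only on the flip history (here, hypothetically, painted blue with probability tails/N): once the history is fixed, the tile's likelihood factor is the same number for every value of p, so it cancels when the posterior is normalised.

```python
import numpy as np

N, heads = 5, 1                               # an example observed history
tails = N - heads

p = np.linspace(0.001, 0.999, 999)            # grid over the unknown bias
prior = np.ones_like(p)                       # uniform prior, unnormalised

# Posterior over p given the flip history alone.
post_history = p**heads * (1 - p)**tails * prior
post_history /= post_history.sum()

# Posterior given the history AND the blue tile. Under the assumed rule,
# P(blue | history, p) = tails/N is constant in p, so it cancels when
# the posterior is normalised.
post_both = (tails / N) * p**heads * (1 - p)**tails * prior
post_both /= post_both.sum()

print(np.allclose(post_history, post_both))   # True: identical posteriors
print("posterior mean:", (p * post_history).sum())
```

That's why the "you now know you belong to a group that underestimates on average" step contributes no Bayesian update in the tile case: group membership carries no information about p beyond the history you have already conditioned on.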