Hi Toby,
Can’t we imagine 100 people doing that experiment? People will get different results: some more heads than they “should” and some fewer heads than they “should.” But the sample means will cluster around the real rate of heads. So any observer won’t know whether their result has too many heads or too few. So they go with their naive estimate.
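Here's a quick sketch of the sort of thing I mean (the true heads rate, number of flips, and seed are placeholder assumptions of mine, not part of the original setup):

```python
# Minimal sketch: 100 people each estimate the heads rate from their own flips.
import random

random.seed(0)
TRUE_P, N_FLIPS, N_PEOPLE = 0.3, 20, 100   # assumed placeholder parameters

estimates = []
for _ in range(N_PEOPLE):
    heads = sum(random.random() < TRUE_P for _ in range(N_FLIPS))
    estimates.append(heads / N_FLIPS)

# Individual estimates scatter above and below the true rate, but their
# mean clusters around it, so no one knows which way their own is off.
print(f"true rate:              {TRUE_P}")
print(f"mean of 100 estimates:  {sum(estimates) / N_PEOPLE:.3f}")
```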
With apocalypses, you know by definition you’re one of the observers that wasn’t wiped out. So I do think this reasoning works. If I’m wrong or my explanation makes no sense, please let me know!
Thanks for your reply!
If 100 people do the experiment, the ones who end up with a blue tile will, on average, have fewer heads than they should, for exactly the same reason that most observers will live after comparatively fewer catastrophic events.
But in the coin case that still does not mean that seeing a blue tile should make you revise your naive estimate upwards. The naive estimate is still, in Bayesian terms, the correct one.
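Here is a rough sketch of the first half of that claim. I've had to assume a concrete tile rule (blue iff at most 1 head in 5 flips); the rule in the original setup may differ, but any rule that makes blue tiles likelier after fewer heads behaves the same way, and the true rate and flip count are placeholders:

```python
# Sketch: the blue-tile subgroup underestimates the heads rate on average.
import random

random.seed(1)
TRUE_P, N_FLIPS, N_PEOPLE = 0.3, 5, 100_000   # assumed placeholder parameters

all_estimates, blue_estimates = [], []
for _ in range(N_PEOPLE):
    heads = sum(random.random() < TRUE_P for _ in range(N_FLIPS))
    est = heads / N_FLIPS
    all_estimates.append(est)
    if heads <= 1:                  # assumed rule for getting a blue tile
        blue_estimates.append(est)

print(f"true rate:                          {TRUE_P}")
print(f"average estimate, all observers:    {sum(all_estimates) / len(all_estimates):.3f}")
print(f"average estimate, blue-tile group:  {sum(blue_estimates) / len(blue_estimates):.3f}")
```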
I don’t understand why the anthropic case is different.
In the tile case, the observers will, on average, be correct. Some will get too many heads, some too few, but on average they will be correct. You won’t know whether you should adjust your personal estimate.
In the anthropic case, the observers will, on average, have seen zero apocalypses, no matter how common apocalypses are.
Imagine if in the tile case, everyone who was about to get more heads than average was killed by an assassin and the assassin told you what they were doing. Then when you did the experiment and lived, you would know your estimate was biased.
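A quick sketch of that assassin variant (the true rate, flip count, and my reading of “more heads than average” as “more heads than the expected number” are all placeholder assumptions of mine):

```python
# Sketch: survivors of the assassin have estimates that are biased low,
# and because the assassin announced the rule, each survivor knows it.
import random

random.seed(2)
TRUE_P, N_FLIPS, N_PEOPLE = 0.3, 20, 100_000   # assumed placeholder parameters
EXPECTED_HEADS = TRUE_P * N_FLIPS

survivor_estimates = []
for _ in range(N_PEOPLE):
    heads = sum(random.random() < TRUE_P for _ in range(N_FLIPS))
    if heads > EXPECTED_HEADS:      # "more heads than average": assassinated
        continue
    survivor_estimates.append(heads / N_FLIPS)

print(f"true rate:                        {TRUE_P}")
print(f"average estimate among survivors: {sum(survivor_estimates) / len(survivor_estimates):.3f}")
```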
In the tile case, the observers who see a blue tile are underestimating on average. If you see a blue tile, you then know that you belong to that group, who are underestimating on average. But that still should not change your estimate. That’s weird and unintuitive, but true in the coin/tile case (unless I’ve got the maths badly wrong somewhere).
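For what it's worth, the maths I have in mind is just this, sketched with assumed specifics: a uniform grid prior over the heads rate, 5 flips with 1 head observed, and the same made-up “blue iff at most 1 head” rule as above:

```python
# Sketch: conditioning on the blue tile adds nothing once you've
# conditioned on the flips that determine it.
import numpy as np

N_FLIPS = 5
HEADS_SEEN = 1                                # the particular sequence you observed

p_grid = np.linspace(0.001, 0.999, 999)       # grid over the unknown heads rate
prior = np.ones_like(p_grid) / len(p_grid)    # assumed uniform prior

# Likelihood of your exact flip sequence (1 head, 4 tails) for each candidate p.
lik_flips = p_grid**HEADS_SEEN * (1 - p_grid)**(N_FLIPS - HEADS_SEEN)

# Posterior given the flips alone.
post_flips = prior * lik_flips
post_flips /= post_flips.sum()

# Under the assumed rule, the blue tile is fully determined by the flips you
# already saw (at most 1 head), so P(blue | flips, p) = 1 for every p and the
# extra conditioning multiplies the likelihood by a constant factor of 1.
post_flips_and_blue = prior * lik_flips * 1.0
post_flips_and_blue /= post_flips_and_blue.sum()

print("posterior mean given flips alone:      ", (p_grid * post_flips).sum())
print("posterior mean given flips + blue tile:", (p_grid * post_flips_and_blue).sum())
```

Because the tile colour is fixed by the flips you have already conditioned on, the extra likelihood factor is 1 and the posterior doesn't move.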
I get that there is a difference in the anthropic case. If you kill everyone with a red tile, then you’re right, the observers on average will be biased, because it’s only the observers with a blue tile who are left, and their estimates were biased to begin with. But what I don’t understand is, why is finding out that you are alive any different to finding out that your tile is blue? Shouldn’t the update be the same?
No, because it’s possible to observe either a blue tile or a red tile.
You either observe things (alive) or don’t observe things (not alive).
In the first situation, the observer knows that multiple facts about the world could be observed. Not so in the second case.
I can see that there is a difference between the two cases. What I’m struggling to understand is why that leads to a different answer.
My understanding of the steps of the anthropic shadow argument (possibly flawed or incomplete) is something like this:
You are an observer → We should expect observers to underestimate the frequency of catastrophic events on average, if they use the frequency of catastrophic events in their past → You should revise your estimate of the frequency of catastrophic events upwards.
But in the coin/tile case you could make an exactly analogous argument:
You see a blue tile → We should expect people who see a blue tile to underestimate the frequency of heads on average, if they use the frequency of heads in their past → You should revise your estimate of the frequency of heads upwards.
But in the coin/tile case, this argument is wrong, even though it appears intuitively plausible. If you do the full Bayesian analysis, it leads you to the wrong answer. Why should we trust an argument of identical structure in the anthropic case?