(I have not read the relevant papers)
My own take is that anthropic shadow seems like a pretty weak argument for things like asteroids, supervolcanoes, or (arguably) nukes in the world we live in, because on our current scientific understanding there haven't been many near-misses for total extinction, and possibly none at all.
I think the anthropics argument can, however, be mostly rescued by: "Of observers that otherwise naively seem very similar to us, if they lived in an environment with a much higher background risk of asteroids/supervolcanoes/etc., their observations would not look very different from those of our ancestors." Thus, we cannot use past examples of failed apocalypses as strong Bayesian evidence about future categories of apocalypse, because we are anthropically selected to come from the sample of observers whose past assessments of high risk did not pan out.
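A minimal way to formalize this selection effect (my own notation and a sketch of the reasoning above, not something taken from the literature): let $E$ = "our ancestors observed no extinction-level catastrophe," $R$ = "the background risk was high," and $O$ = "we exist as observers." If observers only arise in worlds where no extinction occurred, then conditional on $O$ the observation $E$ is nearly guaranteed under both hypotheses, so the likelihood ratio is close to 1 and Bayes' rule barely moves the prior on $R$:

$$\frac{P(R \mid E, O)}{P(\lnot R \mid E, O)} \;=\; \frac{P(E \mid R, O)}{P(E \mid \lnot R, O)} \cdot \frac{P(R \mid O)}{P(\lnot R \mid O)} \;\approx\; 1 \cdot \frac{P(R \mid O)}{P(\lnot R \mid O)}.$$

The contested step, of course, is whether conditioning on $O$ in this way is legitimate.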
EDIT: Rereading this argument, I’m worried that it’s “too clever.”
See also related notes by Ben Garfinkel (though not focused on anthropics specifically).