This is a really interesting topic.
I believe what you are describing here is the ‘Anthropic Shadow’ effect, which was described in this Bostrom paper: https://nickbostrom.com/papers/anthropicshadow.pdf
From what I can tell, your arguments are substantially the same as those in the paper, although I could be wrong.
Personally I’ve become pretty convinced that the anthropic shadow argument doesn’t work. I think if you follow the anthropic reasoning through properly, a long period of time without a catastrophe like nuclear war is strong Bayesian evidence that the catastrophe rate is low, and I think this holds under pretty much whatever method of anthropic reasoning you favour.
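To make the Bayesian point concrete, here’s a minimal sketch in Python (the uniform prior, the 70-year window, and the rate grid are all illustrative assumptions on my part, not figures from any paper):

```python
import numpy as np

# Illustrative assumption: a uniform prior over annual catastrophe
# rates between 0% and 20% per year.
rates = np.linspace(0.0, 0.2, 201)
prior = np.ones_like(rates) / len(rates)

# Observation: no catastrophe for N consecutive years.
# The likelihood of surviving N years at annual rate r is (1 - r)^N.
N = 70  # illustrative window, e.g. the nuclear era so far
likelihood = (1.0 - rates) ** N

# Bayes' rule: posterior ~ likelihood * prior (then normalise).
posterior = likelihood * prior
posterior /= posterior.sum()

# Compare prior and posterior mass on 'low' rates (< 2% per year).
low = rates < 0.02
print(f"P(rate < 2%) under the prior:     {prior[low].sum():.2f}")
print(f"P(rate < 2%) under the posterior: {posterior[low].sum():.2f}")
```

Under these toy assumptions, the catastrophe-free stretch alone moves most of the prior mass onto low annual rates, which is the sense in which I mean ‘strong Bayesian evidence’.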
I spelled out my argument in an EA forum post recently, so I’ll link to that rather than repeating it here. It’s a confusing topic and I’m not very sure of myself, so would appreciate your thoughts on whether I’m right, wrong, or whether it’s actually independent of what you’re talking about here: https://forum.effectivealtruism.org/posts/A47EWTS6oBKLqxBpw/against-anthropic-shadow
This paper doesn’t use anthropic shadow as its framing; it looks at the issue through the lens of the fine-tuning argument rather than through the methodology of the anthropic shadow paper. I didn’t bring in anthropic shadow because I didn’t want to parse exactly how my view was the same as or different from it, and I figured that discussing an additional paper would add unneeded complexity.
Notably, my paper uses close calls that seem to have been avoided only by luck as evidence that risk is higher than it appears, and then explains our luck through our position as observers, rather than simply using our position as observers as evidence that risk is higher than it appears.
I am curious to know how close calls avoided only through luck, like the Damascus missile explosion, affect your assessment. I will read and respond to your paper. (EDIT: I left a comment)
I’ve replied to your comment on the other post now.
I don’t want to repeat myself here too much, but my feeling is that explaining our luck in close calls using our position as observers does have the same problems that I think the anthropic shadow argument does.
It was never guaranteed that observers would survive until now, and the fact that we have is evidence of a low catastrophe rate.
This is why this paper assumes there are an arbitrarily large number of worlds. If there are arbitrarily many worlds, then any event that is possible at all is guaranteed to occur in some world (see the sketch below).
To formalize my claim: if
1. it is possible for observers to survive until now, and
2. there are an arbitrarily large number of worlds,
then all possibilities, including observers surviving until now, will occur in some world.
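As a quick numerical sanity check on this (the per-world survival probability p and the independence of worlds are purely illustrative assumptions, not claims from the paper):

```python
# Illustrative assumptions: worlds are independent and each survives
# the whole period with the same fixed probability p > 0. Both p and
# the world counts below are made up for illustration.
p = 1e-6

for n in (10**6, 10**7, 10**8):
    # P(at least one of n independent worlds survives) = 1 - (1 - p)^n
    at_least_one = 1.0 - (1.0 - p) ** n
    print(f"{n:>11,} worlds: P(some world survives) = {at_least_one:.6f}")
```

This is why ‘arbitrarily large’ does the work in premise 2: for any fixed p > 0, the quantity 1 - (1 - p)^n can be pushed as close to 1 as you like by taking n large enough.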
Of course, if you don’t believe in many worlds, this argument remains valid but isn’t sound. This is the same line of argument that allows many-worlds views to block the inference to God from universal fine-tuning. Do you have a view on the fine-tuning cases?
I will respond to your comment on the other post. We can move into DMs if you are interested in discussing this further, as that would consolidate the conversation.