Executive summary: In this exploratory post, Dynomight argues that we should modestly lower our estimate of the annual risk of nuclear war given 62 uneventful years, not because of anthropic reasoning about our existence, but because Bayesian updating on survival data—when grounded in an uncertain prior—supports such a revision.
Key points:
- The author contrasts two schools of thought: one that updates beliefs based on survival data (e.g., no nuclear war since 1962), and another that discounts such data due to anthropic bias (“you wouldn’t be here if you hadn’t survived”).
- Through analogies like “Dynomight family dinner” and happy puppy bags, the post illustrates that anthropic concerns are often red herrings—the key is how uncertain your prior is.
- Bayesian reasoning shows that a broad prior (i.e., high uncertainty about the true probability) leads to meaningful updating based on survival data, while a narrow prior resists change (see the sketch after this list).
- In the nuclear war case, the author argues we should have a broad prior due to complex, unpredictable geopolitical dynamics—so the observed 62 peaceful years should slightly reduce our risk estimate.
- The post emphasizes that the absence of disaster doesn’t prove safety but can still rationally shift beliefs—just not drastically unless the peaceful streak continues much longer (e.g., 1,000 years).
- This analysis supports a modest decrease in estimated annual nuclear war risk, not because we exist, but because of ordinary probabilistic reasoning applied to the data we do have.
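To make the broad-vs-narrow contrast concrete, here is a minimal sketch using a conjugate Beta-Bernoulli toy model. This is an illustrative assumption, not the post's own calculation: each year is treated as an independent draw with unknown war probability p, so n peaceful years update a Beta(a, b) prior to Beta(a, b + n). The specific priors Beta(1, 1) and Beta(10, 990) are hypothetical stand-ins for "broad" and "narrow".

```python
# Toy Beta-Bernoulli model of the annual probability p of nuclear war.
# Observing n consecutive peaceful years updates a Beta(a, b) prior to
# Beta(a, b + n), whose mean is a / (a + b + n).

def posterior_mean(a: float, b: float, peaceful_years: int) -> float:
    """Posterior mean of p after observing peaceful_years with no war."""
    return a / (a + b + peaceful_years)

priors = {
    "broad prior  Beta(1, 1)":    (1, 1),     # uniform: p is highly uncertain
    "narrow prior Beta(10, 990)": (10, 990),  # concentrated near p = 0.01
}

for label, (a, b) in priors.items():
    print(f"{label}: prior mean {a / (a + b):.4f}, "
          f"after 62 years {posterior_mean(a, b, 62):.4f}, "
          f"after 1000 years {posterior_mean(a, b, 1000):.4f}")
```

With these illustrative numbers, 62 peaceful years pull the broad prior's mean from 0.5000 down to about 0.0156, while the narrow prior barely moves (0.0100 to about 0.0094); only a much longer streak, such as 1,000 years, cuts the narrow prior's estimate in half (to 0.0050).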
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.