I think the argument was written up formally on the forum, but I’m not finding it. I think it goes like this: if the chance of x-risk is 0.1%/year, the expected duration of humanity is 1,000 years. If you decrease the risk to 0.05%/year, the expected duration is 2,000 years, so you have only added a millennium. However, if you get safe AI and colonize the galaxy, you might get billions of years. But I would argue that if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value to mitigating nuclear war.
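For concreteness, here is a minimal sketch of that arithmetic (my own illustration, not from any forum post; it assumes a constant annual risk, so survival time is geometrically distributed and the expected duration is 1 / annual risk):

```python
# Expected survival time under a constant annual existential risk p is
# 1/p years (geometric distribution).
for p in (0.001, 0.0005):  # 0.1%/year vs. 0.05%/year
    print(f"annual risk {p:.2%} -> expected duration {1 / p:,.0f} years")

# Halving the risk only adds ~1,000 years of expected duration, whereas
# reaching a "safe" state (safe AI, galactic colonization) could mean
# billions of years, which is why the value of the reduction hinges on
# eventually reaching that safe state.
```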
> But I would argue that if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value to mitigating nuclear war.
For clarity’s sake, I don’t disagree with this. It does mean that your argument for the overwhelming value of mitigating nuclear war is still predicated on developing safe AI (or some other way of massively reducing the base rate) at a future date, rather than being a self-contained argument based solely on nuclear war being an x-risk. That is totally fine and reasonable, but a useful distinction to make, in my experience. For example, it would now make sense to compare whether working on safe AI directly, or working on nuclear war in order to increase the number of years we have to develop safe AI, generates better returns per unit of effort. This, in turn, I think will depend heavily on AI timelines, which (at least to me) was not obviously an important consideration for the value of working on mitigating the fallout of a nuclear war!
I should have said develop safe AI or colonize the galaxy, because I think either one would dramatically reduce the base rate of existential risk. The way I think about AI timelines affecting the value of nuclear war mitigation is that if AI comes soon, there are fewer years in which we are actually threatened by nuclear war. This is one reason I only looked out about 20 years in my cost-effectiveness analysis of alternate foods versus AI. I think these risks could be correlated, because one mechanism by which nuclear war affects the far future is worse values ending up in AI (if nuclear war does not collapse civilization).
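To illustrate the timeline point with purely hypothetical numbers (the 0.1%/year rate and the timelines below are my assumptions, not figures from the comments above): if transformative AI arrives in T years and the annual chance of a civilization-ending nuclear war is p, the probability of getting through the danger window is roughly (1 − p)^T, so shorter timelines leave less cumulative risk for nuclear war mitigation to buy down:

```python
# Hypothetical numbers: probability of no civilizational collapse from
# nuclear war over a window of T years before AI (or colonization) cuts
# the base rate, assuming independence across years.
p = 0.001                 # assumed 0.1%/year risk of collapse
for T in (20, 100, 500):  # assumed AI timelines in years
    survive = (1 - p) ** T
    print(f"T = {T:>3} years -> P(no collapse before then) ≈ {survive:.1%}")

# Roughly 2% cumulative risk over 20 years, ~10% over 100 years, and
# ~39% over 500 years, so shorter AI timelines shrink the window in
# which nuclear war mitigation matters for the far future.
```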
I see. Thanks.