I should have said develop safe AI or colonize the galaxy, because I think either one would dramatically reduce the base rate of existential risk. The way I think about how AI timelines affect the value of nuclear war mitigation is that if AI comes soon, there are fewer years during which nuclear war actually threatens us. This is one reason I only looked out about 20 years in my cost-effectiveness analysis of alternate foods versus AI. I think these risks could be correlated, because one mechanism of far-future impact of nuclear war is worse values ending up in AI (if nuclear war does not collapse civilization).
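To make the "fewer years of exposure" point concrete, here is a minimal sketch of the arithmetic: if transformative AI ends the period in which nuclear war matters for the far future, then earlier AI timelines shrink the expected number of exposed years, and with it the value of mitigation. All numbers and the linear-ramp timelines below are hypothetical placeholders for illustration, not estimates from my analysis.

```python
def expected_risk_years(horizon_years: int, p_ai_arrived_by: list[float]) -> float:
    """Expected number of years we remain exposed to nuclear risk,
    treating transformative AI arrival as ending the exposure window.

    p_ai_arrived_by[t] = cumulative probability AI has arrived by year t (hypothetical).
    """
    return sum(1.0 - p_ai_arrived_by[t] for t in range(horizon_years))

# Hypothetical timelines over a 20-year horizon: "short" ramps linearly to
# ~95% cumulative arrival probability, "long" to ~24%. Purely illustrative.
horizon = 20
short_timelines = [min(1.0, 0.05 * t) for t in range(horizon)]
long_timelines = [min(1.0, 0.0125 * t) for t in range(horizon)]

print(f"Expected exposed years, short timelines: {expected_risk_years(horizon, short_timelines):.1f}")
print(f"Expected exposed years, long timelines:  {expected_risk_years(horizon, long_timelines):.1f}")
```

Under these made-up inputs the short-timelines case gives roughly 10 expected exposed years versus roughly 18 for long timelines, which is the sense in which sooner AI reduces how much nuclear war mitigation buys you over that 20-year window.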