I’m sorry for not getting around to responding to this, and may not be able to for some time. But I wanted to quickly let you know that I appreciated both this comment and your post, and both updated me significantly toward your position and away from my Reason 4.
I consider that grantmakers and donors interested in decreasing extinction risk had better focus on artificial intelligence (AI) rather than nuclear war (more).
I would say the case for sometimes prioritising nuclear extinction risk over AI extinction risk is much weaker than the case for sometimes prioritising natural extinction risk over nuclear extinction risk (more).
I get the sense that the extinction risk from nuclear war was massively overestimated in The Existential Risk Persuasion Tournament (XPT) (more).
I have the impression that Toby Ord greatly overestimated tail risk in The Precipice (more).
I believe interventions to decrease deaths from nuclear war should be assessed based on standard cost-benefit analysis (more).
I think increasing calorie production via new food sectors is a less cost-effective way to save lives than measures targeting food distribution (more).
Thanks for the update, Cullen! Relatedly, you may want to check out my post "Nuclear war tail risk has been exaggerated?".