I think this is caused by two factors: 1) longtermism has decided to distinguish between existential risks and global catastrophic risks, and 2) there is a much stronger general culture of denial around nuclear risk than around any other risk.
When Toby Ord, in The Precipice (which I take to have been broadly read), puts AI risk at 1/10 and nuclear war and climate change at 1/1000 each, essentially on the basis that he views the latter two as survivable, it creates a justification to write them off. (This isn't to say this is where the trend began.)
If you talk to most people, they just don't believe nuclear war is likely. As a result, nuclear war and unaligned artificial intelligence are both treated as speculative, except that nuclear war has a long history of not happening. This is of course nonsense, since nuclear weapons exist right now.
Yes 1000% on the cultural factors that have desensitized us to nuclear risk. Tyler Cowen has a nice series of posts out today on this subject: https://marginalrevolution.com/marginalrevolution/2022/08/which-is-the-hingy-est-century.html
I might find time to go back and reread The Precipice and dig into the probabilities you reference. Those seem odd. It's also odd because something that reduces humanity to subsistence levels for a very long time and eliminates ninety-odd percent of the population is absolutely catastrophic. I suppose I'm a hyperbolic discounter at heart: I do think that while we should care about the far future, it's really silly to get into the one-to-one logic that a human a billion years from now should be weighted equally, for decision-making purposes, with one today or ten years from now.
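(For concreteness, a rough sketch of what I mean, with the functional form and the rate k purely illustrative rather than anything from the book: a hyperbolic discounter weights a life at time t by something like
\[ w(t) = \frac{1}{1 + kt}, \]
so a life a billion years out carries essentially no weight, whereas the strict one-to-one view sets w(t) = 1 for every t, counting far-future and present lives equally.)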
I’ll check out the article.
You can find the numbers in the Risk Landscape section. Yeah, it's wonky. It seems very odd to me both A) to be confident that nuclear war won't kill everyone but will only kill 90% of people, and B) to let that strongly influence which risks you worry about.
Also, you might be interested in the submission on many worlds and nuclear risk that I just made to the contest, as I am also begging EA to care about nuclear risk. https://forum.effectivealtruism.org/posts/Gg2YsjGe3oahw2kxE/nuclear-fine-tuning-how-many-worlds-have-been-destroyed?
Thanks for sharing. I’ll check out your post.
Do you know how one might get another copy of The Precipice? I donated mine to a friend.
If you sign up for the 80,000 Hours newsletter, you can get one for free; it is also on Libgen.