I agree that nuclear war—and even nuclear winter—would be very unlikely to directly cause human extinction. My loose impression is that other EAs who have looked into this agree as well.
However, I’m not sure if it’s good to publicize work on existential risk from nuclear war under this headline, and with this scope. Here is why:
You only discuss whether nuclear war would somewhat directly cause human extinction—i.e. by either immediately killing everyone, or causing everyone to starve within, say, the next 20 years. However, you don’t discuss whether nuclear war could cause a trajectory change of human civilization that makes it more vulnerable to future existential risks. For example, nuclear war might cause an irrecoverable loss of post-industrial levels of technology, which would arguably constitute an existential catastrophe itself (by basically removing the chance of close-to-optimal futures) and would also make humanity more vulnerable to natural extinction risks (e.g. humanity could no longer do asteroid deflection). FWIW, I think the example I just gave is fairly unlikely as well; my point here is just that your post doesn’t tell us anything about such considerations. It would be entirely consistent with all the evidence you present to think that nuclear war is a major indirect existential risk (in the sense just discussed).
For this reason, I in particular disagree “that the greatest existential threat from nuclear war appears to be from climate impacts” (as you say in the conclusion). I think the greatest existential threat from nuclear war may in fact be a negative trajectory change precipitated by ‘the collapse of civilization’, though we don’t really know how likely that is or whether it would in fact be negative on extremely long timescales.
(Note I’m not intending for this to just be a special case of the true but somewhat vacuous general claim that, for all we know, literally any event could cause a negative or positive trajectory change. The point is that the unprecedented damage caused by large-scale nuclear war seems unusually likely to cause a trajectory change in either direction.)
[Less important:] I’m somewhat less optimistic than you about point 3C, i.e. nuclear war planners being aware of nuclear winter. I agree they are aware of the risk. However, I’m not sure they have incentives to care. They might not, if they view “large-scale nuclear war that destroys all our major cities” or “nuclear war that leads to total defeat by an adversary” as essentially the worst possible outcomes, which seems at least plausible to me. Certainly I think they won’t care about the risk as much as a typical longtermist would—from an impartial perspective, even a, say, 1% risk of nuclear winter would be very concerning, whereas it could plausibly be a minor consideration when planning nuclear war from a more parochial perspective. Perhaps even more importantly, even if they did care as much as a longtermist, it’s not clear to me whether the strategic dynamics allow them to adjust their policies. For example, a nuclear war planner may well think that only a ‘countervalue’ strategy of targeting adversaries’ population centers has a sufficient deterrent effect.
So overall I think our epistemic situation is: We know that one type of existential risk from nuclear war is very small, but we don’t really have a good idea of how large total existential risk from nuclear war is. It’s of course fine, and often a good idea for tractability or presentation reasons, to focus on only one aspect of a problem. But given this epistemic situation, I think the costs of spreading a message that can easily be rounded off to “nuclear war isn’t that dangerous [from a longtermist perspective]” are high, particularly since the perception that nuclear war would be extremely bad may be partly causally responsible for the fact that we haven’t yet seen one.
Note I’m not claiming that this post by itself has large negative consequences. No nuclear power is going to change its policies because of an EA Forum post. But I’d be concerned if there were a growing body of EA work with messaging like this. For future public work, I’d feel better if the summary was more like “nuclear war wouldn’t kill every last human within a few decades, but is still extremely concerning from both a longtermist and present-generation perspective” + some constructive implications (e.g. perhaps focus more on how to make post-collapse recovery more likely or go well).
I think I gave the impression that I’m making a more expansive claim than I actually mean to make, and will edit the post to clarify this. The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically. I know most people who’ve examined it know this is wrong, but I wanted that information to be laid out pretty clearly, so someone could get a summary of the argument. I think that’s just the beginning in assessing existential risk from nuclear war, and I really wouldn’t want people to read my post and walk away thinking “nuclear war is nothing to worry about from a longtermist perspective.”
I agree that “We know that one type of existential risk from nuclear war is very small, but we don’t really have a good idea of how large total existential risk from nuclear war is”. I’m planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.
It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity’s potential / eventual extinction. I currently think humans would retain the most significant basic survival technologies following a collapse and then reacquire lost technological capacities relatively quickly. (I discussed this investigation here, though not in depth.) I’m planning to write this up as part of my compounding-risks post or as a separate one.
Agreed that it’s very hard to know the sign of a huge history-altering event, whether it’s a nuclear war or covid.
“The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically.”
This agrees with my impression, and I do think it’s valuable to correct this misconception. (Sorry, I think it would have been better and clearer if I had said this in my first comment.) This is why I favor work with somewhat changed messaging/emphasis over no work.
“It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity’s potential / eventual extinction.”
I’m not sure we disagree. My current best guess is that most plausible kinds of civilizational collapse wouldn’t be an existential risk, including collapse caused by nuclear war. (For basically the reasons you mention.) However, I feel way less confident about this than about the claim that nuclear war wouldn’t immediately kill everyone. In any case, my point was not that I in fact think this is likely, but just that it’s sufficiently non-obvious that it would be costly if people walked away with the impression that it’s definitely not a problem.
“I’m planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.”
This sounds like a very valuable topic, and I’m excited to see more work on it.
FWIW, my guess is that you’re already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it’d be good to describe in detail “here is how this combination of different hazards could kill everyone”]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I’d be happy to review a post prior to publication, though I’m not sure if I’m particularly qualified.)
Yes, I was planning to get review prior to publishing this. In general, when it comes to risks from biotechnology, I’m trying to follow the principles we developed here: https://www.lesswrong.com/posts/ygFc4caQ6Nws62dSW/bioinfohazards. I’d be excited to see, or help workshop, better guidance for navigating information hazards in this space in the future.
“I think I gave the impression that I’m making a more expansive claim than I actually mean to make, and will edit the post to clarify this.”
Have you made these edits yet, or is this still on the to-do list? Having just read the post, I strongly agree with Max’s assessment, and still think readers could very easily round this post’s claims off to “Nuclear war is very unlikely to be a big deal for longtermists”. The key changes that I’d see as valuable would be:
changing the title (maybe to something like “Nuclear war is unlikely to directly cause human extinction”)
explicitly saying something in the introductory part about how the possibilities of nuclear war causing indirect extinction or other existential catastrophes/trajectory changes are beyond the scope of this post
(There may of course also be other changes that would accomplish similar results)
I also do think that this post contains quite valuable info. And I’d agree that there are some people, including in the EA community, who seem much too confident that nuclear war would directly cause extinction (though, like Max, I’m not aware of anyone who meets that description and has looked into the topic much).
So if this post had had roughly those tweaks (or once you make them), I’d think it’d be quite valuable. (Unfortunately, in its present form, I worry that the post might create more confusion than it resolves.)
I’d also be excited to see the sort of future work you describe on compounding risks and recovery from collapse! I think those topics are plausibly important and sorely under-explored.
The part I added was:
“By a full-scale war, I mean a nuclear exchange between major world powers, such as the US, Russia, and China, using the complete arsenals of each country. The total number of warheads today (14,000) is significantly smaller than during the height of the cold war (70,000). While extinction from nuclear war is unlikely today, it may become more likely if significantly more warheads are deployed or if designs of weapons change significantly.”
I also think indirect extinction from nuclear war is unlikely, but I would like to address this more in a future post. I disagree that additional clarifications are needed: I think people made these points clearly in the comments, and anyone motivated to investigate this area seriously can read those. If you want to try to double-crux on why we disagree here, I’d be up for that, though a call might be preferable to save time.