A case against focusing on tail-end nuclear war risks
Why I think that nuclear risk reduction efforts should prioritize preventing any kind of nuclear war over preventing or preparing for the worst-case scenarios
This is an essay written in the context of my participation in the Cambridge Existential Risk Initiative 2022, where I did research on nuclear risks and effective strategies for mitigating them. It partially takes on the following question (which is one of the cruxes I identified for the nuclear risk field): Which high-level outcomes should nuclear risk reduction efforts focus on?
I argue against two approaches that seem quite prominent in current EA discourse on the nuclear risk cause area: preventing the worst types of nuclear war, and preparing for a post-nuclear war world.
Instead, I advocate for a focus on preventing any kind of military deployment of nuclear weapons (with no differentiation between “small-” or “large-scale” war).
I flesh out three arguments that inform my stance on this question:
The possible consequences of a one-time deployment of nuclear weapons are potentially catastrophic. (more)
It seems virtually impossible to put a likelihood on which consequences will occur, how bad they will be, and whether interventions taken today to prevent or prepare for them will succeed. (more)
De-emphasizing the goal of preventing any nuclear war plausibly has adverse effects, in the form of opportunity costs, a weakening of norms against the deployment of nuclear weapons, and a general moral numbing of ourselves and our audiences. (more)
Setting the stage
Context for this research project
This essay was written in the context of the Cambridge Existential Risk Initiative (CERI) 2022, a research fellowship in which I participated this summer. My fellowship project/goal was to disentangle “the nuclear risk cause area”, i.e. to figure out which specific risks it encompasses and to get a sense for what can and should be done about these risks. I took several stabs at the problem, which I will not go into in this document (they are compiled in this write-up).
The methodology I settled on eventually was to collect a number of crucial questions (cruxes), which need to be answered to make an informed decision about how to approach nuclear risk mitigation. One of the cruxes I identified was about the “high-level outcomes” that people interested in nuclear risk reduction might aim for. I came up with a list of the nuclear-related outcomes that I think could conceivably constitute a high-level goal (it is likely that this list is incomplete and I welcome suggestions for additional items to put on it). These high-level outcomes are found in the dark blue rectangles in Figure 1.
For lack of time and skills, I didn’t do an in-depth investigation to compare and prioritize between these goals. However, I did spend some time thinking about a subset of the goals. In conversations with others at the fellowship, I encountered the view that nuclear weapons are relevant to a committed longtermist only insofar as they constitute an existential risk, and that only certain kinds of nuclear conflict fulfill that criterion, which is why work to reduce nuclear risk (in the EA community) should mainly be targeted at addressing the danger of these “tail-end outcomes” (i.e., the worst kinds of nuclear war, which may cause nuclear winter). I felt quite concerned by this stance and what I perceive to be its fragile foundations and dangerous implications. In particular, I was shaken by the claim that “small-scale nuclear conflict essentially doesn’t matter to us”. This essay is an attempt to spell out my disagreements with and concerns about this view, and to make the case that preventing any kind of nuclear war should be a core priority for effective altruists (or longtermists) seeking to work on nuclear risk reduction.
Two terminological clarifications, used throughout this essay:
“Using” nuclear weapons: a broad phrase that refers to any purposive action involving nuclear weapons, which includes using the possession of nuclear weapons as a threat and deterrent
“Detonating” nuclear weapons: this refers to any detonation of nuclear weapons, including accidental ones and those that happen during tests as these weapons are developed; the phrase by itself does not imply that nuclear weapons are used in an act of war and it says nothing about the place and target of the detonation
Spelling out the argument
The main thrust of my argument is that I contest the feasibility of present-day attempts either to prevent the cascading consequences of a one-time deployment of nuclear weapons or to prepare for those consequences. Because of this, I argue that it is dangerously overconfident to focus on those two outcomes (preventing the worst case; preparing for the world after) while deprioritizing the goal of preventing any kind of nuclear war. I point to the unpredictable consequences of a one-time deployment of nuclear weapons, and to the potentially far-reaching damage and risk they may bring, to assert that “small-scale” nuclear conflict does matter from a longtermist perspective (as well as from many other ethical perspectives).
Below, I spell out my argument by breaking it up into three separate reasons, which together motivate my conclusion regarding which high-level outcome to prioritize in the nuclear risk reduction space.
Reason 1: The possible consequences of a one-time deployment of nuclear weapons are potentially catastrophic
While I would argue that the sheer destruction wrought by the detonation of a single nuclear weapon is sufficient grounds for caring about the risk and for attempting to reduce it, I understand and accept that this is less obvious from a scope-sensitive moral perspective (such as total utilitarianism). Such a perspective recognizes that there are many incredibly important problems and risks in the world, that we cannot fix all of them at once, and that we are therefore obliged to prioritize based on just how large the risk posed by each problem is (and on how good our chances are of reducing it). I will here make the case that the consequences of a one-time deployment of nuclear weapons are large enough in expectation to warrant the attention of a cause-prioritizing consequentialist, especially one who accepts the ethical argument of longtermism. I assume that such a (longtermist) consequentialist considers a problem sufficiently important if it contributes significantly to existential risk.
There are several plausible ways in which a one-time use of nuclear weapons can precipitate existential risk:
Escalation / Failure at containing nuclear war
A relatively immediate consequence of a first deployment of nuclear weapons, or of a “small-scale” nuclear conflict, is that it can escalate into a larger exchange of attacks, with no obvious or foolproof stopping point.
The first deployment of nuclear weapons can also lead to large-scale nuclear war more indirectly, by setting a precedent, weakening strong normative anti-use inhibitions among decision-makers, and thus making the use of these weapons in future conflicts more likely.
In either case, a first deployment of nuclear weapons would then be a significant causal factor in larger-scale deployment, thereby contributing to the risk of nuclear winter (which itself has some chance of constituting an existential catastrophe).
Erosion of norms against violence
The one-time deployment of nuclear weapons could severely upset norms against violence in the global system (and/or in specific localities).
It could thus be a significant causal factor in increasing the incidence and severity of violent conflict globally.
In a worst case scenario, such conflict could occur at a scale that induces civilizational collapse even if no further nuclear weapons are deployed.
Societal shocks and weakened resilience
The one-time deployment of nuclear weapons would plausibly lead to political, economic, social, and cultural/normative shocks that could cause turmoil, instability, and a breakdown or weakening of societies’ abilities to deal with, prepare for, and respond to pressing and long-term challenges (such as runaway climate change, the development and control of dangerous technologies, etc.).
One more specific example/scenario of these effects would be: The one-time deployment of nuclear weapons makes people more suspicious and distrusting of other countries and of efforts for bi- and multilateral cooperation, thus exacerbating great power conflict, fueling technological arms race dynamics, decreasing the chance that revolutionary tech like AGI/TAI is developed safely, and increasing the chance of destructive conflict over the appropriation and use of such tech.
Reason 2: It seems virtually impossible to put a likelihood on any of these consequences, or on the chance that interventions taken today to prevent or prepare for them will succeed
How would a one-time deployment of nuclear weapons today influence state leaders’ decisions to acquire, modernize, and use nuclear weapons in the future? Would it lead to an immediate retaliation and escalation, or would there be restraint? Would the vivid demonstration of these weapons’ destructive power reinforce emotions, norms, and reasons that deter their use in war, or would the precedent break a taboo that has heretofore prevented the intentional deployment of nuclear weapons? How would such an event affect deterrence dynamics (incl. the credibility of retaliatory threats, trust in each party’s rational choice to avoid mutually assured destruction, etc.)? How would it affect proliferation aspirations of current non-nuclear weapons states?
How would a deployment of nuclear weapons influence the current global order, the relations and interactions between nation-states and non-state actors, and the global economy? How would it affect cultural norms and values, as well as social movements and civil society across the world? How would all this affect efforts to tackle global challenges and risks? How would it affect the resilience of global and regional communities and their ability to respond to crises?
I consider these questions impossible to answer with any degree of certainty. Why? Because we have close to no relevant empirical data from which to infer the probabilities of different consequences of a one-time nuclear deployment, nor can these probabilities be derived logically from reliable higher-order principles or theories.
Statements about the causal effects of interventions face similarly severe challenges: Even if we were able to make a decent guess at what the world after a one-time deployment of nuclear weapons would look like, I don’t think we have much data or understanding to inform decisions meant to prepare for such a world. In addition, such preparatory or preventive interventions can’t be assessed while or after they are implemented, since there is no feedback that would give us a sense of how effective the interventions are.
Reason 3: De-emphasizing the goal of preventing any nuclear war plausibly has adverse effects
I’m thinking about two channels through which a focus on the tail-ends of nuclear war risk can have the adverse effect of making the deployment of nuclear weapons more likely: opportunity costs, and a weakening of the normative taboo against nuclear weapons employment. In an expansion of the latter point, I’m also concerned about the more general desensitizing effect that such a focus can bring.
Opportunity costs
The first channel is probably obvious and doesn’t require much additional explanation. If resources (time, political capital, money) are spent narrowly on preventing or preparing for the worst-case outcome of a nuclear conflict, those same resources cannot be spent on reducing the probability of nuclear war per se. Given that the resources available to address nuclear risks are seriously limited in our present world (especially after the decision by one of the biggest funders in this space, the MacArthur Foundation, to phase out its support by the end of 2023; see Kimball 2021), such diversion of available means would arguably be quite significant and could seriously impair our overall efforts to prevent nuclear war.
Weakening of the nuclear taboo
The second channel is a little less direct and requires acceptance of a few assumptions:
First, that “nuclear war is being and has been averted at least in part by norms that discourage leaders of nuclear weapons states from making (or from seriously contemplating) the decision to use nuclear weapons against ‘enemy’ targets, [...] either [because] decision-makers feel normative pressure from domestic and/or global public opinion, or [because] they have internalized these norms themselves, or both.”
Second, that these norms are affected by the way in which authoritative voices talk about nuclear weapons, and that they are weakened when authoritative voices start to use diminutive language to describe some forms of nuclear war (describing them as minor, small-scale, or limited) as well as when they rank different types of nuclear war and downplay the danger of some types by contrasting their expected cost with the catastrophic consequences of the worst-case outcomes.
Third, that the discourse of the policy research community is an authoritative voice in this space, and
fourth, that effective altruists and effective altruist organizations can contribute to and shape this authoritative discourse.
What follows from these assumptions is that research that makes the case for prioritizing worst-case outcomes (by “demonstrating” how working on them is orders of magnitude more important than working on “smaller nuclear war issues”) could have the unintended negative side effect of increasing the probability of a nuclear first strike, because such research could weaken normative barriers to the deployment of nuclear weapons.
Moral numbing
In addition to the concern that focusing on worst-case outcomes can end up increasing nuclear war risk (by increasing the probability of weapons deployment), I am worried that such a focus, and the language surrounding it, has a morally desensitizing effect. I feel a stark sense of unease (or, at times, terror) when I hear people talk nonchalantly about “unimportant small-scale nuclear war” or, to name a more specific example, “the likely irrelevance of India-Pakistan war scenarios”; when I read long articles that calculate the probabilities of different death counts in the event of nuclear escalation with the explicit intention of figuring out whether this risk is worthy of our attention; or when I listen in on discussions about which conditions would make the deployment of nuclear weapons a “rational choice”. These experiences set off bright warning lights before my inner eye, and I feel strongly compelled to speak out against the use and promulgation of these types of arguments and reasoning.
Maybe the reason for this intuitive response is that I’ve been taught—mostly implicitly, through the culture and art I encountered while growing up in a progressive, educated slice of Austrian society—that it is this kind of careless, trivializing language, and a collective failure to stand up and speak out against it, that have preceded and (at least partially) enabled the worst kinds of mass atrocity in history. I don’t have full clarity on my own views here; I find myself unable to pin down what I believe the size and probability of the problem (the desensitizing effect of this type of speech) to be, nor can I meticulously trace where my worry comes from and which sources, arguments, and evidence inform it. For the last months (throughout and after the summer fellowship), I’ve been trying to get a better understanding of what I think and to mold my opposition into a watertight or at least comprehensible argument, and I have largely failed. And yet I continue to believe fairly strongly that there is harm and danger in the attitudes and conversations I witnessed. I’m not asking anyone to defer to that intuitive belief, of course; what I ask you to do is to reflect on the feelings I describe, and I’d also encourage you not to be instantly dismissive if you share similar concerns without being able to back them up with a rational argument.
But even if you think the last paragraph betrays some failing of my commitment to rational argument and reason, I hope that it doesn’t invalidate the entire post for you. Though I personally assign substantial weight to the costs (and risks) of “moral numbing”, I think that the overall case I make in this essay remains standing if you entirely discard the part about “desensitizing effects”.
Other pieces I wrote on this topic
Disentanglement of nuclear security cause area_2022_Weiler: written prior to CERI, as part of a part-time and remote research fellowship in spring 2022.
List of useful resources for learning about nuclear risk reduction efforts: This is a work-in-progress; if I ever manage to compile a decent list of resources, I will insert a link here.
How to decide and act in the face of deep uncertainty?: This is a work-in-progress; if I ever manage to bring my thoughts on this thorny question into a coherent write-up, I will insert a link here.
References
Atkinson, Carol. 2010. “Using Nuclear Weapons.” Review of International Studies 36 (4): 839–851. https://doi.org/10.1017/S0260210510001312.
Bostrom, Nick. 2013. “Existential Risk Prevention as Global Priority.” Global Policy 4 (1): 15–31. https://doi.org/10.1111/1758-5899.12002.
CERI (Cambridge Existential Risk Initiative). 2022. “CERI Fellowship.” CERI. 2022. https://www.cerifellowship.org/.
Clare, Stephen. 2022. “Modelling Great Power conflict as an existential risk factor”, subsection: “Will future weapons be even worse?” EA Forum, February 3. https://forum.effectivealtruism.org/posts/mBM4y2CjfYef4DGcd/modelling-great-power-conflict-as-an-existential-risk-factor#Will_future_weapons_be_even_worse_.
Cohn, Carol. 1987. “Sex and Death in the Rational World of Defense Intellectuals.” Signs 12 (4): 687-718. https://www.jstor.org/stable/3174209.
EA Forum. n.d. “Disentanglement Research.” In Effective Altruism Forum. Centre for Effective Altruism. Accessed October 9, 2022. https://forum.effectivealtruism.org/topics/disentanglement-research.
———. n.d. “Great Power Conflict.” In Effective Altruism Forum. Centre for Effective Altruism. Accessed October 9, 2022. https://forum.effectivealtruism.org/topics/great-power-conflict.
———. n.d. “Longtermism.” In Effective Altruism Forum. Centre for Effective Altruism. Accessed October 9, 2022. https://forum.effectivealtruism.org/topics/longtermism.
Hilton, Benjamin, and Peter McIntyre. 2022. “Nuclear War.” 80,000 Hours: Problem Profiles (blog). June 2022. https://80000hours.org/problem-profiles/nuclear-security/#top.
Hook, Glenn D. 1985. “Making Nuclear Weapons Easier to Live With: The Political Role of Language in Nuclearization.” Bulletin of Peace Proposals 16 (1), January 1985. https://doi.org/10.1177/096701068501600110.
Kaplan, Fred. 1991 (1983). The Wizards of Armageddon. Stanford, Calif.: Stanford University Press. https://www.sup.org/books/title/?id=2805.
Ladish, Jeffrey. 2020. “Nuclear War Is Unlikely to Cause Human Extinction.” EA Forum (blog). November 7, 2020. https://forum.effectivealtruism.org/posts/mxKwP2PFtg8ABwzug/nuclear-war-is-unlikely-to-cause-human-extinction.
Larsen, Jeffrey A., and Kerry M. Kartchner, eds. 2014. On Limited Nuclear War in the 21st Century. Stanford University Press. https://doi.org/10.1515/9780804790918.
LessWrong. n.d. “Motivated Reasoning.” In LessWrong. Accessed November 22, 2022. https://www.lesswrong.com/tag/motivated-reasoning.
Rodriguez, Luisa (Luisa_Rodriguez). 2019. “How Bad Would Nuclear Winter Caused by a US-Russia Nuclear Exchange Be?” EA Forum (blog). June 20, 2019. https://forum.effectivealtruism.org/posts/pMsnCieusmYqGW26W/how-bad-would-nuclear-winter-caused-by-a-us-russia-nuclear.
Todd, Benjamin. 2017 (2022). “The case for reducing existential risks.” 80,000 Hours, first published in October 2017, last updated in June 2022. https://80000hours.org/articles/existential-risks/.
Footnotes
I don’t know how common or prominent this view is in the EA community. Some info to get a tentative sense of its prominence: One person expressed the view explicitly in a conversation with me and suggested that many people on LessWrong (a proxy for the rationalist community, which has significant overlaps with EA?) share it. A second person expressed the view in more cautious terms in a conversation. In addition, over the summer several people asked me and other nuclear risk fellows about the probability that nuclear war leads to existential catastrophe, which might indicate some affinity to the view described above. I would also argue that articles like this one (Ladish 2020) on the EA Forum seem implicitly based on this view.
Note that the reasons I spell out in this post can, in some sense, be accused of motivated reasoning. I had an intuitive aversion against focusing on tail-end nuclear risks (and against de-emphasizing the goal of preventing any kind of nuclear deployment) before I came up with the supporting arguments in concrete and well-formulated terms. The three reasons I present thus came about as a result of me asking myself “Why do I think it’s such a horrible idea to focus on the prevention of and preparation for the worst case of a nuclear confrontation?”
While this is not (or should not be) sufficient grounds in itself for readers to discount my arguments, I think it constitutes good practice to be transparent about where our beliefs might come from (as far as that is possible), and I thank Oscar Delaney for raising the point in the comments and thus prompting me to add this footnote.
I owe the nuance to distinguish between “using”, “detonating”, and “deploying” nuclear weapons to conversations with academics working on nuclear issues. It is also recommended by Atkinson 2010: “Whether used or not in the material sense, the idea that a country either has or does not have nuclear weapons exerts political influence as a form of latent power, and thus represents an instance of the broader meaning of use in a full constructivist analysis.”
In a previous draft, I wrote “use nuclear weapons in war” to convey the same meaning; I tried to replace this throughout the essay, but the old formulation may still crop up and should be treated as synonymous with “deploying” or “employing nuclear weapons”.
The immediate and near-term consequences of a nuclear detonation are described in, for instance, Hilton and McIntyre 2022.
Containing nuclear war has been discussed as a strategic goal since the very early decades of the nuclear age (i.e., from the 1950s onwards). These discussions have always featured a standoff between those who stress the merits and necessity of thinking about “smart nuclear war-fighting” (i.e., keeping nuclear war contained to a “small scale”) and those who stress its futility and adverse consequences; my arguments above stand in the tradition of those who contend that such thinking and strategizing creates more harm than good. For a more comprehensive treatment of all sides of the debate, see the first part of On Limited Nuclear War in the 21st Century, edited by Larsen and Kartchner (2014), and for a historical account of the first time the discussion arose, see Chapter 14 of Kaplan’s Wizards of Armageddon (1991 (1983)).
Rodriguez (2019) and Ladish (2020) provide extensive analyses of the effects of nuclear wars of different magnitudes, admitting the possibility that nuclear winter could cause human extinction but ultimately assessing it as rather low. Both of these are posts on the EA Forum; I have not looked at the more academic literature on nuclear winter and won’t take a stance on exactly how likely it is to cause an existential catastrophe.
This is very similar to, and somewhat inspired by, Clare’s illustration of “Pathways from reduced cooperation and war to existential risk via technological disasters” in a post that discusses great power conflict as an existential risk factor (Clare 2022).
A possible objection here could be that a similar level of uncertainty exists for the goal of preventing a one-time deployment of nuclear weapons in the first place. I agree that we also face disconcertingly high uncertainty when attempting to prevent any kind of nuclear war (see the non-conclusive conclusions of my investigation into promising strategies to reduce the probability of nuclear war as evidence of how seriously I take that uncertainty). However, I would argue that interventions to reduce the likelihood of any nuclear war are at least somewhat more tractable than those that aim to foresee and prepare for the consequences of nuclear weapons deployment. This is because the former at least have the present world—which we have some evidence of and familiarity with—as their starting point, whereas the latter attempt to intervene in a world which is plausibly significantly different from the one we inhabit today. In other words: The knowledge that would be relevant for figuring out how to prevent the worst kinds of nuclear war, or to prepare for a post-nuclear war world, is particularly unattainable to us at the present moment, simply because we have so little experience with events that could give us insights into what the consequences of a break in the nuclear taboo would be.
Unless reducing the probability of any kind of nuclear war is viewed as the best available means for preventing or preparing for the worst-case outcome, which is what I argue for in this essay.
I put this in quotation marks because the lines are copied directly from this investigative report I wrote on reducing the probability of nuclear war (in other words, I’m shamelessly quoting myself here).
For a more extensive discussion and defense of the claim that language affects nuclear policy, I point the reader to the literature of “nukespeak”. I would especially recommend two articles on the topic from the 1980s: Hook 1985, “Making Nuclear Weapons Easier to Live With. The Political Role of Language in Nuclearization” (download a free version here: pdf), and Cohn 1987, “Sex and Death in the Rational World of Defense Intellectuals”.
In the case of discourses on nuclear weapons, this point has been made, for instance, by Cohn 1987. In addition, authors have argued that the ideas and discourse of the (policy) research community in the field of International Relations more broadly have a significant influence on the perceptions and actions of policymakers (e.g., Smith 2004 and his discussion of how International Relations scholars have been shaping dominant definitions of violence and, through that, have had an influence on policies to counter it).