Modelling responses to changes in nuclear risk

Modelling risk responses

There are two ways we can think about reducing nuclear risk: we can reduce the probability that there’s a nuclear war, and we can reduce the harm of a nuclear war if one does occur. Counter-intuitively, both of these types of risk reduction can increase the overall probability of nuclear war by changing the behaviour of actors.

When deciding what to do, a state is trading off the expected harm from nuclear war against the benefits of achieving a goal. For instance, the Soviet Union desperately wanted to take West Berlin, but decided not to because the risk of it leading to war was too high. When we reduce the risk of nuclear war, either by making the war less destructive or by reducing the probability that something like an invasion of Berlin leads to nuclear war, we reduce the cost of invading Berlin. If the only way nuclear war could start were states deliberately taking actions that carry some chance of leading to war, then reducing nuclear risk would always push risk back up, because it makes it more likely that it’s worthwhile for states to take aggressive actions that risk nuclear war.
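
As a rough sketch of this tradeoff (the notation here is mine, not taken from any particular formal model): a state takes an aggressive action roughly when

$$B > p \cdot H,$$

where $B$ is the benefit of achieving the goal (taking West Berlin, say), $p$ is the probability that the action leads to nuclear war, and $H$ is the expected harm of that war. Anything that lowers $p$ or $H$ shrinks the right-hand side, so a larger set of aggressive actions clears the threshold.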

There are two reasons why reducing the expected harm of nuclear war could still decrease the overall risk of nuclear war. Firstly, some of the risk of nuclear war doesn’t depend on states taking specific, risky actions. For instance, a large fraction of the risk of nuclear war probably comes from accidental use. Reducing the total yield that nuclear states possess reduces the harm of a nuclear war that starts accidentally, for instance via an early warning system failure. Secondly, the model of states deliberately choosing actions can break down. I think this is pretty common in professionalised militaries. For instance, the Soviet troops stationed in Cuba during the missile crisis decided unilaterally to shoot down an American spy plane in Cuban airspace.

We can generalise this model by modelling states as expected utility maximisers, choosing between a large range of actions, all of which carry some level of nuclear risk. If we reduce the total probability of nuclear war from the outside, it’s unlikely that this new level of nuclear risk will be optimal for the state. Unless the state is already minimising nuclear risk as much as it possibly can, it will be willing to increase nuclear risk a bit in exchange for meeting some of its other goals. This model is most obviously applicable to a budget: there’s some amount of nuclear risk reduction that the state will literally pay for, and that amount can fall in response to risk reduction from elsewhere. Another example is the always-never dilemma: a state wants to always be able to respond to a real nuclear strike while never responding to a false positive. There’s an inherent tradeoff between the two, and in theory the specific false positive rate a state accepts is an optimisation decision.
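
To make the expected-utility version concrete, here is a minimal sketch in Python. The action names, values, and probabilities are invented for illustration; the only point is that lowering the harm of nuclear war can shift the utility-maximising choice to a riskier action, so the overall probability of war can go up.

```python
# Minimal sketch of a state as an expected utility maximiser.
# Each action has an illustrative value to the state and an
# illustrative probability of triggering nuclear war.
ACTIONS = [
    ("do nothing",        0.0, 0.01),
    ("limited pressure",  3.0, 0.05),
    ("invade Berlin",    10.0, 0.20),
]

def best_action(harm_of_nuclear_war):
    """Pick the action maximising value minus expected nuclear harm."""
    return max(ACTIONS, key=lambda a: a[1] - a[2] * harm_of_nuclear_war)

# With a very harmful war the state stays cautious (probability of war 0.01)...
print(best_action(harm_of_nuclear_war=100))  # ('do nothing', 0.0, 0.01)

# ...but with the harm of war much reduced, the aggressive action becomes
# worthwhile and the probability of war rises to 0.20.
print(best_action(harm_of_nuclear_war=40))   # ('invade Berlin', 10.0, 0.2)
```

The budget case can be fit into the same structure by treating “pay for some extra risk reduction” as one of the available actions.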

An important case not covered so far is when there’s an increase in the effectiveness of state actions to reduce nuclear risk. For instance, improving the quality of diplomacy makes it more worthwhile to attempt diplomacy. This could still increase nuclear risk by making it less costly to take actions which risk nuclear war, but if states were going to take the risky action anyway, then it strictly reduces nuclear risk.

Policy implications

The policy implications of these considerations shouldn’t be taken too seriously: this type of rational actor modelling is just one modelling approach, and one of the key findings to come out of Tetlock’s forecasting research is that we should use multiple models to try to understand social phenomena.

With that being said, I think these considerations imply that, all else equal, we should push for some policies over others. The most important of these considerations is mostly just a rewording of the risks of weakening deterrence: a reduction in nuclear risk could incentivise actions which increase nuclear risk. The direction this consideration pushes in depends on estimates of the risk of states taking risky actions quasi-accidentally, the risk of truly accidental nuclear war, and how close states are to the threshold where they’re willing to take actions which risk nuclear war.

To be concrete, what would have to change for the US to be willing to defend Taiwan? If the US would essentially never defend Taiwan while China has a secure second strike, and there’s a high chance of quasi-accidental conflict with China, then reducing the harm of that war, or reducing the chance that such a conflict escalates to full nuclear war, would be very valuable. However, if the actions the US military is willing to take are quite responsive to the chance of nuclear war, then this model pushes towards focusing on minimising the risk of accidental nuclear war instead.

A final implication of this model is that, all else equal, we should aim to increase the effectiveness of current efforts by states to reduce nuclear risk rather than trying independently to reduce nuclear risk, as the former increases state spending on nuclear risk reduction while the latter decreases it. This could look like improving the quality of people working on nuclear risk reduction within the government rather than building fallout shelters.

How good is this model?

I’m pretty uncertain about how well this model describes reality and, critically, how close we are to thresholds where a change in the expected harm of a nuclear war leads to states taking on more risk.

I think this model most clearly breaks down in a budgetary context. My current read on nuclear history is that the interests of the different branches of the US military were the dominant factor in deciding how money was spent. One could imagine that the defence department decided centrally how money should be spent by comparing how much, for instance, new submarines reduced the risk of a nuclear attack on the US against how much, for instance, new tanks improved the ability to fight proxy wars with the Soviet Union. However, I think a better model is that each service wanted more money and was willing to use whatever arguments were necessary to get it, and the final allocation came down to power and influence.[1]

I think this model does better when looking at whether military actions are worth the risk of a nuclear strike. I think we’re well above the threshold where a reduction in the number of people killed in a nuclear war has any effect on the willingness of states to engage in one. There are really clear quotes from Bundy and McNamara on this (Bundy was JFK’s national security advisor and McNamara his Secretary of Defense), saying that the possibility of a single nuclear weapon hitting the US was close to an absolute deterrent, and scope insensitivity should make us think this is likely true.

On the other hand, my sense is that changes in the probability that an action would lead to war did have an effect on military actions. I think the clearest example of this is Soviet behaviour in Berlin. In 1961 the Soviet Union attempted to take West Berlin. Prior to the Berlin crisis of 1961, the Soviets thought that the Americans believed the Soviets had a very substantial nuclear advantage. Because of this, they thought the Americans wouldn’t respond to the attempt to take Berlin. However, the Americans did respond, and chose this moment to reveal that they in fact knew the US had a sizeable nuclear advantage over the Soviet Union. It was the Soviets who ended up backing down in the tank standoff that followed.

I think this is an example of the mechanism proposed in the model. The US had a substantial nuclear arsenal in 1961, had committed to the defence of Berlin, and had shown its mettle previously in the Berlin airlift. The Soviets must therefore have known that there was a chance the US would respond to their provocations and a chance that this would spiral into a general war. However, after the US both actually did respond and revealed it was aware of its nuclear advantage, the Soviets backed down. There currently isn’t a historical consensus about exactly why the USSR backed down. Historians are divided between explanations based on the change in beliefs about nuclear superiority, the Soviet Union seeing unexpected American resolve and that alone forcing the climbdown, and a version of the madman theory in which Khrushchev didn’t believe Kennedy was in control of the US government and feared a rogue element could start a nuclear war. Regardless, the 1961 Berlin crisis looks like a case where an increase in the estimated chance that an action would lead to nuclear war changed behaviour.

  1. ^


    An uncertainty I have here is that this process of negotiation between the different branches of the military might still be well modelled by a consistent utility function. If it looks like a voting process, it’s very possible it can’t be, given how often social choice functions derived from voting methods can’t be represented as utility functions because the preferences they produce aren’t transitive.
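
    To illustrate the intransitivity point with made-up preferences: suppose the Army, Navy, and Air Force rank three budget allocations A > B > C, B > C > A, and C > A > B respectively. Pairwise majority voting then prefers A to B, B to C, and C to A, a cycle that no single utility function can represent.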