EA on nuclear war and expertise

I believe there may be systematic issues with how Effective Altruism operates when dealing with issues outside the core competence of its general membership, particularly in areas related to defense policy. These areas are complex and, unlike many fields EA works in, often short of good public-facing explanations, which can make them hard for outside researchers, such as those from EA organizations, to understand in short order. This raises the risk of getting basic, uncontroversial details wrong, which can both render analyses inaccurate and ruin EA’s credibility with experts in the field.

A bit of background: I’ve been fascinated by the defense world for over 20 years, and have spent the last 5 working for a major defense contractor. I write a blog, mostly covering naval history, at navalgazing.net, and it was this work which brought me into contact with what appears to be the most in-depth evaluation of nuclear war risks by EA, work conducted by Luisa Rodriguez for Rethink Priorities.

Unfortunately, while this was a credible effort to assess the risks of nuclear war, unfamiliarity with the field meant that a number of errors crept in, ranging from the trivial to the serious. The most obvious are in the analysis of the survivability of the US and Russian nuclear arsenals.

For instance, when discussing the sea-based deterrent, the article states that “[America’s] submarines’ surfaces are covered in plastic, which disperses radar signals instead of reflecting them.” This is clearly a reference to the anechoic tiles used on modern submarines, but those tiles are intended to protect against sonar, not radar. The mistake probably traces to the source used, a confusingly written article by someone who clearly doesn’t know all that much about submarine design and ASW sensors, but it is exactly the sort of error that flags an article as being written by someone who doesn’t really know what they’re talking about.

But that’s merely a nitpick, and there’s also a very basic flaw in the assumption that nearly all of the warheads aboard SSBNs will survive. While I completely agree that any submarine at sea is nearly invulnerable (and am in fact rather more skeptical than some of the sources that improved technology will render the seas transparent), a substantial fraction of the SSBN force is in port, not at sea. The US attempts to minimize this by providing each submarine with two crews and swapping them out, but even so, each operational submarine still spends about a third of its time in port between patrols. (It will spend more time in overhauls, but sending ships to the yard with missiles aboard is considered bad form, so those warheads probably won’t count against the US total.) How the in-port boats would fare in a war would depend heavily on the situation leading up to its outbreak. If Russia launched a surprise attack, the bases at Bangor and Kings Bay would undoubtedly be the highest-priority targets. If there had been substantial warning, most boats would likely have been ordered to sea to strengthen the US deterrent.
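To make the stakes concrete, here is a minimal back-of-envelope sketch of how the at-sea fraction drives SLBM survivability in a surprise attack. The force size and per-boat loadout below are illustrative assumptions rather than official figures; only the one-third-in-port fraction comes from the paragraph above.

```python
# Back-of-envelope sketch of SLBM warhead survivability in a surprise attack.
# Force size and loadout are illustrative assumptions, not official figures;
# the one-third-in-port fraction comes from the discussion above.

ssbn_force = 14          # assumed number of operational SSBNs
frac_in_port = 1 / 3     # time between patrols spent in port (per the text)
warheads_per_boat = 90   # assumed loadout (missiles per boat x warheads per missile)

boats_at_sea = ssbn_force * (1 - frac_in_port)  # boats at sea, assumed invulnerable
surviving = boats_at_sea * warheads_per_boat
total = ssbn_force * warheads_per_boat

print(f"Expected boats at sea: {boats_at_sea:.1f} of {ssbn_force}")
print(f"Surviving SLBM warheads: ~{surviving:.0f} of {total} ({surviving / total:.0%})")
```

Even under these generous assumptions, a surprise attack trims roughly a third off the sea-based deterrent, which is precisely the effect the original analysis omits.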

The description of the strategic bomber force bears little connection to the reality of said force and contains even more jarring errors, such as describing the bombers as “air-based”. The second paragraph begins with the following statement: “While many strategic bombers are concentrated at air bases and aircraft carriers, making them potentially convenient to target, early warning systems would likely give US pilots enough time to take off before the nuclear warheads arrived at their targets and detonated.”

Every part of this sentence is wrong. While the US Navy did operate aircraft that were at least arguably strategic bombers, they were retired from this role in the mid-60s. Nuclear strike with lighter aircraft remained a major carrier role for the rest of the Cold War, but all shipboard nuclear weapons except the SLBMs were withdrawn in the early 90s. Whether or not they have nuclear weapons aboard, aircraft carriers are exceedingly inconvenient to target. And there is little prospect of any strategic bombers getting off the ground if they are caught unawares. During the Cold War, Strategic Air Command did keep aircraft on emergency alert, capable of scrambling and getting out of range in the interval between detecting incoming ICBMs and the missiles reaching the base. But this was never more than a minority of the bomber force, and the practice ended with the fall of the Soviet Union. Even more strongly than with the SSBN force, the survivability of the bomber force will depend heavily on how much warning is available. At a minimum, it is likely to take an hour or more to load the weapons and brief the crews, a serious problem when a nuclear warhead can reach the base in 15 minutes.
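The timing mismatch is worth spelling out, since it is the crux of why non-alert bombers are so vulnerable. The sketch below compares generation time against warhead flight time; both durations are rough illustrative figures taken from the paragraph above, not operational data.

```python
# Timing mismatch for non-alert bombers under a surprise attack.
# Both durations are rough illustrative figures, not operational data.

generation_time_min = 60  # assumed minimum to load weapons and brief the crews
warhead_flight_min = 15   # approximate arrival time of an incoming warhead

shortfall_min = generation_time_min - warhead_flight_min
print(f"Generating a bomber takes ~{generation_time_min} min, but a warhead can "
      f"arrive in ~{warhead_flight_min} min, leaving the force ~{shortfall_min} min short.")
```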

A similar lack of understanding comes out in the discussion of the bombers themselves. The US has a mix of B-2s equipped with gravity bombs and B-52s equipped with cruise missiles, but the B-52s are completely ignored, and the discussion of stealth technology doesn’t make much sense. The discussion of ICBMs is somewhat better, and while I disagree with the pessimism on missile defense, the position taken by the author is at least colorable.

Similar problems plague the section on Russia, although a lack of public information and my lesser familiarity with the Russian setup make them harder to analyze. Particularly notable is the citation of a 2001 report on Russian submarine readiness, as that year marks the nadir of funding for Russian strategic forces. After Putin came to power, more money flowed to Russia’s nuclear forces, and while poor readiness due to corruption certainly cannot be ruled out, it is far from certain that this is the case.

The other articles in the Rethink Priorities series on nuclear war have fewer basic errors, probably because they cover subjects that are less opaque and less reliant on domain knowledge. I would argue that the risk of nuclear winter is substantially overstated thanks to reliance on papers with a number of obvious flaws, which John Schilling and I critiqued on the EA-adjacent Slate Star Codex back in 2016. Guarding against that sort of problem is a separate issue, and a rather more difficult one than I can address here.

The basic lesson of all this is the importance of domain knowledge, both in understanding and analyzing a problem and in making sure that those who deal with that problem professionally will take you seriously. Obviously, in the fields where EA is most heavily involved, this is unlikely to be an issue, but it could recur as EA looks into new areas, and it should be guarded against by seeking out and working with people who are familiar with the domain.