Nuclear winter scepticism


Update on 16 October 2023: I have now done my own in-depth analysis of nuclear winter.

This is a crosspost for Nuclear Winter by Bean from Naval Gazing, published on 24 April 2022. It argues Toon 2008 has overestimated the soot ejected into the stratosphere following a nuclear war by something like a factor of 191[1] (= 1.5*2*2*(1 + 2)/2*(2 + 3)/2*(4 + 13)/2). I have not investigated the claims made in Bean’s post, but doing so seems worthwhile. If its conclusions hold, the soot ejected into the stratosphere following the 4,400 nuclear detonations analysed in Toon 2008 would be 0.942 Tg[2] (= 180/191) instead of “180 Tg”. From Fig. 3a of Toon 2014, the lower soot ejection would lead to a reduction in temperature of 0.2 ºC, and in precipitation of 0.6 %. These would have a negligible impact in terms of food security, and imply the deaths from the climatic effects would be dwarfed by the “770 million [direct] casualties” mentioned in Toon 2008.
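
As a quick check on that arithmetic, here is a minimal sketch using only the numbers above; the interval midpoints follow footnote 1:

```python
# Sanity check: combined overestimation factor and implied soot ejection.
# Interval factors are replaced by their midpoints, per footnote 1.
factors = [1.5, 2, 2, (1 + 2) / 2, (2 + 3) / 2, (4 + 13) / 2]

combined = 1.0
for f in factors:
    combined *= f

print(combined)        # 191.25, i.e. roughly a factor of 191
print(180 / combined)  # ~0.94 Tg, versus the 180 Tg in Toon 2008
```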

For context, Luísa Rodriguez estimated “30 Tg” of soot would be ejected into the stratosphere in a nuclear war between the United States and Russia. Nevertheless, Luísa notes the following:

As a final point, I’d like to emphasize that the nuclear winter is quite controversial (for example, see: Singer, 1985; Seitz, 2011; Robock, 2011; Coupe et al., 2019; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018; Also see the summary of the nuclear winter controversy in Wikipedia’s article on nuclear winter). Critics argue that the parameters fed into the climate models (like, how much smoke would be generated by a given exchange) as well as the assumptions in the climate models themselves (for example, the way clouds would behave) are suspect, and may have been biased by the researchers’ political motivations (for example, see: Singer, 1985; Seitz, 2011; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018). I take these criticisms very seriously — and believe we should probably be skeptical of this body of research as a result. For the purposes of this estimation, I assume that the nuclear winter research comes to the right conclusion. However, if we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

Like Luísa, I have been assuming “the nuclear winter research comes to the right conclusion”, but I suppose it is worth bringing more attention to potential concerns. I have also not flagged them in my posts, so I am crossposting Bean’s analysis for some balance.

Nuclear Winter

When I took a broad overview of how destructive nuclear weapons are, one of the areas I looked at was nuclear winter, but I only dealt with it briefly. As such, it was something worth circling back to for a more in-depth look at the science involved.

First, as my opponent here, I’m going to take What the science says: Could humans survive a nuclear war between NATO and Russia? from the prestigious-sounding “Alliance For Science”, affiliated with Cornell University, and the papers it cites, in hopes of being fair to the other side. Things don’t start off well, as they claim that we’re closer to nuclear war than at any time since the Cuban Missile Crisis, which is clearly nonsense given Able Archer 83 among others. This is followed by this gem: “Many scientists have investigated this question already. Their work is surprisingly little known, likely because in peacetime no one wants to think the unthinkable. But we are no longer in peacetime and the shadows of multiple mushroom clouds are looming once again over our planet.” Clearly, I must have hallucinated the big PR push around nuclear winter back in the mid-80s. Well, I didn’t because I wasn’t born yet, but everyone else must have.

Things don’t get much better. They take an alarmist look at the global nuclear arsenal, and a careful look at the casualties from Hiroshima and Nagasaki, bombs vastly smaller than modern strategic weapons and with rather different damage profiles. Hilariously, their ignorance of the nuclear war literature extends to the point of ignoring fallout because there was relatively little fallout from those two airbursts, although it’s well-known that groundbursts produce much more and are likely to be used in a modern nuclear war.

But now we get to the actual science, although before digging in I should point out that there are only a few scientists working on this area. The papers they cite include at least one of Rich Turco, Owen Toon or Alan Robock as an author, sometimes more than one. Turco was lead author on the original 1983 nuclear winter paper in Science and Toon was a co-author, while Robock has also been in the field for decades. The few papers I found elsewhere which do not include one or more of these three tend to indicate notably lower nuclear winter effects.

There are three basic links in the chain of logic behind the nuclear winter models here: how much soot is produced, how high it gets, and what happens in the upper atmosphere.

First, the question of soot production. Environmental Consequences of Nuclear War, by Toon, Robock and Turco gives the best statement of the methodology behind soot production that I’ve found. Essentially, they take an estimate of fuel loading based on population density, then assume that the burned area scales linearly with warhead yield based on the burned area from Hiroshima. This is a terrible assumption on several levels. First, Hiroshima is not the only case we have for a city facing nuclear attack, and per Effects of Nuclear Weapons p.300, Nagasaki suffered only a quarter as much burned area as Hiroshima thanks to differences in geography despite a similar yield. Taking only the most extreme case for burned area does not seem like a defensible assumption, particularly as Japanese cities at the time were unusually vulnerable to fire. For instance, the worst incendiary attack on a city during WWII was the attack on Tokyo in March 1945, when 1,665 tons of bombs set a fire that ultimately burned an area of 15.8 square miles, as opposed to 4.4 square miles burned at Hiroshima. To put this into perspective, Dresden absorbed 3,900 tons of bombs during the famous firebombing raids, which only burned about 2.5 square miles. Modern cities are probably even less flammable, as fire fatalities per capita have fallen by half since the 1940s.

Nor is the assumption that burned area will scale linearly with yield a particularly good one. I couldn’t find it in the source they cite, and it flies in the face of all other scaling relationships around nuclear weapons. Given that most of the burned area will result from fires spreading and not direct ignition, a better assumption is probably to look at the areas where fires will have an easy time spreading due to blast damage, which tends to rip open buildings and spread flammable debris everywhere. Per Glasstone p.108, blast radius typically scales with the 1/3 power of yield, so we can expect damaged area from fire as well as blast to scale with yield^(2/3). Direct-ignition radius is more like yield^0.4, so for a typical modern strategic nuclear warhead (~400 kT), linear scaling will overstate burned area by a factor of 2 on direct-ignition assumptions and a factor of 3 on blast-radius assumptions.
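
To make the size of that overstatement concrete, here is a minimal sketch of the scaling argument. The 15 kT Hiroshima-like baseline is my assumption; the other numbers are from the paragraph above:

```python
# Overstatement from assuming burned area scales linearly with yield,
# versus sublinear scaling: radius ~ yield^(1/3) for blast (area ~ yield^(2/3))
# and radius ~ yield^0.4 for direct ignition (area ~ yield^0.8).
Y_BASE = 15  # kT, assumed Hiroshima-like reference yield
Y = 400      # kT, typical modern strategic warhead

ratio = Y / Y_BASE
overstatement_blast = ratio / ratio ** (2 / 3)  # = ratio^(1/3), ~3
overstatement_ignition = ratio / ratio ** 0.8   # = ratio^0.2,  ~2

print(overstatement_blast, overstatement_ignition)
```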

And then we come to their targeting assumptions, which are, if anything, worse. The only criteria for where weapons are placed are the country in question and how much flammable material is nearby, and they are carefully spaced to keep the burned areas from overlapping. This is obvious nonsense for any serious targeting plan. 100 kT weapons are spaced 15.5 km apart, far enough to spare even many industrial targets if we apply more realistic assumptions about the burned area. A realistic targeting plan would acknowledge that many hardened military targets are close together and would have overlap in their burned areas, and that a lot of nuclear warheads will be targeted at military facilities like missile silos which are in areas with far lower density of flammable materials.

Their inflation of soot production numbers is clearly shown by their own reference 9, a serious study of smoke/soot production following an attack on US military assets (excluding missile silos) by 3,030 warheads of 500 kT. That team estimated about 21 Tg of soot would be produced after burning an area similar to what Toon, Robock and Turco estimated would be burned by an attack with only 1,000 weapons of 100 kT, which they claim would produce 28 Tg of smoke. They attribute this to redundancy in targeting on the part of the earlier study, instead of their repeatedly taking steps to inflate their soot estimates. They also assume 4,400 warheads from the US and Russia alone, significantly higher than current arsenals. Looking at their soot estimates more broadly, their studies consistently fail to reflect the world’s shrinking nuclear arsenals. A 2007 paper uses 150 Tg as the midpoint of a set of estimates from 1990, despite significant reductions in arsenals between those two dates.

One more issue before we leave this link is the assumption that all fuel within the burned area is consumed. This is probably a bad assumption, given that Glasstone mentions that collapsed buildings tend to shield flammable materials inside them from burning. I don’t have a better assumption here, but it’s at least worth noting and adding to the pile of worst-case assumptions built into these models.

But what about soot actually getting to high altitudes? After all, if the soot stays in the lower atmosphere, it’s going to be rained out fairly quickly. Yes, the people downwind won’t have a great time of it for a few days, but we’re not looking at months or years of nuclear winter. As best I can tell, the result here is deafening silence. The only factor I can see at any point between stuff burning and things entering the upper atmosphere is a 0.8 in Box 1. Other than that, everything is going into the upper troposphere/stratosphere.

I am extremely skeptical of this assumption, and figured it was worth checking against empirical data from the biggest recent fire, the 2019-2020 Australian bushfires. These burned something like 400 Tg of wood[3], which in turn would have produced somewhere between 1.3 Tg and 4 Tg of soot based on a 1990 paper from Turco and Toon, depending on how much the fires were like vegetation fires vs urban wood fires. Under the Toon/Robock assumptions, we should have seen at least 1 Tg of soot in the stratosphere, but studies of the fires estimate between 0.4 and 0.9 Tg of aerosols reached the stratosphere, with 2.5% of this being black carbon (essentially another term for soot). This suggests that even being as generous as possible, the actual percentage of soot which reaches the stratosphere is something like 2%, not 80%. The lack of climatic impact from the Kuwait oil fires, which released about 8 Tg of soot in total, also strongly suggests that the relevant assumptions about soot transport into the upper atmosphere need to be examined far more closely. Robock attempts to deal with this by claiming that the Kuwait fires were too spread out and that a larger city fire would still show self-lofting effects. The large area covered by the Australian bushfires and the low amount of soot reaching the stratosphere call this into question.
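
A rough back-of-the-envelope version of that bushfire check, using the figures quoted above (the most generous reading pairs the high stratospheric aerosol estimate with the low soot-production estimate):

```python
# Fraction of bushfire soot that reached the stratosphere.
soot_produced_low = 1.3        # Tg, low end from the Turco/Toon 1990 factors
strat_aerosol_high = 0.9       # Tg, high end of observed stratospheric aerosol
black_carbon_fraction = 0.025  # 2.5% of that aerosol was black carbon

fraction = strat_aerosol_high * black_carbon_fraction / soot_produced_low
print(f"{fraction:.1%}")  # ~1.7%, i.e. roughly 2% rather than 80%
```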

More evidence of problems in this area comes from a paper by Robock himself, attempting to study the effects of the firebombings in WWII. Besides containing the best sentence ever published in a scientific paper in its plain-language abstract[4], it also fails to find any evidence that there was a significant drop in temperature or solar energy influx in 1945. Robock tries to spin this as positively as he can, but is forced to admit that it doesn’t provide evidence for his theory, blaming poor data, particularly around smoke production. I would add in potential issues around smoke transport. More telling, however, is that all of the data points to at most a very limited effect in 1945-1946, with no trace of signal surviving later, despite claims of multi-year soot lifetimes in the papers on nuclear winter.

Which brings us neatly to the last question: how long anything which does reach the stratosphere will last. A 2019 paper from Robock and Toon suggests that the e-folding life will be something like 3.5 years, while a paper published the same year and including both men as authors has smoke from the 2017 Canadian wildfires persisting in the stratosphere for a mere 8 months, with the authors themselves noting that this is 40% shorter than their model predicted. They attempt to salvage the thesis here, even suggesting that organic smoke will contribute more than expected, but this looks to me like reporting results different from what they actually got. They further attempt to salvage it by claiming that the smoke from a nuclear war will reach higher, but at this point, I simply don’t trust their models without a thorough validation against known events, and Kuwait is only mentioned in one paper, where they claim it doesn’t count.

A few other aspects bear mentioning. First is the role of latitude. Papers repeatedly identify subtropical fires as particularly damaging, apparently due to some discontinuity in the effects of smoke at 30° latitude which greatly increases smoke persistence. This seems dubious, given that Kuwait falls just within this zone, as does most of Australia, where the wildfires have yet to show this kind of effect. Second, all of the nuclear winter papers are short on validation data, usually pointing only to modeling of volcanic aerosols, which they themselves usually admit are very different from the soot they’re modeling. There is generally no discussion of validation against more relevant data.

A few academic papers also call the Toon/Robock conclusion into question, most notably this one from a team at Los Alamos National Laboratory. They model one of the lower-end scenarios, an exchange of 100 warheads of 15 kT in a hypothetical war between India and Pakistan, which Toon and Robock model as producing 5 Tg of soot and a significant nuclear winter. The Los Alamos team used mathematical models of both the blast and the resulting fire, and even in a model specifically intended to overestimate soot production got only 3.7 Tg of soot, of which only about 25% ever reached above 12 km in altitude and persisted in the long term, with more typical simulations seeing only around 6% of the soot reach that altitude. The rest stayed in the lower atmosphere, where it was rapidly removed by weather. This was then fed into the same climate models used by Robock and Toon, and the results were generally similar to earlier studies of a 1 Tg scenario, which showed some effect but nowhere near the impacts Robock predicted for a conflict of this scale. It’s worth noting that these models appear to have significantly overestimated soot lifetime in the stratosphere, as shown by the data from the Canadian fire.
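
Put as arithmetic, the gap between the two estimates for the same scenario looks roughly like this (a sketch; applying the quoted percentages to the deliberately pessimistic 3.7 Tg figure is my simplification):

```python
# Stratospheric soot in the India-Pakistan scenario, two estimates.
toon_robock_soot = 5.0                      # Tg, per the Toon/Robock modeling
lanl_soot_produced = 3.7                    # Tg, pessimistic Los Alamos run
lofted_worst = lanl_soot_produced * 0.25    # ~0.9 Tg above 12 km, worst case
lofted_typical = lanl_soot_produced * 0.06  # ~0.2 Tg in more typical runs

print(lofted_worst, lofted_typical)
print(toon_robock_soot / lofted_worst)  # Toon/Robock higher by a factor of ~5
```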

Robock argued back against the paper, claiming that the area it looked at was not densely populated enough, lowering the production of soot and preventing a firestorm from forming. The Los Alamos team responded by running simulations with higher fuel density, which showed a strongly nonlinear relationship between fuel density and soot production: a factor of 4 increase in fuel density doubled soot production, and a factor of 72 increase raised it by a factor of only 6, as oxygen starvation limited the ability of the fire to burn. In both of these cases, the percentage of soot reaching an altitude where it could persist in the stratosphere was around 6%, and the authors clearly emphasize that their earlier work is a reasonable upper bound on soot production.
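
One way to see how strongly sublinear that relationship is: fitting a simple power law, soot ∝ density^k, to the two quoted data points gives exponents well below the k = 1 that linear scaling would imply (my framing, not the paper’s):

```python
import math

# Implied power-law exponents k in soot ~ fuel_density^k
# from the two fuel-density experiments quoted above.
k_low = math.log(2) / math.log(4)    # 4x density -> 2x soot:  k = 0.5
k_high = math.log(6) / math.log(72)  # 72x density -> 6x soot: k ~ 0.42

print(k_low, k_high)
```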

So what to make of all of this? While I can’t claim a conclusive debunking of the papers behind nuclear winter, it’s obvious that there are a lot of problems with the papers involved. Most of the field is the work of a tiny handful of scientists, two of whom were involved from its beginnings as an appendage of the anti-nuclear movement, while the last has also been a prominent advocate against nuclear weapons. And in the parts of their analysis that don’t require a PhD in atmospheric science to understand or a supercomputer simulation to check, we find assumptions that, applying a megaton or two of charity, bespeak a total unfamiliarity with nuclear effects and targeting, which is hard to square with the decades they have spent in the field[5]. A tabulation of the errors in their flagship paper is revealing:

| Error Factor | Cause |
| --- | --- |
| 1.5 | Overestimating the number of warheads |
| 2 | Targeting for flammability rather than efficacy |
| 2 | Flammability for Hiroshima rather than a normal city |
| 1-2 | Flammability for a 1940s city instead of a 2020s city |
| 2-3 | Linear burn area scaling |
| 4-13 | 80% of soot reaching the stratosphere vs 6%-20% |
| 48-468 | Total |
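
The table’s total can be reproduced directly from the rows, taking each interval’s endpoints:

```python
# Reproducing the 48-468 total from the individual error factors.
low_factors  = [1.5, 2, 2, 1, 2, 4]
high_factors = [1.5, 2, 2, 2, 3, 13]

total_low = total_high = 1.0
for lo, hi in zip(low_factors, high_factors):
    total_low *= lo
    total_high *= hi

print(total_low, total_high)  # 48.0, 468.0
```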

Even using the most conservative numbers here, an all-out exchange between the US and Russia would produce a nuclear winter that would at most resemble the one that Robock and Toon predict for a regional nuclear conflict, although it would likely end much sooner given empirical data about stratospheric soot lifetimes. Some of the errors are long-running, most notably assumptions about the amount of soot that will persist in the atmosphere, while others seem to have crept in more recently, contributing to a strange stability of their soot estimates in the face of cuts to the nuclear arsenal. All of this suggests that their work is driven more by an anti-nuclear agenda than the highest standards of science. While a large nuclear war would undoubtedly have some climatic impact, all available data suggests it would be dwarfed by the direct (and very bad) impacts of the nuclear war itself.

Acknowledgements

Thanks to Johannes Ackva for discussing concerns around the nuclear winter literature, which increased my interest in the topic.

  1. ^

    Calculated by multiplying the factors given by Bean in the last table, converting intervals to the mean of their lower and upper bounds.

  2. ^

    1 Tg corresponds to 1 million tonnes.

  3. ^

    Based on reported CO2 emissions of 715 Tg and a wood-to-CO2 mass ratio of 1 to 1.8, giving 715/1.8 ≈ 400 Tg of wood.

  4. ^

    “We discovered that [the bombing of Hiroshima and Nagasaki] was actually the culmination of a genocidal U.S. bombing campaign.”

  5. ^

    A source I expect many of my readers will have looked at is Luisa Rodriguez’s writeup on nuclear winter for the Effective Altruism organization Rethink Priorities. Her conclusion is that a US-Russia nuclear war is unlikely to be an existential risk, as she believes Robock and Toon, who form the basis of most of her analysis, have overestimated the soot from an actual war. It’s obvious that I agree that they have done so, and I also think they have exaggerated the climatic consequences.