I think there are issues with this analysis. As it stands, it presents a model of nuclear winter that assumes firestorms are unlikely in a future large-scale nuclear conflict. That would be an optimistic take, and it does not seem to be supported by the evidence:
In my post on the subject that you referenced, I discuss how there are serious issues with coming to a highly confident conclusion in relation to nuclear winter. There are only limited studies, which come at the issue from different angles, but to broadly summarize:
Rutgers are highly concerned about the threat of nuclear winter via soot lofting in firestorms. They look at fission and fusion weaponry.
Los Alamos find that firestorms are highly unlikely to form under nuclear detonations, even at very high fuel loads, and so lofting is negligible. They only look at fission scale weaponry.
Lawrence Livermore did not comment on the probability of firestorms forming, just that if they did form there is a significant probability that soot would be lofted. They only look at fission scale weaponry.
Comparing the estimates, the main cause of the differences in soot injection is whether firestorms will form. Conditional on firestorms forming, my read of the literature is that at least significant lofting is likely to occur—this isn’t just from Rutgers.
We know that firestorms from nuclear weaponry are possible: we have seen one in Hiroshima, and it had a plume that reached stratospheric levels (the anvil-shaped cloud photograph is it reaching and breaching the stratospheric barrier). Los Alamos cannot replicate this in their model; even at high fuel loads they get nothing like our observations of the event. This failure to replicate observations makes me very cautious to weigh their results heavily versus the other two studies, as you implicitly do via a mean soot injection of 0.7 Tg following 100 detonations, which is a heavy skew towards “no firestorms”.
Fusion (thermonuclear) weaponry is often at least an order of magnitude larger than the atomic bomb dropped on Hiroshima. This may well raise the probability of firestorms, although it is not easy to determine definitively. It is, however, another issue when projecting a study on the likelihood of firestorms under atomic bombs onto thermonuclear weaponry.
Not all detonations will cause firestorms—Nagasaki did not, due to the location of the blast and local conditions, and this is likely to be true of a future war even with thermonuclear weaponry. However, given projected lofting if they do occur (which is only modeled in Rutgers and Lawrence Livermore, as a full firestorm only forms in their models), you only need maybe 100 or so firestorms to cause a serious nuclear winter. This may not be a high bar to reach with so many weapons in play.
As a result, blending together the Los Alamos model with that of Rutgers doesn’t really work as a baseline: the two are based on a very different binary concerning firestorms and lofting, and you exclude other relevant analysis, like that of Lawrence Livermore. Instead, you really need to come up with a distribution of firestorm risk—however you choose to do so—and use that to weight the expected soot injection. I would assume such analysis would seriously raise the projected soot injected and the subsequent cooling versus your assumptions.
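As a purely illustrative sketch of what I mean by weighting, the expected soot injection could be computed from a firestorm probability and conditional injection amounts. All numbers below are placeholders I made up, not estimates from Rutgers, Los Alamos, or Lawrence Livermore:

```python
# Illustrative only: weight soot injection by a hypothetical firestorm
# probability, instead of blending model outputs directly.
# Every number here is a placeholder, not taken from any study.
p_firestorm = 0.3        # hypothetical probability that firestorms form
soot_firestorm = 5.0     # placeholder Tg soot injected per 100 detonations if they do
soot_none = 0.2          # placeholder Tg soot injected if they do not

expected_soot = p_firestorm * soot_firestorm + (1 - p_firestorm) * soot_none
print(f"Expected soot injection: {expected_soot:.2f} Tg per 100 detonations")
```

However the probability is derived, the point is that the weighting should be explicit rather than implicit in a blend of models that disagree on the binary.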
In addition, there are points to raise on the distribution of detonations—which seems very skewed towards the lower end for a future nuclear conflict between great powers with thousands of weapons in play and strong game theoretic reasons to “use or lose” much of their arsenals. However, we commented on that in your previous post, and as you say it matters less for your model than the sensitivity of soot lofted per detonation, which seems to be the main contention.
Hi Mike, thanks for taking the time to respond to another of my posts.
I think we might broadly agree on the main takeaway here, which is something like: people should not assume that nuclear winter is proven—there are important uncertainties.
The rest is wrangling over details, which is important work but not essential reading for most people.
Comparing the estimates, the main cause of the differences in soot injection is whether firestorms will form. Conditional on firestorms forming, my read of the literature is that at least significant lofting is likely to occur—this isn’t just from Rutgers.
Yes, I agree that the crux is whether firestorms will form. The difficulty is that we can only rely on very limited observations from Hiroshima and Nagasaki, plus modeling by various teams that may have political agendas.
I considered not modeling the detonation-soot relationship as a distribution, because the most important distinction is binary—would a modern-day countervalue nuclear exchange trigger firestorms? Unfortunately I could not figure out a way of converting the evidence base into a fair weighting of ‘yes’ vs. ‘no’, and the distributional approach I take is inevitably highly subjective.
Another approach I could have taken is modeling as a distribution the answer to the question “how specific do conditions have to be for firestorms to form?”. We know that a firestorm did form in a dense, wooden city hit by a small fission weapon in summer, with low winds. Firestorms are possible, but it is unclear how likely they are.
These charts are made up. The lower chart is an approximation of what my approach implies about firestorm conditions: most likely, firestorms are possible but relatively rare.
Los Alamos and Rutgers are not very helpful in forming this distribution: Los Alamos claims that firestorms are not possible anywhere, while Rutgers claims that they are possible in dense cities under specific atmospheric conditions (and perhaps elsewhere). This gives us little to go on.
Fusion (Thermonuclear) weaponry is often at least an order of magnitude larger than the atomic bomb dropped on Hiroshima. This may well raise the probability of firestorms, although this is not easy to determine definitively.
Agreed. My understanding is that fusion weapons are not qualitatively different in any important way other than power.
Yet there is a lot of uncertainty—it has been proposed that large blast waves could smother much of the flammable materials with concrete rubble in modern cities. The height at which weapons are detonated also alters the effects of radiative heat vs blast, etc.
you only need maybe 100 or so firestorms to cause a serious nuclear winter. This may not be a high bar to reach with so many weapons in play.
Semi-agree. Rutgers model the effects of a 100+ detonation conflict between India and Pakistan:
They find 1-2 degrees of cooling over cropland at the peak of the catastrophe. This would be unprecedented and really bad, but not serious compared to the nuclear winter we have in our imagination: I estimate it is about 100x less bad than the doomsday scenario with 10+ degrees of cooling.
They are modeling 100 small fission weapons, so it would be worse with large weapons, or more detonations. But not as much worse as you might think: the 51st detonation is maybe 2-5x less damaging than the 5th.
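The diminishing marginal damage can be illustrated with a toy concave damage curve. To be clear, the square-root form below is invented purely for illustration and is not taken from Rutgers or any other study:

```python
import math

# Toy model: assume total climate damage grows with the square root of the
# number of detonations (an invented concave form, purely illustrative).
def damage(n: int) -> float:
    return math.sqrt(n)

marginal_5th = damage(5) - damage(4)
marginal_51st = damage(51) - damage(50)
print(f"5th detonation ~{marginal_5th / marginal_51st:.1f}x more damaging than the 51st")
```

Under this made-up curve the ratio lands in the 2-5x range: early detonations hit the densest, most flammable targets, and each additional detonation adds less.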
Furthermore, they are assuming that each side targets for maximum firestorm damage. They assume that fuel loading is proportional to population density, and India & Pakistan have some of the world’s densest cities. So this is almost the worst damage you could do with 100 detonations.
Although 100 detonations sounds very small, this idealized conflict would be tapping into most of the firestorm potential of two countries in which 20% of the world’s population live—much more than live in the US and Russia.
It’s possible that 100 firestorms could trigger measurable cooling, but the conditions would have to be quite specific. 1000 firestorms seems much less likely still.
Conclusion
In the post I suggest that nuclear winter proponents may be guilty of inflating cooling effects by compounding a series of small exaggerations. I may be guilty of the same thing in the opposite direction!
I don’t see my model as a major step forward for the field of nuclear winter. It borrows results from proper climate models. But it is bolder than many other models, extending to annual risk and expected damage. And, unlike the papers which explore only the worst-case, it accounts for important factors like countervalue/counterforce targeting and the number of detonations. I find that nuclear autumn is at least as great a threat as nuclear winter, with important implications for resilience-building.
The main thing I would like people to take away is that we remain uncertain what would be more damaging about a nuclear conflict: the direct destruction, or its climate-cooling effects.
The main thing I would like people to take away is that we remain uncertain what would be more damaging about a nuclear conflict: the direct destruction, or its climate-cooling effects.
I arrived at the same conclusion in my analysis, where I estimated the famine deaths due to the climatic effects of a large nuclear war would be 1.16 times the direct deaths.
Los Alamos find that firestorms are highly unlikely to form under nuclear detonations, even at very high fuel loads, and so lofting is negligible. They only look at fission scale weaponry.
I think this may well misrepresent Los Alamos’ view, as Reisner 2019 does not find significantly more lofting, and they did model firestorms. I estimated 6.21 % of emitted soot being injected into the stratosphere in the 1st 40 min from the rubble case of Reisner 2018, which did not produce a firestorm. Robock 2019 criticised this study, as you did, for not producing a firestorm. In response, Reisner 2019 ran:
Two simulations at higher fuel loading that are in the firestorm regime (Glasstone & Dolan, 1977): the first simulation (4X No-Rubble) uses a fuel load around the firestorm criterion (4 g/cm2) and the second simulation (Constant Fuel) is well above the limit (72 g/cm2).
Crucially, they say (emphasis mine):
Of note is that the Constant Fuel case is clearly in the firestorm regime with strong inward and upward motions of nearly 180 m/s during the fine-fuel burning phase. This simulation included no rubble, and since no greenery (trees do not produce rubble) is present, the inclusion of a rubble zone would significantly reduce BC production and the overall atmospheric response within the circular ring of fire.
These simulations led to fractions of emitted soot injected into the stratosphere in the 1st 40 min of 5.45 % (= 0.461/8.454) and 6.44 % (= 1.53/23.77), which are quite similar to the 6.21 % of Reisner 2018 for no firestorm I mentioned above. This suggests a firestorm is not a sufficient condition for a high fraction of emitted soot being injected into the stratosphere under Reisner’s view?
In my analysis, I multiplied the 6.21 % of emitted soot that is injected into the stratosphere in the 1st 40 min from Reisner 2018 by 3.39 in order to account for soot injected afterwards, but this factor is based on estimates which do not involve firestorms. Are you implying the corrective factor should be higher for firestorms? I think Reisner 2019 implicitly argues against this. Otherwise, they would have been dishonest by replying to Robock 2019 with an incomplete simulation whose results differ from that of the full simulation. In my analysis, I only adjusted the results from Reisner’s and Toon’s views in case there was explicit information to do so[1], i.e. I did not assume they concealed key results.
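For concreteness, the arithmetic behind these fractions can be checked directly, using the figures as quoted above:

```python
# Reproducing the stratospheric injection fractions cited from Reisner 2019
# (Tg soot injected / Tg soot emitted), plus the 3.39 corrective factor
# applied in my analysis for soot injected after the first 40 min.
frac_4x_no_rubble = 0.461 / 8.454    # 4X No-Rubble case -> ~5.45 %
frac_constant_fuel = 1.53 / 23.77    # Constant Fuel case -> ~6.44 %
frac_rubble_2018 = 0.0621            # Reisner 2018 rubble case (no firestorm)

total_frac = frac_rubble_2018 * 3.39  # -> ~21.1 % of emitted soot overall
print(f"{frac_4x_no_rubble:.2%}, {frac_constant_fuel:.2%}, {total_frac:.2%}")
```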
As a result, blending together the Los Alamos model with that of Rutgers doesn’t really work as a baseline: the two are based on a very different binary concerning firestorms and lofting, and you exclude other relevant analysis, like that of Lawrence Livermore.
In my analysis, I also did not integrate evidence from Wagman 2020 (whose main author is affiliated with Lawrence Livermore National Laboratory) to estimate the soot injected into the stratosphere per countervalue yield. As far as I can tell, they do not offer evidence independent of Toon’s view. Rather than estimating the emitted soot as Reisner 2018 and Reisner 2019 did, they set it to the soot injected into the stratosphere in Toon 2007:
Finally, we choose to release 5 Tg (5·10^12 g) BC into the climate model per 100 fires, for consistency with the studies of Mills et al. (2008, 2014), Robock et al. (2007), Stenke et al. (2013), Toon et al. (2007), and Pausata et al. (2016). Those studies use an emission of 6.25 Tg BC and assume 20% is removed by rainout during the plume rise, resulting in 5 Tg BC remaining in the atmosphere.
[1] For example, I adjusted downwards the soot injected into the stratosphere from Reisner 2019 (based on data from Denkenberger 2018), as it says (emphasis mine):
Table 1. Estimated BC Using an Idealized Diagnostic Relationship (BC Estimates Need to be Reduced by a Factor of 10–100) and Fuel Loadings From the Simulations Shown in Reisner et al. and Two New Simulations for 100 15-kt Detonations
Around one year after my post on the issue, another study was flagged to me: “Latent Heating Is Required for Firestorm Plumes to Reach the Stratosphere” (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2022JD036667). The study raises another very important firestorm dynamic: a dry firestorm plume produces significantly less lofting than a wet one, because the latent heat released as water moves from vapor to liquid is the primary process for generating large lofting storm cells. However, if significant moisture can be assumed in the plume (and this seems likely given the conditions at its inception), lofting is much higher and a nuclear winter more likely.
The Los Alamos analysis only assesses a dry plume—and this may be why they found so little risk of a nuclear winter. In the words of the authors: “Our findings indicate that dry simulations should not be used to investigate firestorm plume lofting and cast doubt on the applicability of past research (e.g., Reisner et al., 2018) that neglected latent heating”.
This has pushed me further towards being concerned about nuclear winter as an issue, and it should also be considered in the context of other analyses that rely upon the Reisner et al. studies originating at Los Alamos (at least until they can add these dynamics to their models). I think this might have relevance for your assessments, and for the article here in general.