I want to second the skepticism towards modelling. I have gone pretty deep into including atmospheric physics in Computational Fluid Dynamics (CFD) models and have seen how finicky these models are. Without statistically significant ground-truth data, I have seen no one manage to beat the simplest of non-advanced models (and I have seen many failed attempts in my time deploying these models commercially in wind energy). There are so many parameters to tweak, and so much uncertainty about how the atmosphere actually works at scales below roughly 1 km, that I would be very careful about relying on modelling. Even small-scale tests are unlikely to be useful, as they do not capture the intricate dynamics of the actual, full-scale, temporally dynamic atmosphere.
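To make concrete what “beating a simple baseline” would even mean here, below is a minimal sketch of the kind of paired test on ground-truth data I have in mind. All numbers are synthetic and the error model is made up; the point is only the shape of the check: per-site errors for both models against observations, then a paired significance test.

```python
# Illustrative only: does an expensive model beat a simple baseline on
# ground truth from sites it has never seen? Data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sites = 200  # need many independent cases for the comparison to mean anything

obs = rng.normal(8.0, 2.0, n_sites)               # "measured" mean wind speed [m/s]
baseline = obs + rng.normal(0.0, 0.8, n_sites)    # e.g. a basic profile extrapolation
cfd = obs + rng.normal(0.1, 0.9, n_sites)         # the expensive CFD model

base_err = np.abs(baseline - obs)
cfd_err = np.abs(cfd - obs)

# Paired test on per-site absolute errors: is the CFD model's improvement
# (if any) distinguishable from noise?
stat, p = stats.wilcoxon(cfd_err, base_err)
print(f"MAE baseline={base_err.mean():.2f}  CFD={cfd_err.mean():.2f}  p={p:.3f}")
```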
Hi Ulrik, I would agree with you there in large part, but I don’t think that should necessarily move our estimate of the impact away from what I estimated above.
For example, the Los Alamos team did far more detailed fire modelling than Rutgers, but the end result is a model that seems unable to replicate real fire conditions in cases like Hiroshima, Dresden, and Hamburg → more detailed modelling isn’t in itself a guarantee of accuracy.
However, the models we have base their estimates at least in part on empirical observations, which potentially give us enough cause for concern:
- Soot can be lofted in firestorm plumes, as happened at Hiroshima.
- Materials like SO2 injected into the atmosphere by volcanoes observably disrupt the climate, and there is no reason to expect soot to behave differently.
- Materials in the atmosphere can persist for years; the impact takes time to arrive due to inertia in the climate system and diminishes over time (see the toy decay sketch after this list).
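To put rough numbers on the “persists for years but diminishes” point, here is a toy exponential-decay sketch. The e-folding residence time is purely illustrative: roughly one year is often quoted for volcanic stratospheric sulfate, and modelled soot lifetimes are longer.

```python
# Toy model: exponential e-folding decay of stratospheric aerosol loading.
# tau_years is an assumption for illustration, not a modelled result.
import numpy as np

tau_years = 1.0                     # assumed e-folding residence time
t = np.arange(0.0, 5.5, 0.5)        # years after injection
remaining = np.exp(-t / tau_years)  # fraction of initial loading still aloft

for ti, ri in zip(t, remaining):
    print(f"t = {ti:.1f} yr: {100 * ri:5.1f}% of initial loading remains")
```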
The modelling complexities you highlight raise the uncertainty around everything above, but they do not disprove nuclear winter. If anything, those complexities raise more uncertainty for Los Alamos and the more skeptical side, who rely heavily on modelling, than for Rutgers, who use modelling only where they cannot use an empirical heuristic like the conditions of past firestorms.
FWIW, Los Alamos claims to have replicated the Hiroshima and Berkeley Hills Fire smoke plumes with their fire models to within 1 km of plume height. It’s pretty far into the presentation, though, and most of their sessions are not public, so I can hardly blame anyone for not encountering this.
I have never worked with fire plume models nor seen that presentation, but I have done some of the most advanced work on understanding wind conditions at the 2 km-100 km scale. What I know from that, probably quite similar, work is that these types of models have a great many parameters with which to tweak the output. And unfortunately, the practice is often to keep re-running the model, tweaking as you go, until the output looks like the experimental results. I am not saying this happened here; I am just encouraging anyone looking into this to really pay attention to that possibility, especially if important decisions are at stake. If the authors do not explicitly and clearly state that the model results are a first-try, “no tweak” run, I would assume tweaking was done and would consider the results not of sufficient quality to support conclusions.
In the wind speed work I was involved in, we instead measured performance on a statistically significant number of never-seen-before cases, really sitting on our hands and resisting the temptation to re-run the simulations with more “realistic” model settings. Concretely, the protocol looked something like the sketch below.
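This is a schematic of that protocol, not our actual code: run_model, the config values, and the Case fields are placeholders.

```python
# Hypothetical sketch of a "no tweak" evaluation protocol: freeze the model
# configuration first, then score it exactly once on never-seen-before cases.
from dataclasses import dataclass
import numpy as np

# Settings chosen and committed *before* looking at the evaluation cases.
FROZEN_CONFIG = {"turbulence_closure": "k-epsilon", "roughness_length_m": 0.03}

@dataclass
class Case:
    inputs: dict                  # terrain, boundary conditions, etc.
    observed_mean_wind_ms: float  # ground truth from a met mast or lidar

def evaluate_once(cases, run_model):
    """Score the frozen model on held-out cases; no re-runs, no re-tuning."""
    errs = np.array([abs(run_model(c.inputs, **FROZEN_CONFIG) - c.observed_mean_wind_ms)
                     for c in cases])
    # Report mean absolute error with a 95% confidence interval, so the claim
    # is statistical rather than an impression from a few hand-picked cases.
    half_width = 1.96 * errs.std(ddof=1) / np.sqrt(len(errs))
    return errs.mean(), half_width
```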
Hi Mike, in similar fashion to my other comment, I think that in my pursuit of brevity I failed to underline how important I think it is to guard against nuclear war.
I absolutely do not think the models’ shortcomings disprove nuclear winter. Instead, as you say, the lack of trust in modelling just increases the uncertainty, including the possibility of something much worse than what the modelling shows. Thanks for letting me clarify!
(And the mantra that more detailed models → better accuracy is one I have seen touted first-hand, with really little to show for it. In the models we dealt with, which covered roughly 30 km × 30 km × 5 km at a resolution of 20-200 m, it was which details you included in the model that drove most of the impact, not the level of detail itself.)