FWIW, Los Alamos claims they replicated the Hiroshima and Berkeley Hills Fire smoke plumes with their fire models to within 1 km of observed plume height. It’s pretty far into the presentation, though, and most of their sessions are not public, so I can hardly blame anyone for not encountering this.
I have never worked with fire plume models nor looked at that presentation, but I have done some of the most advanced work on understanding wind conditions at the 2 km–100 km scale. What I know from that, probably quite similar, work is that these types of models have many parameters with which to tweak the output. Unfortunately, common practice is to keep re-running the model, tweaking parameters until the output looks like the experimental results. I am not saying this happened here; I am just encouraging anyone looking into this to pay close attention to it, especially if important decisions are at stake. If they do not explicitly and clearly say that the model results are a first-try, “no tweak” run, I would assume tweaking was done and would consider the results of insufficient quality to support conclusions.
In the wind-speed work I was involved in, we evaluated performance on a statistically significant number of never-before-seen cases, really sitting on our hands and resisting the temptation to re-run the simulations with more “realistic” model settings.
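To make the distinction concrete, here is a minimal sketch of that held-out discipline, using a toy model and synthetic data (not the actual plume or wind models discussed above): parameters are tuned only on a calibration set, and the held-out error is computed exactly once and never fed back into tuning.

```python
# Toy "no-tweak" validation sketch. The model, data, and parameter are
# all hypothetical stand-ins, purely for illustration.
import random

random.seed(0)

def model(x, gain):
    # stand-in for a simulation with one tunable parameter
    return gain * x

# synthetic "truth": y = 2x plus small noise
cases = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(1, 101)]
calibration, held_out = cases[:50], cases[50:]

def mean_abs_error(data, gain):
    return sum(abs(model(x, gain) - y) for x, y in data) / len(data)

# tuning is allowed ONLY on the calibration set
best_gain = min((g / 100 for g in range(100, 301)),
                key=lambda g: mean_abs_error(calibration, g))

# the held-out score is reported once; re-tuning against it would
# turn it back into a (hidden) calibration set
held_out_error = mean_abs_error(held_out, best_gain)
print(f"gain={best_gain:.2f}, held-out MAE={held_out_error:.3f}")
```

The point is the one-way flow: once the held-out cases have influenced a parameter choice, they no longer measure out-of-sample skill.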