It would be good to hear from @Luisa_Rodriguez on this—my recollection is that she also became a lot more skeptical of the Robock estimates so I am not sure she would still endorse that figure.
For example, after the post you cite, she wrote (emphasis mine):
“I also added a bit more on the controversy behind the foundational nuclear winter research, which is quite controversial (for example, see: Singer, 1985; Seitz, 2011; Robock, 2011; Coupe et al., 2019; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018).[1] I hope to write more about this controversy in the future, but for the purposes of my estimations, I’ve assumed that the nuclear winter research comes to the right conclusion. However, if one discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.”
Given the conjunctive nature of the risk (many unlikely conditions all need to be true, as becomes clear when reading, e.g., the Reisner et al. work), I would not be shocked at all if the risk from nuclear winter were less than 1⁄100 of the Robock group's estimate (a toy numerical illustration follows below).
In any case, my main point is more that if one looked into the nuclear winter literature with the same rigor with which John has looked into climate risk, one would not come out anywhere close to the estimates of the Robock group as the median, and it is quite possible that a much larger downward adjustment than a 5x discount is warranted.
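To make the conjunctive point above concrete, here is a toy sketch in Python with purely illustrative discount factors of my own (they are not numbers taken from Reisner et al. or anyone else): if several of the assumptions behind the high-end estimates were each only moderately optimistic, the compounded discount alone could approach or exceed 1⁄100.

```python
# Toy sketch with purely illustrative, made-up discount factors (not from any paper):
# conjunctive assumptions compound multiplicatively, so a few moderate discounts
# already shrink an estimate by a large factor.
illustrative_discounts = {
    "fraction of fuel that actually burns": 1 / 3,
    "fraction of soot lofted into the stratosphere": 1 / 4,
    "severity of the resulting climate response": 1 / 2,
}

combined = 1.0
for assumption, factor in illustrative_discounts.items():
    combined *= factor
    print(f"{assumption}: x{factor:.2f}")

print(f"combined discount: x{combined:.3f} (about 1/{round(1 / combined)} of the original estimate)")
# With these made-up factors the result is ~1/24; slightly larger per-assumption
# discounts would push it below 1/100.
```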
Indeed, I agree with your last statement; I was trying to sketch out that we would need a lot more work to come to a different conclusion.
Agreed. FYI, I am using a baseline soot distribution equal to a lognormal whose 5th and 95th percentiles match the lower and upper bounds of the 90% confidence interval provided by Luisa in that post (highlighted below):
Additionally, my estimate of the amount of smoke that would be lofted into the atmosphere went up from 20 Tg of smoke (90%CI: 7.9 Tg to 39 Tg of smoke) to 30 Tg of smoke (90%CI: 14 Tg to 66 Tg of smoke). Given this, the probability that a US-Russia nuclear exchange would cause a severe nuclear winter — assuming 50 Tg of smoke is the threshold for severe nuclear winter — goes up from just under 1% to about 11%.
The median of my baseline soot distribution is 30.4 Tg (= (14*66)^0.5), which is 4.15 (= 30.4/7.32) times Metaculus’ median prediction for the “next nuclear conflict”. It is unclear what “next nuclear conflict” refers to, but it sounds less severe than “global thermonuclear war”, which is the term Metaculus uses here, and which is what I am interested in. I asked for clarification of the term “next nuclear conflict” in November (in the comments), but I have not heard back.
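For reference, here is a minimal sketch in Python (my own illustration, not code from the comment) of the lognormal fit described above: it matches the 5th and 95th percentiles to the 14 Tg and 66 Tg bounds and recovers the 30.4 Tg median and the 4.15 ratio. It also computes the probability of exceeding a 50 Tg threshold under this lognormal approximation; Luisa's roughly 11% figure comes from her own distribution, which need not be lognormal, so the two need not match exactly.

```python
# Minimal sketch (my own illustration): fit a lognormal whose 5th and 95th
# percentiles equal 14 Tg and 66 Tg, then recover the quantities quoted above.
from math import exp, log
from statistics import NormalDist

p5, p95 = 14.0, 66.0                    # Tg of soot, bounds of the 90% CI
z = NormalDist().inv_cdf(0.95)          # ≈ 1.645

mu = (log(p5) + log(p95)) / 2           # mean of the underlying normal
sigma = (log(p95) - log(p5)) / (2 * z)  # standard deviation of the underlying normal

median = exp(mu)                        # = (14 * 66)**0.5 ≈ 30.4 Tg
ratio = median / 7.32                   # ≈ 4.15 times Metaculus' median prediction

# Probability of exceeding a 50 Tg "severe nuclear winter" threshold under this
# lognormal approximation (not necessarily equal to Luisa's ~11%, since her
# underlying distribution need not be lognormal).
p_severe = 1 - NormalDist(mu, sigma).cdf(log(50.0))

print(f"median = {median:.1f} Tg, ratio = {ratio:.2f}, P(soot > 50 Tg) ≈ {p_severe:.0%}")
```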
Agree with Johannes here on the bias in much of the nuclear winter work (and I say that as someone who thinks catastrophic risk from nuclear war is under-appreciated). The political motivations are fairly well known and easy to spot in the papers.
Thanks for the feedback, Christian!
I find it interesting that, despite concerns about the extent to which Robock’s group is truth-seeking, Open Philanthropy granted it 2.98 M$ in 2017 and 3.00 M$ in 2020. This does not mean Robock’s estimates are unbiased, but, if they were systematically off by multiple orders of magnitude, Open Philanthropy would presumably not have made those grants.
I do not have a view on this, because I have not looked into the nuclear winter literature (besides very quickly skimming some articles).
I don’t think you can make that inference. It is some evidence, but not an extremely strong one.
Agreed. Carl Shulman even notes this at hour 1:02 of the 80,000 Hours podcast:
https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/
Rob Wiblin: I see. So because there’s such a clear motivation for even an altruistic person to exaggerate the potential risk from nuclear winter, then people who haven’t looked into it might regard the work as not super credible because it could kind of be a tool for advocacy more than anything.
Carl Shulman: Yeah. And there was some concern of that sort, that people like Carl Sagan, who was both an anti-nuclear and antiwar activist and bringing these things up. So some people, particularly in the military establishment, might have more doubt about when their various choices in the statistical analysis and the projections and assumptions going into the models, are they biased in this way? And so for that reason, I’ve recommended and been supportive of funding, just work to elaborate on this. But then I have additionally especially valued critical work and support for things that would reveal this was wrong if it were, because establishing that kind of credibility seemed very important. And we were talking earlier about how salience and robustness and it being clear in the minds of policymakers and the public is important.
Note the earlier part of the conversation, which indicates that Shulman influenced Open Philanthropy’s funding decision for the Rutgers team:
“Robert Wiblin: So, a couple years ago you worked at the Gates Foundation and then moved to the kind of GiveWell/Open Phil cluster that you’re helping now.”
Notably, Reisner is part of Los Alamos National Laboratory, within the military establishment; they build nuclear weapons there. So both Reisner and Robock from Rutgers have their own biases.
Here’s a peer-reviewed perspective arguing that both sides of the nuclear winter debate are too extreme:
https://www.tandfonline.com/doi/pdf/10.1080/25751654.2021.1882772
I recommend the Lawrence Livermore paper on the topic: https://www.osti.gov/biblio/1764313
It seems like a much less biased middle ground, and generally shows that nuclear winter is still really bad, on the order of 1⁄2 to 1⁄3 as “bad” as Rutgers tends to say it is.