To complement my initial comment, here are some things that struck me while reviewing which I think should be considered in a more extensive treatment of this question (and I agree with the main conclusion: this looks like a crucial consideration worthy of study):
(1) Way too much trust in the nuclear winter literature. The primary paper cited (https://www.nature.com/articles/s43016-022-00573-0) comes out of the Robock group, which is (i) very clearly politically motivated, and (ii) in general produces scary-looking predictions by combining a string of implausible worst-case assumptions. When I looked into this for a couple of days in 2019, it was very clear that this was not a particularly truth-seeking literature: it was a couple of people who have worked on nuclear winter since the 1980s citing each other, very clearly driven by the goal of reducing nuclear risk by uptalking nuclear winter. (iii) Work that looks, on the face of it, far more rigorous comes to vastly different conclusions on the plausibility of nuclear winter scenarios (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017JD027331).
Even if one puts some credence in the Robock group being right, one should heavily discount it given that the only other effort modeling this comes to vastly different conclusions.
(2) Comparing apples to oranges: The reason this is particularly problematic for this comparison is that the main sources for existential risk from climate are estimates that are definitely not driven by exaggerating the risk, but are either from forecasters, or from EA sources that, insofar as they have a bias, are likely to have the opposite one (underplaying climate risk). So what that gives you is something like comparing a 99th-percentile estimate for nuclear winter with something like a 50th-percentile (if unbiased) or 35th-percentile (if you assume some downward bias) estimate for climate.
An apples to apples comparison here would be comparing the nuclear winter estimate to a climate change scenario that assumes an extreme emissions scenario (say RCP 8.5) and no adaptation at all.
(3) Getting to better estimates: I don’t think either of the estimates here is really derived with enough effort for their combination to be meaningfully action-guiding. For the climate ones, which I know better, these appear to be quick guesses made to complement much broader qualitative analyses and to make points that seem qualitatively right (e.g. “climate x-risk is lower than commonly imagined”, “other x-risks are significantly larger”, etc.), but they aren’t done in a way that supports conclusions requiring the granularity needed to answer your question. Just to be clear, this is not meant to be fatalistic with respect to ever comparing them; I think we should get to those estimates eventually, but we need better estimates of the constituent variables first.
Great feedback, Johannes! Some thoughts below.
(1) Way too much trust in the nuclear winter literature. The primary paper cited (https://www.nature.com/articles/s43016-022-00573-0) comes out of the Robock group which is (i) very clearly politically motivated, (ii) in general produces scary looking predictions by combining a string of implausible worst case asumptions.
Luisa Rodriguez (an EA-aligned source) estimated 5.5 billion deaths in expectation for a US-Russia nuclear war:
By my estimation, a nuclear exchange between the US and Russia would lead to a famine that would kill 5.5 billion people in expectation (90% confidence interval: 2.7 billion to 7.5 billion people).
Xia 2022 gets 5 billion people without food at the end of year 2 (the worst year) for the most severe nuclear winter scenario of 150 Tg, so the expected death toll would be lower than Luisa’s (since 150 Tg is a worst-case scenario).
(2) Comparing apples to oranges: The reason this is particularly problematic for this comparison is that the main sources for existential risk from climate are estimates that are definitely not driven by exaggerating the risk, but are either from forecasters, or from EA sources that, insofar as they have a bias, are likely to have the opposite (underplaying climate risk).
If I assume the risk from ASRSs is 1⁄4.83 times as high (since my estimate for the risk from ASRSs is 4.83 times Toby Ord’s estimate for the existential risk from nuclear war, an EA-aligned source), I get an optimal median global warming in 2100 of 2.3 ºC. Trusting this value, and the current predictions for global warming (roughly 2 to 3 ºC by 2100), decreasing emissions would not be much better/worse than neutral.
(3) Getting to better estimates: I don’t think neither of the estimates in here are really derived with enough effort to be interpreted that combining them becomes meaningfully action-guiding.
Would you agree with the statement below?
[My sensitivity analysis indicates the optimal median global warming can range from 0.1 to 4.3 ºC, so:] The takeaway for me is that we do not really know whether additional GHG emissions are good/bad.
It would be good to hear from @Luisa_Rodriguez on this—my recollection is that she also became a lot more skeptical of the Robock estimates so I am not sure she would still endorse that figure.
For example, after the post you cite, she wrote (emphasis mine):
“I also added a bit more on the controversy behind the foundational nuclear winter research, which is quite controversial (for example, see: Singer, 1985; Seitz, 2011; Robock, 2011; Coupe et al., 2019; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018).[1] I hope to write more about this controversy in the future, but for the purposes of my estimations, I’ve assumed that the nuclear winter research comes to the right conclusion. However, if one discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.”
Given the conjunctive nature of the risk (many unlikely conditions all need to be true; see e.g. the Reisner et al. work), I would not be shocked at all if the risk from nuclear winter were < 1⁄100 of the estimate of the Robock group.
In any case, my main point is more that if one looked into the nuclear winter literature with the same rigor with which John has looked into climate risk, one would not come out anywhere close to the estimates of the Robock group as the median, and it is quite possible that a much larger downward adjustment than a 5x discount is warranted.
I do agree with your last statement; I was trying to sketch out that we would need a lot more work to come to a different conclusion.
It would be good to hear from @Luisa_Rodriguez on this—my recollection is that she also became a lot more skeptical of the Robock estimates so I am not sure she would still endorse that figure.
Agreed. FYI, I am using a baseline soot distribution equal to a lognormal whose 5th and 95th percentiles match the lower and upper bounds of the 90 % confidence interval provided by Luisa in that post (highlighted below):
Additionally, my estimate of the amount of smoke that would be lofted into the atmosphere went up from 20 Tg of smoke (90%CI: 7.9 Tg to 39 Tg of smoke) to 30 Tg of smoke (90%CI: 14 Tg to 66 Tg of smoke). Given this, the probability that a US-Russia nuclear exchange would cause a severe nuclear winter — assuming 50 Tg of smoke is the threshold for severe nuclear winter — goes up from just under 1% to about 11%.
The median of my baseline soot distribution is 30.4 Tg (= (14*66)^0.5), which is 4.15 (= 30.4/7.32) times Metaculus’ median prediction for the “next nuclear conflict”. It is unclear what “next nuclear conflict” refers to, but it sounds less severe than “global thermonuclear war”, which is the term Metaculus uses here, and what I am interested in. I asked for a clarification of the term “next nuclear conflict” in November (in the comments), but I have not heard back.
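The lognormal fit described above can be sketched in a few lines of stdlib Python (a minimal check, not the actual model; only the numbers quoted above are used, and the variable names are mine):

```python
from math import exp, log
from statistics import NormalDist

# 90% CI for soot from Luisa's post: 5th percentile 14 Tg, 95th percentile 66 Tg.
low, high = 14.0, 66.0
z95 = NormalDist().inv_cdf(0.95)  # ≈ 1.645

# If X is lognormal, ln(X) is normal with mean mu and standard deviation sigma.
mu = (log(low) + log(high)) / 2           # midpoint in log-space
sigma = (log(high) - log(low)) / (2 * z95)

median = exp(mu)  # equals (14 * 66)**0.5
print(round(median, 1))         # 30.4 Tg
print(round(median / 7.32, 2))  # 4.15, the ratio to Metaculus' median

# Probability of exceeding a 50 Tg "severe winter" threshold under this fit.
# This gives roughly 15%, vs the ~11% from Luisa's own distribution, so the
# lognormal is only an approximation of her (non-lognormal) model.
p_severe = 1 - NormalDist(mu, sigma).cdf(log(50))
```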
Agree with Johannes here on the bias in much of the nuclear winter work (and I say that as someone who thinks catastrophic risk from nuclear war is under-appreciated). The political motivations are fairly well-known and easy to spot in the papers.
Thanks for the feedback, Christian!
I find it interesting that, despite concerns about the extent to which Robock’s group is truth-seeking, Open Philanthropy granted it 2.98 M$ in 2017, and 3.00 M$ in 2020. This does not mean Robock’s estimates are unbiased, but, if they were systematically off by multiple orders of magnitude, Open Philanthropy would presumably not have made those grants.
I do not have a view on this, because I have not looked into the nuclear winter literature (besides very quickly skimming some articles).
I don’t think you can make that inference. Like, it is some evidence, but not an extremely strong one.
Agreed. Carl Shulman at hour 1:02 of the 80k podcast even notes:
https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/
Rob Wiblin: I see. So because there’s such a clear motivation for even an altruistic person to exaggerate the potential risk from nuclear winter, then people who haven’t looked into it might regard the work as not super credible because it could kind of be a tool for advocacy more than anything.
Carl Shulman: Yeah. And there was some concern of that sort, that people like Carl Sagan, who was both an anti-nuclear and antiwar activist and bringing these things up. So some people, particularly in the military establishment, might have more doubt about when their various choices in the statistical analysis and the projections and assumptions going into the models, are they biased in this way? And so for that reason, I’ve recommended and been supportive of funding, just work to elaborate on this. But then I have additionally especially valued critical work and support for things that would reveal this was wrong if it were, because establishing that kind of credibility seemed very important. And we were talking earlier about how salience and robustness and it being clear in the minds of policymakers and the public is important.
Note an earlier part of the conversation indicating Shulman influenced Open Philanthropy’s funding decision for the Rutgers team:
“Robert Wiblin: So, a couple years ago you worked at the Gates Foundation and then moved to the kind of GiveWell/Open Phil cluster that you’re helping now.”
Notably, Reisner is part of Los Alamos, in the military establishment. They build nuclear weapons there. So both Reisner and Robock (from Rutgers) have their own biases.
Here’s a peer-reviewed perspective that shows the flaws in both perspectives on nuclear winter as being too extreme:
https://www.tandfonline.com/doi/pdf/10.1080/25751654.2021.1882772
I recommend the Lawrence Livermore paper on the topic: https://www.osti.gov/biblio/1764313
It seems like a much less biased middle ground, and generally shows that nuclear winter is still really bad, on the order of 1⁄2 to 1⁄3 as “bad” as Rutgers tends to say it is.
Here is one attempt to compare apples to apples. According to the Metaculus community, the probability of global population decreasing by at least 95 % by 2100 is:
Due to nuclear war, 0.608 % (= 38 % * 32 % * 5 %). This is the product of:
38 % probability of population decreasing by at least 10 %.
32 % probability of population decreasing by at least 10 % due to nuclear war, if population decreases by at least 10 %.
5 % probability of population decreasing by at least 95 % due to nuclear war, if population decreases by at least 10 % due to nuclear war.
Due to climate change, lower than 0.0228 % (= 38 % * 6 % * 1 %). I say lower because Metaculus’ probabilistic predictions have to be between 1 % and 99 %, which means 1 % can be anything from 0 to 1.5 %.
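As a sanity check, the two chained products can be reproduced directly (a quick sketch of the arithmetic above; the three factors in each chain are the Metaculus community predictions just listed):

```python
# P(>=95% population loss by 2100) as a product of Metaculus predictions.
p_nuclear = 0.38 * 0.32 * 0.05  # >=10% loss, nuclear | >=10%, >=95% | >=10% nuclear
p_climate = 0.38 * 0.06 * 0.01  # same chain for climate (last factor floored at 1%)

print(f"{p_nuclear:.3%}")  # 0.608%
print(f"{p_climate:.4%}")  # 0.0228%
```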
In agreement with these, I set the reduction in the value of the future for a median global warming in 2100 relative to 1880 of 2.4 ºC due to:
ASRSs to a value 1.74 (= 60.8/34.9) times as high.
Climate change to a value 6.20 (= 2.28/0.365) times as high.
These resulted in an optimal median global warming in 2100 of 2.3 ºC[1], so Metaculus’ predictions suggest decreasing emissions is not much better/worse than neutral. This conclusion is not resilient, but is some evidence that comparing apples to apples does not lead to a median global warming in 2100 much different from the one we are heading towards.
It is unclear to me whether Metaculus’ community is overestimating/underestimating the risk of nuclear war relative to that of climate change. However, I think comparing the risk from climate change and nuclear war based on the probability of a reduction of at least 95 % of the global population will tend to underestimate the risk from nuclear war, because:
I believe the probability of a population loss greater than 95 % (e.g. 99.9 % or extinction) would be a better proxy for the reduction in the value of the future.
Metaculus’ community predictions suggest nuclear war becomes more likely relative to climate change as the population loss increases. The probability of global population decreasing by 2100 by at least:
10 % due to nuclear war is 5.33 (= 32⁄6) times that due to climate change.
95 % due to nuclear war is 26.7 (= (32 * 5)/(6 * 1)) times that due to climate change.
Note the climate change probability here is likely an overestimate (and hence the 26.7 an underestimate), since the 3rd factor I used to calculate the risk from climate change based on Metaculus’ predictions is artificially limited to 1 %.
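Since the 38 % first factor is common to both chains, the two ratios reduce to the conditional factors alone; a quick check of the arithmetic above:

```python
# Nuclear-to-climate ratio of P(population loss >= threshold).
# The shared 38% first factor cancels out of both ratios.
ratio_10 = 0.32 / 0.06                     # >=10% loss
ratio_95 = (0.32 * 0.05) / (0.06 * 0.01)   # >=95% loss

print(round(ratio_10, 2))  # 5.33
print(round(ratio_95, 1))  # 26.7
```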
Thanks for doing this (upvoted!).
This moves me somewhat, though I would still really love to see a serious examination of nuclear winter to get better estimates. Getting to 5 % for a 95 % population loss conditional on nuclear war seems really high, especially given it does not seem to condition on great power war alone.
Agreed!
I have clarified above that the 5 % is conditional on a nuclear war causing a population loss of at least 10 % (not just a nuclear war).
Thanks, this makes a lot of sense then!
I say lower because Metaculus’ probabilistic predictions have to be between 1 % and 99 %, which means 1 % can be anything from 0 to 1.5 %.
I have recently noticed Metaculus allows for predictions as low as 0.1 %. I do not know when this was introduced, but, if it was introduced long ago and forecasters are aware of it, the 0.0228 % chance of a 95 % population loss due to climate change may not be an overestimate.
It was less than 1 year ago, I would guess around 6 months ago.
Thanks! In that case, 92.5 % (= 160⁄173) of the predictions for a population loss of 95 % due to climate change given a 10 % loss due to climate change were made with the 1 % lower limit. So I assume 0.0228 % chance for a 95 % population loss due to climate change is still an overestimate.
I think that last point is really quite important. There hasn’t really been any good quantification of climate and x-risk (John’s work is good, but fwiw I think the standard and type of evidence he requires means that most meaningful contributors to x-risk would be lost, because they are low enough probability or not concrete enough to be captured in his analysis).
For reference, I think these are the groups looking into this (you had already mentioned the first 2):
National Center for Atmospheric Research (NCAR), University of Colorado, and Rutgers University (Mills 2014), which is Robock’s group.
Los Alamos National Laboratory (Reisner 2018).
Lawrence Livermore National Laboratory (Wagman 2020).
National Academies of Sciences, Engineering, and Medicine, as mandated by the US Congress. The study is 6 months overdue.
The approaches of Mills 2014 and Reisner 2018 are compared in Hess 2021. Wagman 2020 also arrives at a significantly more optimistic conclusion than Robock’s group (emphasis mine):
Plain Language Summary
If the detonation of nuclear weapons causes large fires, the smoke emissions could block sunlight and affect the global climate. A commonly studied scenario is the climate impact that would be caused by the detonation of one hundred 15 kt nuclear weapons in a “regional nuclear exchange” between India and Pakistan (Mills et al., 2014, https://doi.org/10.1002/2013EF000205, Reisner et al., 2018, https://doi.org/10.1002/2017JD027331). We simulate the global climate impacts of this scenario using new models for predicting the fire plume and climate and find that, when smoke from the fires remains in the lower troposphere, it is quickly removed and the climate impact is minimal. Conversely, when fires inject smoke into the upper troposphere or higher, more smoke is transported to the stratosphere where enough light is blocked to cause global surface cooling. Our simulations show that the smoke from 100 simultaneous firestorms would block sunlight for about 4 yr, instead of the 8 to 15 yr predicted in other models. Climate impacts are also shown to be sensitive to assumptions about the composition of the smoke. Additionally, we show that the global effects of the fires are sensitive to fuel availability and consumption, factors which are uncertain for cities in India and Pakistan.
However, it is important to keep in mind that an ASRS would also cause disruptions to the energy system, which would tend to make things worse.
(1) Way too much trust in the nuclear winter literature.
Relatedly, I recently came across this nice post by Naval Gazing (via XPT’s report, I think). The author’s conclusions suggest Toon 2008 (the study whose results are used to model the 150 Tg scenario in Xia 2022) overestimates the soot ejected into the stratosphere by a factor of 191 (= 1.5*2*2*(1 + 2)/2*(2 + 3)/2*(4 + 13)/2):
I have not fact-checked the post, but I encouraged the author to crosspost it to EA Forum, and offered to do it myself.
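For reference, multiplying out the factors in the formula above (taking the midpoint of each quoted range, as the formula does) reproduces the roughly 191x figure. A quick check of the arithmetic only, not of the post’s individual adjustment factors:

```python
# Bean's adjustment factors, with the midpoint of each quoted range:
# 1.5, 2, 2, (1+2)/2, (2+3)/2, (4+13)/2.
factors = [1.5, 2, 2, (1 + 2) / 2, (2 + 3) / 2, (4 + 13) / 2]

overestimate = 1.0
for f in factors:
    overestimate *= f

print(round(overestimate))  # 191
```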
I think this would be very valuable to post as its own post, given that a lot of trust is still put in the nuclear winter literature from Robock, Toon et al.
Done!
Just a side note. The study you mention as especially rigorous in (1)(iii) (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017JD027331) was made at Los Alamos National Laboratory, an organization whose job it is to make sure that the US has a large and working stockpile of nuclear weapons. It is financed by the US military and therefore has a very clear incentive to talk down the dangers of nuclear winter. For this reason, this study has been mentioned as not to be trusted by several well-connected people in the nuclear space I talked to.
An explanation of why it makes sense to talk down the risk of nuclear winter, if you want to have working deterrence, is described here: https://www.jhuapl.edu/sites/default/files/2023-05/NuclearWinter-Strategy-Risk-WEB.pdf
Yes, I am aware of this, and if this space were closer to my grantmaking, I’d be excited to fund a fully neutral study into these questions.
That said, the extremely obvious bias in the Robock and Toon papers should still lead one to heavily discount their work.
As @christian.r, who is a nuclear risk expert, noted in another thread, the bias of Robock et al. is also well-known among experts, yet many EAs still seem to take them quite seriously, which I find puzzling and not really justifiable.
Yeah, fair enough. I personally view the Robock et al. papers as the “let’s assume that everything happens according to the absolute worst case” side of things. From this perspective, they can be quite helpful in getting an understanding of what might happen: not in the sense that it is likely, but in the sense of what is even remotely in the cards.
I am still a bit skeptical, because I don’t think it would be surprising if the worst case of what can actually happen were much less bad than what Robock et al. model. I think the search process for that literature was more “the worst chain we can imagine and get published”, i.e. I don’t think it is really inherently bound to anything in the real world (different from, say, things that are credibly modeled by different groups, where the differences are about the plausibility of different parameter estimates).
Yes, since the case for nuclear winter is quite multiplicative, if too many pessimistic assumptions were stacked together, the final result would be super pessimistic. Luisa and Denkenberger 2018 modelled variables as distributions to mitigate this, and did arrive at more optimistic estimates. From Fig. 1 of Toon 2008, the soot ejected into the stratosphere accounting for only the US and Russia is 55.0 Tg (= 28.1 + 26.9). Luisa estimated 55 Tg to be the 92nd percentile assuming “the nuclear winter research comes to the right conclusion”:
However, Luisa and Denkenberger 2018 still broadly relied on the nuclear winter literature. Johannes commented he “would not be shocked at all if the risk from nuclear winter would be < 1⁄100 than the estimate of the Robock group”, which would be in agreement with Bean’s BOTEC.
Lawrence Livermore National Laboratory (Wagman 2020).
National Academies of Sciences, Engineering, and Medicine, as mandated by the US Congress. The study is 6 months overdue.
The approaches of Mills 2014 and Reisner 2018 are compared in Hess 2021. Wagman 2020 also arrives at a significantly more optimistic conclusion than Robock’s group (emphasis mine):
However, it is important to have in mind that an ASRS would also cause disruptions to the energy system, which would tend to make it worse.
Relatedly, I recently came across this nice post by Naval Gazing (via XPT’s report, I think). The author’s conclusions suggest Toon 2008 (the study whose results are used to model the 150 Tg scenario in Xia 2022) overestimates the soot ejected into the stratosphere by a factor of 191 (= 1.5*2*2*(1 + 2)/2*(2 + 3)/2*(4 + 13)/2):
I have not fact-checked the post, but I encouraged the author to crosspost it to EA Forum, and offered to do it myself.
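For reference, Bean’s correction factors multiply out as stated (a sketch only; the individual factors are taken from the Naval Gazing post, with the ranges 1-2, 2-3 and 4-13 replaced by their midpoints, as in the formula above):

```python
# Product of Bean's correction factors to Toon 2008's soot estimate.
# Ranges (1-2, 2-3, 4-13) are replaced by their midpoints, as in the
# formula quoted above.
factors = [1.5, 2, 2, (1 + 2) / 2, (2 + 3) / 2, (4 + 13) / 2]

overestimate = 1
for f in factors:
    overestimate *= f

print(round(overestimate))  # 191
```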
I think this would be very valuable as its own post, given that a lot of trust is still put in the nuclear winter literature from Robock, Toon et al.
Done!
Just a side note. The study you mention as especially rigorous in (1)(iii) (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017JD027331) was conducted at Los Alamos National Laboratory, an organization whose job it is to make sure that the US has a large and working stockpile of nuclear weapons. It is financed by the US military and therefore has a very clear incentive to talk down the dangers of nuclear winter. For this reason, the study has been mentioned as not to be trusted by several well-connected people in the nuclear space I talked to.
An explanation of why it makes sense to talk down the risk of nuclear winter, if you want to have a working deterrent, is described here: https://www.jhuapl.edu/sites/default/files/2023-05/NuclearWinter-Strategy-Risk-WEB.pdf
Yes, I am aware of this and if this space was closer to my grantmaking, I’d be excited to fund a fully neutral study into these questions.
That said, the extremely obvious bias in the stuff of the Robock and Toon papers should still lead one to heavily discount their work.
As @christian.r who is a nuclear risk expert noted in another thread, the bias of Robock et al is also well-known among experts, yet many EAs still seem to take them quite seriously which I find puzzling and not really justifiable.
Here’s hoping that the new set of studies on this funded by FLI (~$4 million) will shed light on the issue within the next few years.
https://futureoflife.org/grant-program/nuclear-war-research/
Yeah, fair enough. I personally view the Robock et al. papers as the “let’s assume that everything happens according to the absolute worst case” side of things. From this perspective they can be quite helpful in getting an understanding of what might happen. Not in the sense that it is likely, but in the sense of what is even remotely in the cards.
Yeah, that seems the best use of these estimates.
I am still a bit skeptical, because I do not think it would be surprising if the worst case of what can actually happen is much less severe than what Robock et al. model. I think the search process for that literature was more “worst chain we can imagine and get published”, i.e. I do not think it is really inherently bound to anything in the real world (unlike, say, things that are credibly modeled by different groups, where the differences are about the plausibility of different parameter estimates).
Yes, since the case for nuclear winter is quite multiplicative, stacking too many pessimistic assumptions together would make the final result super pessimistic. Luísa and Denkenberger 2018 modelled variables as distributions to mitigate this, and did arrive at more optimistic estimates. From Fig. 1 of Toon 2008, the soot ejected into the stratosphere accounting for only the US and Russia is 55.0 Tg (= 28.1 + 26.9). Luísa estimated 55 Tg to be the 92nd percentile assuming “the nuclear winter research comes to the right conclusion”:
Denkenberger 2018 estimated 55 Tg to be the 80th percentile:
However, Luísa and Denkenberger 2018 still broadly relied on the nuclear winter literature. Johannes commented he “would not be shocked at all if the risk from nuclear winter would be < 1⁄100 than the estimate of the Robock group”, which would be in agreement with Bean’s BOTEC.
Quick update. I made a comment with estimates for the probability of the amounts of soot injected into the stratosphere studied in Xia 2022.