Thank you for this work. I appreciate the high-level transparency throughout (e.g., what is an opinion, how many sources have been read/incorporated, reasons for assumptions, etc.)!
I have a few key (dis)agreements and considerations. Disclaimer: I work for ALLFED (Alliance to Feed the Earth in Disasters), where we look at preparedness and response to nuclear winter, among other things.
1) Opportunity Costs
I do not think that work on preventing the worst nuclear conflicts, or work on preparedness/response, has to be mutually exclusive with preventing nuclear conflict in general.
My intuition is that if you are working on preventing the worst nuclear conflicts, then you (also) have to work on escalation steps. An understanding of how wars escalate, and of what we can do about it, seems very useful in general, no matter whether a war goes from 0 to ~10 nukes or escalates from 10 to 100 nukes. At each step we would want to intervene. I do not know what a specialization would look like that is only relevant at the 100-to-1,000-nukes step. I know that me not being able to imagine such a specialization is only a weak argument, but I am also not aware of anyone looking only at such a niche problem.
Additionally, preparedness/response work has multiple uses. Nuclear winter is only one source of an abrupt sunlight reduction scenario (ASRS), the others being supervolcanic eruptions and asteroid impacts (though one can argue that nuclear winter is the most likely of the three). Having ‘slack’ critical infrastructure (either through storage or through the capacity to quickly scale up post-catastrophe) is also helpful in many scenarios. Examples: resilient communication tech is helpful if communication is disrupted either by war or by, say, a solar storm (the same goes for electricity and water supply). The ability to scale up food production is useful if we have Multiple Bread Basket Failures due to coinciding extreme weather events, or if we have an agricultural shortfall due to a nuclear winter. In both cases we would want to know how feasible it is to quickly ramp up greenhouses (to give one example).
Lastly, I expect these different interventions to require different skillsets (e.g., civil engineers vs. policy scholars). (Not always; surely there will be overlap.) So the opportunity costs would fall more on the funding side of the cause area, less so on the talent side.
2) Neglectedness
I agree that the cause area as a whole is neglected, and I share the concerns around reduced funding. But within the broader cause area of ‘nuclear conflict’, the tail risks and preparedness/response are even more neglected. Barely anyone is working on them, and looking into highly neglected areas to add more value per person working on the problem is one strength of the EA community. I don’t have numbers, but I would expect there to be at least 100 times more people working on preventing nuclear war and informing policymakers about the potential negative consequences, because, as you rightly stated, one does not need to be utilitarian, consequentialist, or longtermist to not want nukes to be used under any circumstances.
and 3) High uncertainty around interventions
Exactly because of the uncertainty you mentioned, I think we should not rely on a narrow set of interventions and should go broader. You can discount the likelihood, run your own numbers, and advocate for your ideal funding distribution between interventions, but I think we cannot rule out nuclear winter happening, and therefore some funding should go to response.
For context: Some put the probability of nuclear war causing extinction at (only) 0.3% this century. Or here is ALLFED’s cost-effectiveness model looking at ‘agriculture shortfalls’ and their long-term impact, making the case that the marginal dollar in this area is extremely valuable.
In general I strongly agree with your argument that more efforts should go into prevention of any kind of nuclear war. I do not agree that this should happen at the expense of other interventions (such as working on response/preparedness).
4) Premise 1 --> Civilizational Collapse (through escalation after a single nuke)
You write that a nuclear attack could cause a global conflict (agree) which could then escalate to civilizational collapse (and therefore pose an xrisk) even if no further nukes are being used (strong disagree).
I do not see a plausible pathway to that, even in an all-out conventional-explosives kind of war: I would expect our supply chains to fail, and us to run out of things to throw at each other, well before civilization itself collapses. Am I missing something here?
Tongue in cheek related movie quote:
A: “Eye for an eye and the world goes blind.”
B: “No it doesn’t. There’ll be one guy left with one eye.”
But I do not think it changes much of what you write here, even if you cut out this one consideration. It is only a minor point, not a crux. I agree that a single nuke can cause significant escalation, though.
5) Desensitizing / Language around size of events
I am also saddened to hear that someone was dismissive about an India/Pakistan nuclear exchange. I agree that that is worrisome.
I think that Nuclear Autumns (up to ~25 Tg (million tons) of soot entering the atmosphere) still pose a significant risk and could cause ~1 billion deaths through famines plus cascading effects, that is, if we do not prepare. So dismissing such a scenario seems pretty bad to me.
Thanks for taking the time to read through the whole thing and leaving this well-considered comment! :)
In response to your points:
1) Opportunity costs
“I do not know how a specialization would look like that is only relevant at the 100 to 1000 nukes step. I know me not being able to imagine such a specialization is only a weak argument but I am also not aware of anyone only looking at such a niche problem.”—If this is true and if people who express concern mainly/only for the worst kinds of nuclear war are actually keen on interventions that are equally relevant for preventing any deployment of nuclear weapons, then I agree that the opportunity cost argument is largely moot. I hope your impressions of the (EA) field in this regard are more accurate than mine!
My main concern with preparedness interventions is that they may give us a false sense of ameliorating the danger of nuclear escalation (i.e., “we’ve done all these things to prepare for nuclear winter, so now the prospect of nuclear escalation is not quite as scary and unthinkable anymore”). So I guess I’m less concerned about these interventions the more they are framed as attempts to increase general global resilience, because that seems to de-emphasize the idea that they are effective means to substantially reduce the harms incurred by nuclear escalation. Overall, this is a point that I keep debating in my own mind and where I haven’t come to a very strong conclusion yet: There is a tension in my mind between the value of system slack (which is large, imo) and the possible moral hazard of preparing for an event that we should simply never allow to occur in the first place (i.e.: preparation might reduce the urgency and fervor with which we try to prevent the bad outcome in the first place).
I mostly disagree on the point about skillsets: I think both intervention targets (a focus on tail risks vs. preventing any nuclear deployment) are big enough to require input from people with very diverse skillsets, so I think it will be relatively rare for a person to only be able to meaningfully contribute to one of the two. In particular, I believe that both problems need policy scholars, activists, and policymakers, and a focus on the preparation side might lead people in those fields to focus less on the goal of preventing any kind of nuclear deployment.
2) Neglectedness:
I think you’re empirically right about the relative neglectedness of tail-ends & preparedness within the nuclear risk field.
(I’d argue that this becomes less pronounced as you look at neglectedness not just as “number of people-hours” or “amount of money” dedicated to a problem, but also factor in how capable those people are and how effectively the money is spent (I believe that epistemically rigorous work on nuclear issues is severely neglected and I have the hope that EA engagement in the field could help ameliorate that).)
That said, I must admit that the matter of neglectedness is a very small factor in convincing me of my stance on the prioritization question here. As explained in the post, I think that a focus on the tail risks and/or on preparedness is plausibly net negative because of the intractability of working on them and because of the plausible adverse consequences. In that sense, I am glad that those two are neglected and my post is a plea for keeping things that way.
3) High uncertainty around interventions: Similar thoughts to those expressed above. I have an unresolved tension in my mind when it comes to the value of preparedness interventions. I’m sympathetic to the case you’re making (heck, I even advocated (as a co-author) for general resilience interventions in a different post a few months ago); but, at the moment, I’m not exactly sure how to square that sympathy with the concerns I simultaneously have about preparedness rhetoric and action (at least in the nuclear risk field, where the danger of such rhetoric being misused seems particularly acute, given vested interests in maintaining the system and status quo).
4) Civilizational Collapse:
My claim about civilizational collapse in the absence of the deployment of multiple nukes is based on the belief that civilizations can collapse for reasons other than weapons-induced physical destruction.
Some half-baked, very fuzzy ideas of how this could happen are: destruction of communities’ social fabric and breakdown of governance regimes; economic damage, breakdown of trade and financial systems, and attendant social and political consequences; cyber warfare, and attendant social, economic, and political consequences.
I have not spent much time trying to map out the pathways to civilizational collapse, and it could be that such a scenario is much less conceivable than I currently imagine. I think I’m currently working on the heuristic that societies and societal functioning are hyper-complex, and that I have little ability to actually imagine how big disruptions (like a nuclear conflict) would affect them, which is why I shouldn’t rule out the chance that such disruptions cascade into collapse (through chains of events that I cannot anticipate now).
(While writing this response, I just found myself staring at the screen for a solid 5 minutes and wondering whether using this heuristic is bad reasoning or a sound approach on my part; I lean towards the latter, but might come back to edit this comment if, upon reflection, I decide it’s actually more the former)
On moral hazard, I did some analysis in a journal article of ours:
“Moral hazard would be if awareness of a food backup plan makes nuclear war more likely or more intense. It is unlikely that, in the heat of the moment, the decision to go to nuclear war (whether accidental, inadvertent, or intentional) would give much consideration to the nontarget countries. However, awareness of a backup plan could result in increased arsenals relative to business as usual, as awareness of the threat of nuclear winter likely contributed to the reduction in arsenals [74]. Mikhail Gorbachev stated that a reason for reducing the nuclear arsenal of the USSR was the studies predicting nuclear winter and therefore destruction outside of the target countries [75]. One can look at how much nuclear arsenals changed while the Cold War was still in effect (after the Cold War, reduced tensions were probably the main reason for reduction in stockpiles). This was ~20% [76]. The perceived consequences of nuclear war changed from hundreds of millions of dead to billions of dead, so roughly an order of magnitude. The reduction in damage from reducing the number of warheads by 20% is significantly lower than 20% because of marginal nuclear weapons targeting lower population and fuel loading density areas. Therefore, the reduction in impact might have been around 10%. Therefore, with an increase in damage with the perception of nuclear winter of approximately 1000% and a reduction in the damage potential due to a smaller arsenal of 10%, the elasticity would be roughly 0.01. Therefore, the moral hazard term of loss in net effectiveness of the interventions would be 1%.”
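To make the quoted back-of-the-envelope estimate easier to follow, here is the same arithmetic laid out explicitly. This is only a sketch: the numbers come from the quoted passage, and the variable names and the interpretation in the comments are mine.

```python
# Sketch of the moral-hazard elasticity estimate from the quoted passage.
# All numbers are taken from the quote; variable names are my own.

arsenal_reduction = 0.20          # ~20% cut in warheads during the Cold War [76]
damage_reduction = 0.10           # marginal warheads target lower-density areas,
                                  # so damage falls by only ~half the arsenal cut
perceived_damage_increase = 10.0  # nuclear winter research raised perceived deaths
                                  # from hundreds of millions to billions (~1000%)

# Elasticity: relative change in damage potential per relative change
# in the perceived consequences of nuclear war.
elasticity = damage_reduction / perceived_damage_increase  # 0.10 / 10 = 0.01

# Moral hazard term: estimated loss in net effectiveness of preparedness
# interventions, interpreted in the article as equal to the elasticity (~1%).
moral_hazard_loss = elasticity

print(f"elasticity ~ {elasticity:.2f}; moral hazard loss ~ {moral_hazard_loss:.0%}")
```

The point of spelling it out is that the conclusion ("~1%") is driven almost entirely by the ratio of a 10% damage reduction to a roughly tenfold increase in perceived damage.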
Also, as Aron pointed out, resilience protects against other catastrophes, such as supervolcanic eruptions and asteroid/comet impacts. Similarly, there is some evidence that people drive less safely if they are wearing a seatbelt, but overall we are better off with a seatbelt. So I don’t think moral hazard is a significant argument against resilience.
I think direct cost-effectiveness analyses like this journal article are more robust, especially for interventions, than Importance, Neglectedness, and Tractability. But it is interesting to think about tractability separately. It is true that there is a lot of uncertainty about what the environment would be like post-catastrophe. However, we have calculated that resilient foods would greatly improve the situation both with and without global food trade, so I think they are a robust intervention. Also, if you look at the state of resilience to nuclear winter pre-2014, it was basically to store up more food, which would cost tens of trillions of dollars, would not protect you right away, and, if done fast, would raise prices and exacerbate current malnutrition. In 2014, we estimated that resilient foods could technically be scaled up to feed everyone. And in the last eight years, we have done research estimating that it could also be done affordably for most people. So I think there has been a lot of progress with just a few million dollars spent, indicating tractability.
“I mostly disagree on the point about skillsets: I think both intervention targets (focus on tail risks vs. preventing any nuclear deployment) are big enough to require input from people with very diverse skillsets, so I think it will be relatively rare for a person to be able to only meaningfully contribute to either of the two. In particular, I believe that both problems are in need of policy scholars, activists, and policymakers and a focus on the preparation side might lead people in those fields to focus less on the preventing any kind of nuclear deployment goal.”
I think that Aron was talking about prevention versus resilience. Resilience requires more engineering.
Thanks for your comment and for adding to Aron’s response to my post!
Before reacting point-by-point, one more overarching warning/clarification/observation: My views on the disvalue of numerical reasoning and the use of BOTECs in deeply uncertain situations are quite unusual within the EA community (though not unheard of, see for instance this EA Forum post on “Potential downsides of using explicit probabilities” and this GiveWell blog post on “Why we can’t take expected value estimates literally (even when they’re unbiased)” which acknowledge some of the concerns that motivate my skeptical stance). I can imagine that this is a heavy crux between us and that it makes advances/convergence on more concrete questions (esp. through a forum comments discussion) rather difficult (which is not at all meant to discourage engagement or to suggest I find your comments unhelpful (quite the contrary); just noting this in an attempt to avoid us arguing past each other).
On moral hazards:
In general, my deep-seated worries about moral hazard and other normative adverse effects feel somewhat inaccessible to numerical/empirical reasoning (at least until we come up with much better empirical research strategies for studying complex situations). To be completely honest, I can’t really imagine arguments or evidence that would be able to substantially dissolve the worries I have. That is not because I’m consciously dogmatic and unwilling to budge from my conclusions, but rather because I don’t think we have the means to know empirically to what extent these adverse effects actually exist/occur. It thus seems that we are forced to rely on fundamental worldview-level beliefs (or intuitions) when deciding on our credences for their importance. This is a very frustrating situation, but I just don’t find attempts to escape it (through relatively arbitrary BOTECs or plausibility arguments) in any sense convincing; they usually seem to me to be elaborate cognitive schemes for resolving a level of deep empirical uncertainty that simply cannot be resolved (given the structure of the world and the research methods we know of).
To illustrate my thinking, here’s my response to your example:
I don’t think that we really know anything about the moral hazard effects that interventions to prepare for nuclear winter would have had on nuclear policy and outcomes in the Cold War era.
I don’t think we have a sufficiently strong reason to assign the 20% reduction in nuclear weapons to the difference in perceived costs of nuclear escalation after research on nuclear winter surfaced.
I don’t think we have any defensible basis for making a guess about how this reduction in weapons stocks would have been different had there been efforts to prepare for nuclear winter in the 1980s.
I don’t think it is legitimate to simply claim that fear of nuclear-winter-type events has no plausible effect on decision-making in crisis situations (either consciously or sub-consciously, through normative effects such as those of the nuclear taboo). At the same time, I don’t think we have a defensible basis for guessing the expected strength of this effect of fear (or “taking expected costs seriously”) on decision-making, nor for expected changes in the level of fear given interventions to prepare for the worst case.
In short, I don’t think it is anywhere close to feasible or useful to attempt to calculate “the moral hazard term of loss in net effectiveness of the [nuclear winter preparation] interventions”.
On the cost-benefit analysis and tractability of food resilience interventions:
As a general reaction, I’m quite wary of cost-effectiveness analyses for interventions into complex systems. That is because such analyses require that we identify all relevant consequences (and assign value and probability estimates to each), which I believe is extremely hard once you take indirect/second-order effects seriously. (In addition, I’m worried that cost-effectiveness analyses distract analysts and readers from the difficult task of mapping out consequences comprehensively, instead focusing their attention on the quantification of a narrow set of direct consequences.)
That said, I think there sometimes is informational value in cost-effectiveness analyses in such situations, if their results are very stark and robust to changes in the numbers used. I think the article you link is an example of such a case, and accept this as an argument in favor of food resilience interventions.
I also accept your case for the tractability of food resilience interventions (in the US) as sound.
As far as the core argument in my post is concerned, my concern is that the majority of post-nuclear war conditions gets ignored in your response. I.e., if we have sound reasons to think that we can cost-effectively/tractably prepare for post-nuclear war food shortages but don’t have good reasons to think that we know how to cost-effectively/tractably prepare for most of the other plausible consequences of nuclear deployment (many of which we might have thus far failed to identify in the first place), then I would still argue that the tractability of preparing for a post-nuclear war world is concerningly low. I would thus continue to maintain that preventing nuclear deployment should be the primary priority (in other words: your arguments in favor of preparation interventions don’t address the challenge of preparing for the full range of possible consequences, which is why I still think avoiding the consequences ought to be the first priority).
“I agree that the cause area as a whole is neglected and share the concerns around reduced funding. But within the broader cause area of ‘nuclear conflict’ the tail-risks and the preparedness/response are even more neglected. Barely anyone is working on this and I think this is one strength of the EA community to look into highly neglected areas and add more value per person working on the problem. I don’t have numbers but I would expect there to be at least 100 times more people working on preventing nuclear war and informing policy makers about the potential negative consequences because as you rightly stated that one does not need to be utilitarian, consequentialist, or longtermist to not want nukes to be used under any circumstances.”
I think nuclear tail risks may be fairly neglected because their higher severity may be more than outweighed by their lower likelihood. To illustrate, in the context of conventional wars:
Deaths follow a power law whose tail index is “1.35 to 1.74, with a mean of 1.60”. So the probability density function (PDF) of the deaths is proportional to “deaths”^-2.6 (= “deaths”^-(“tail index” + 1)), which means a conventional war exactly 10 times as deadly is 0.251 % (= 10^-2.6) as likely[1].
As a result, the expected value density of the deaths (“PDF of the deaths”*”deaths”) is proportional to “deaths”^-1.6 (= “deaths”^-2.6*“deaths”).
I think spending by war severity should a priori be proportional to the expected value density of the deaths, i.e. to “deaths”^-1.6. If so, spending to save lives in wars exactly 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as high.
Nuclear wars arguably scale much faster than conventional ones (i.e. have a lower tail index), so I guess spending on nuclear wars involving 1 k nuclear detonations should be higher than 0.00158 % of the spending on ones involving a single detonation. However, it is not obvious to me whether it should be higher than e.g. 1 % (respecting the multiplier of 100 you mentioned). I estimated that the expected value densities of the 90th, 99th and 99.9th percentile famine deaths due to the climatic effects of a large nuclear war are 17.0 %, 2.19 % and 0.309 % that of the median deaths, which suggests spending on the 90th, 99th and 99.9th percentile large nuclear war should be 17.0 %, 2.19 % and 0.309 % that on the median large nuclear war.
Note the tail distribution is proportional to “deaths”^-1.6 (= “deaths”^-”tail index”), so a conventional war at least 10 times as deadly is 2.51 % (= 10^-1.6) as likely.
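For readers who want to check the figures, the power-law arithmetic above can be reproduced in a few lines. This is only a sketch: it takes the quoted mean tail index of 1.6 as given and recomputes the three percentages stated above.

```python
# Reproducing the power-law arithmetic above, with tail index alpha = 1.6
# (the mean of the 1.35-1.74 range quoted for conventional war deaths).
alpha = 1.6

# PDF of deaths ~ deaths^-(alpha + 1) = deaths^-2.6, so a war
# EXACTLY 10 times as deadly is 10^-2.6 ~ 0.251% as likely.
relative_likelihood_10x = 10.0 ** -(alpha + 1)

# Expected value density (PDF * deaths) ~ deaths^-alpha, so a priori
# spending on wars exactly 1,000 times as deadly would be
# (10^3)^-1.6 ~ 0.00158% as high.
relative_spending_1000x = (10.0 ** 3) ** -alpha

# Tail (survival) function ~ deaths^-alpha, so a war
# AT LEAST 10 times as deadly is 10^-1.6 ~ 2.51% as likely.
relative_tail_10x = 10.0 ** -alpha

print(f"exactly 10x as deadly:   {relative_likelihood_10x:.3%} as likely")
print(f"spending on 1,000x wars: {relative_spending_1000x:.5%} as high")
print(f"at least 10x as deadly:  {relative_tail_10x:.2%} as likely")
```

Note how much the exact-severity and at-least-severity figures differ (0.251% vs. 2.51%): the PDF and the tail distribution drop off at different rates, which is why the footnote distinguishes the two.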
Thank you for this work. I appreciate the high-level transparency throughout (e.g what is an opinion, how many sources have been read/incorporated, reasons for assumptions etc.)!
I have few key (dis)agreements and considerations. Disclaimer: I work for ALLFED (Alliance to Feed the Earth in Disasters) where we look at preparedness and response to nuclear winter among other things.
1) Opportunity Costs
I think it is not necessary for work on either preventing the worst nuclear conflicts or work on preparedness/response to be mutually exclusive with preventing nuclear conflict in general.
My intuition is that if you are working on preventing the worst nuclear conflicts then you (also) have to work on escalation steps. And understanding of how wars escalate and what we can do about it seems to be very useful generally no matter if we go from a war using 0 to ~10 nukes or from a war escalating from 10 to 100 nukes. At each step we would want to intervene. I do not know how a specialization would look like that is only relevant at the 100 to 1000 nukes step. I know me not being able to imagine such a specialization is only a weak argument but I am also not aware of anyone only looking at such a niche problem.
Additionally, preparedness/response work has multiple uses. Nuclear winter is only one source for an abrupt sunlight reduction scenario (ASRS), the others being super volcanic eruptions and asteroid impacts (though one can argue that nuclear winter is the most likely out of the 3). Having ‘slack’ critical infrastructure (either through storage or the capacity to quickly scale-up post-catastrophe) is also helpful in many scenarios. Examples: resilient communication tech is helpful if communication is being disrupted either by war or by say a solar storm (same goes for electricity and water supply). The ability to scale-up food production is useful if we have Multiple Bread Basket Failures due to coinciding extreme weather events or if we have an agricultural shortfall due to a nuclear winter. In both cases we would want to know how feasible it is to quickly ramp up greenhouses (one example).
Lastly, I expect these different interventions to require different skillsets (e.g civil engineers vs. policy scholars). (Not always, surely there will be overlap.) So the opportunity costs would be more on the funding side of the cause area, less so on the talent side.
2) Neglectedness
I agree that the cause area as a whole is neglected and share the concerns around reduced funding. But within the broader cause area of ‘nuclear conflict’ the tail-risks and the preparedness/response are even more neglected. Barely anyone is working on this and I think this is one strength of the EA community to look into highly neglected areas and add more value per person working on the problem. I don’t have numbers but I would expect there to be at least 100 times more people working on preventing nuclear war and informing policy makers about the potential negative consequences because as you rightly stated that one does not need to be utilitarian, consequentialist, or longtermist to not want nukes to be used under any circumstances.
and 3) High uncertainty around interventions
Exactly because of the uncertainty you mentioned I think we should not rely on a narrow set of interventions and go broader. You can discount the likelihood, run your own numbers and advocate your ideal funding distribution between interventions but I think that we can not rule out nuclear winter happening and therefore some funding should go to response.
For context: Some put the probability of nuclear war causing extinction at (only) 0.3% this century. Or here is ALLFED’s cost-effectiveness model looking at ‘agriculture shortfalls’ and their longterm impact, making the case for the marginal dollar in this area being extremely valuable.
In general I strongly agree with your argument that more efforts should go into prevention of any kind of nuclear war. I do not agree that this should happen at the expense of other interventions (such as working on response/preparedness).
4) Premise 1 --> Civilizational Collapse (through escalation after a single nuke)
You write that a nuclear attack could cause a global conflict (agree) which could then escalate to civilizational collapse (and therefore pose an xrisk) even if no further nukes are being used (strong disagree).
I do not see a plausible pathway to that. Even in an all out physical/typical explosives kind of war (I would expect our supply chains to fail and us running out of things to throw at each other way before civilization itself collapses). Am I missing something here?
Tongue in cheek related movie quote:
A: “Eye for an eye and the world goes blind.”
B: “No it doesn’t. There’ll be one guy left with one eye.”
But I do not think it changes much of what you write here even if you cut-out this one consideration. It is only a minor point. Not a crux. Agree on the aspect that a single nuke can cause significant escalation though.
5) Desensitizing / Language around size of events
I am also saddened to hear that someone was dismissive about an India/Pakistan nuclear exchange. I agree that that is worrisome.
I think that Nuclear Autumns (up to ~25 Tg (million tons) of soot entering the atmosphere) still pose a significant risk and could cause ~1 billion deaths through famines + cascading effects, that is if we do not prepare. So dismissing such a scenario seems pretty bad to me
Thanks for taking the time to read through the whole thing and leaving this well-considered comment! :)
In response to your points:
1) Opportunity costs
“I do not know how a specialization would look like that is only relevant at the 100 to 1000 nukes step. I know me not being able to imagine such a specialization is only a weak argument but I am also not aware of anyone only looking at such a niche problem.”—If this is true and if people who express concern mainly/only for the worst kinds of nuclear war are actually keen on interventions that are equally relevant for preventing any deployment of nuclear weapons, then I agree that the opportunity cost argument is largely moot. I hope your impressions of the (EA) field in this regard are more accurate than mine!
My main concern with preparedness interventions is that they may give us a false sense of ameliorating the danger of nuclear escalation (i.e., “we’ve done all these things to prepare for nuclear winter, so now the prospect of nuclear escalation is not quite as scary and unthinkable anymore”). So I guess I’m less concerned about these interventions the more they are framed as attempts to increase general global resilience, because that seems to de-emphasize the idea that they are effective means to substantially reduce the harms incurred by nuclear escalation. Overall, this is a point that I keep debating in my own mind and where I haven’t come to a very strong conclusion yet: There is a tension in my mind between the value of system slack (which is large, imo) and the possible moral hazard of preparing for an event that we should simply never allow to occur in the first place (i.e.: preparation might reduce the urgency and fervor with which we try to prevent the bad outcome in the first place).
I mostly disagree on the point about skillsets: I think both intervention targets (focus on tail risks vs. preventing any nuclear deployment) are big enough to require input from people with very diverse skillsets, so I think it will be relatively rare for a person to be able to only meaningfully contribute to either of the two. In particular, I believe that both problems are in need of policy scholars, activists, and policymakers and a focus on the preparation side might lead people in those fields to focus less on the preventing any kind of nuclear deployment goal.
2) Neglectedness:
I think you’re empirically right about the relative neglectedness of tail-ends & preparedness within the nuclear risk field.
(I’d argue that this becomes less pronounced as you look at neglectedness not just as “number of people-hours” or “amount of money” dedicated to a problem, but also factor in how capable those people are and how effectively the money is spent (I believe that epistemically rigorous work on nuclear issues is severely neglected and I have the hope that EA engagement in the field could help ameliorate that).)
That said, I must admit that the matter of neglectedness is a very small factor in convincing me of my stance on the prioritization question here. As explained in the post, I think that a focus on the tail risks and/or on preparedness is plausibly net negative because of the intractability of working on them and because of the plausible adverse consequences. In that sense, I am glad that those two are neglected and my post is a plea for keeping things that way.
3) High uncertainty around interventions: Similar thoughts to those expressed above. I have an unresolved tension in my mind when it comes to the value of preparedness interventions. I’m sympathetic to the case you’re making (heck, I even advocated (as a co-author) for general resilience interventions in a different post a few months ago); but, at the moment, I’m not exactly sure I know how to square that sympathy with the concerns I simultaneously have about preparedness rhetoric and action (at least in the nuclear risk field, where the danger of such rhetoric being misused seems particularly acute, given vested interests in maintaining the system and status-quo).
4) Civilizational Collapse:
My claim about civilizational collapse in the absence of the deployment of multiple nukes is based on the belief that civilizations can collapse for reasons other than weapons-induced physical destruction.
Some half-baked, very fuzzy ideas of how this could happen are: destruction of communities’ social fabric and breakdown of governance regimes; economic damage, breakdown of trade and financial systems, and attendant social and political consequences; cyber warfare, and attendant social, economic, and political consequences.
I have not spent much time trying to map out the pathways to civilizational collapse, and it could be that such a scenario is much less conceivable than I currently imagine. I think I’m currently working on the heuristic that societies and societal functioning are hyper-complex and that I have little ability to actually imagine how big disruptions (like a nuclear conflict) would affect them, which is why I shouldn’t rule out the chance that such disruptions cascade into collapse (through chains of events that I cannot anticipate now).
(While writing this response, I just found myself staring at the screen for a solid 5 minutes and wondering whether using this heuristic is bad reasoning or a sound approach on my part; I lean towards the latter, but might come back to edit this comment if, upon reflection, I decide it’s actually more the former)
On moral hazard, I did some analysis in a journal article of ours:
“Moral hazard would be if awareness of a food backup plan makes nuclear war more likely or more intense. It is unlikely that, in the heat of the moment, the decision to go to nuclear war (whether accidental, inadvertent, or intentional) would give much consideration to the nontarget countries. However, awareness of a backup plan could result in increased arsenals relative to business as usual, as awareness of the threat of nuclear winter likely contributed to the reduction in arsenals [74]. Mikhail Gorbachev stated that a reason for reducing the nuclear arsenal of the USSR was the studies predicting nuclear winter and therefore destruction outside of the target countries [75]. One can look at how much nuclear arsenals changed while the Cold War was still in effect (after the Cold War, reduced tensions were probably the main reason for reduction in stockpiles). This was ~20% [76]. The perceived consequences of nuclear war changed from hundreds of millions of dead to billions of dead, so roughly an order of magnitude. The reduction in damage from reducing the number of warheads by 20% is significantly lower than 20% because of marginal nuclear weapons targeting lower population and fuel loading density areas. Therefore, the reduction in impact might have been around 10%. Therefore, with an increase in damage with the perception of nuclear winter of approximately 1000% and a reduction in the damage potential due to a smaller arsenal of 10%, the elasticity would be roughly 0.01. Therefore, the moral hazard term of loss in net effectiveness of the interventions would be 1%.”
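For what it’s worth, the elasticity arithmetic in the quoted passage can be reproduced in a few lines (all figures are the article’s own rough estimates, not independent data):

```python
# Rough reproduction of the moral hazard elasticity estimate from the
# quoted passage. All numbers are the article's own estimates.
damage_increase = 10.0   # nuclear winter raised perceived damage roughly 10x (~1000 %)
impact_reduction = 0.10  # the ~20 % arsenal cut reduced damage potential by only ~10 %

# Elasticity: proportional reduction in damage potential per proportional
# increase in perceived damage
elasticity = impact_reduction / damage_increase
print(elasticity)  # 0.01, i.e. a ~1 % moral hazard loss in net effectiveness
```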
Also, as Aron pointed out, resilience protects against other catastrophes, such as supervolcanic eruptions and asteroid/comet impacts. Similarly, there is some evidence that people drive less safely if they are wearing a seatbelt, but overall we are better off with a seatbelt. So I don’t think moral hazard is a significant argument against resilience.
I think direct cost-effectiveness analyses like this journal article are more robust, especially for interventions, than Importance, Neglectedness and Tractability. But it is interesting to think about tractability separately. It is true that there is a lot of uncertainty about what the environment would be like post-catastrophe. However, we have calculated that resilient foods would greatly improve the situation both with and without global food trade, so I think they are a robust intervention. Also, if you look at the state of resilience to nuclear winter pre-2014, it was basically to store up more food, which would cost tens of trillions of dollars, would not protect you right away, and, if done fast, would raise prices and exacerbate current malnutrition. In 2014, we estimated that resilient foods could technically be scaled up to feed everyone. And in the last eight years, we have done research estimating that it could also be done affordably for most people. So I think there has been a lot of progress with just a few million dollars spent, indicating tractability.
I think that Aron was talking about prevention versus resilience. Resilience requires more engineering.
Thanks for your comment and for adding to Aron’s response to my post!
Before reacting point-by-point, one more overarching warning/clarification/observation: My views on the disvalue of numerical reasoning and the use of BOTECs in deeply uncertain situations are quite unusual within the EA community (though not unheard of, see for instance this EA Forum post on “Potential downsides of using explicit probabilities” and this GiveWell blog post on “Why we can’t take expected value estimates literally (even when they’re unbiased)” which acknowledge some of the concerns that motivate my skeptical stance). I can imagine that this is a heavy crux between us and that it makes advances/convergence on more concrete questions (esp. through a forum comments discussion) rather difficult (which is not at all meant to discourage engagement or to suggest I find your comments unhelpful (quite the contrary); just noting this in an attempt to avoid us arguing past each other).
On moral hazards:
In general, my deep-seated worries about moral hazard and other normative adverse effects feel somewhat inaccessible to numerical/empirical reasoning (at least until we come up with much better empirical research strategies for studying complex situations). To be completely honest, I can’t really imagine arguments or evidence that would be able to substantially dissolve the worries I have. That is not because I’m consciously dogmatic and unwilling to budge from my conclusions, but because I don’t think we have the means to know empirically to what extent these adverse effects actually occur. It thus seems that we are forced to rely on fundamental worldview-level beliefs (or intuitions) when deciding on our credences for their importance. This is a very frustrating situation, but I just don’t find attempts to escape it (through relatively arbitrary BOTECs or plausibility arguments) in any sense convincing; they usually seem to me to be elaborate cognitive schemes for defusing a level of deep empirical uncertainty that simply cannot be defused (given the structure of the world and the research methods we know of).
To illustrate my thinking, here’s my response to your example:
I don’t think that we really know anything about the moral hazard effects that interventions to prepare for nuclear winter would have had on nuclear policy and outcomes in the Cold War era.
I don’t think we have a sufficiently strong reason to attribute the 20% reduction in nuclear weapons to the difference in perceived costs of nuclear escalation after research on nuclear winter surfaced.
I don’t think we have any defensible basis for making a guess about how this reduction in weapons stocks would have been different had there been efforts to prepare for nuclear winter in the 1980s.
I don’t think it is legitimate to simply claim that fear of nuclear-winter-type events has no plausible effect on decision-making in crisis situations (either consciously or sub-consciously, through normative effects such as those of the nuclear taboo). At the same time, I don’t think we have a defensible basis for guessing the expected strength of this effect of fear (or “taking expected costs seriously”) on decision-making, nor for expected changes in the level of fear given interventions to prepare for the worst case.
In short, I don’t think it is anywhere close to feasible or useful to attempt to calculate “the moral hazard term of loss in net effectiveness of the [nuclear winter preparation] interventions”.
On the cost-benefit analysis and tractability of food resilience interventions:
As a general reaction, I’m quite wary of cost-effectiveness analyses for interventions into complex systems. That is because such analyses require that we identify all relevant consequences (and assign value and probability estimates to each), which I believe is extremely hard once you take indirect/second-order effects seriously. (In addition, I’m worried that cost-effectiveness analyses distract analysts and readers from the difficult task of mapping out consequences comprehensively, instead focusing their attention on the quantification of a narrow set of direct consequences.)
That said, I think there sometimes is informational value in cost-effectiveness analyses in such situations, if their results are very stark and robust to changes in the numbers used. I think the article you link is an example of such a case, and accept this as an argument in favor of food resilience interventions.
I also accept your case for the tractability of food resilience interventions (in the US) as sound.
As far as the core argument in my post is concerned, my concern is that your response ignores the majority of post-nuclear-war conditions. I.e., if we have sound reasons to think that we can cost-effectively/tractably prepare for post-nuclear-war food shortages, but no good reasons to think that we know how to cost-effectively/tractably prepare for most of the other plausible consequences of nuclear deployment (many of which we may have thus far failed to identify in the first place), then I would still argue that the tractability of preparing for a post-nuclear-war world is concerningly low. I would thus continue to maintain that preventing nuclear deployment should be the primary priority (in other words: your arguments in favor of preparation interventions don’t address the challenge of preparing for the full range of possible consequences, which is why I still think avoiding the consequences ought to be the first priority).
Thanks for the detailed comment, Aron!
I think nuclear tail risks may be fairly neglected because their higher severity may be more than outweighed by their lower likelihood. To illustrate, in the context of conventional wars:
Deaths follow a power law whose tail index is “1.35 to 1.74, with a mean of 1.60”. So the probability density function (PDF) of the deaths is proportional to “deaths”^-2.6 (= “deaths”^-(“tail index” + 1)), which means a conventional war exactly 10 times as deadly is 0.251 % (= 10^-2.6) as likely[1].
As a result, the expected value density of the deaths (“PDF of the deaths”*”deaths”) is proportional to “deaths”^-1.6 (= “deaths”^-2.6*“deaths”).
I think spending by war severity should a priori be proportional to the expected value density of the deaths, i.e. to “deaths”^-1.6. If so, spending to save lives in wars exactly 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as high.
Nuclear wars arguably scale much faster than conventional ones (i.e. have a lower tail index), so I guess spending on nuclear wars involving 1 k nuclear detonations should be higher than 0.00158 % of the spending on ones involving a single detonation. However, it is not obvious to me whether it should be higher than e.g. 1 % (respecting the multiplier you mentioned of 100). I estimated that the expected value densities of the 90th, 99th and 99.9th percentile famine deaths due to the climatic effects of a large nuclear war are 17.0 %, 2.19 %, and 0.309 % of that of the median deaths, which suggests spending on the 90th, 99th and 99.9th percentile large nuclear war should be 17.0 %, 2.19 %, and 0.309 % of that on the median large nuclear war.
[1] Note the tail distribution is proportional to “deaths”^-1.6 (= “deaths”^-”tail index”), so a conventional war at least 10 times as deadly is 2.51 % (= 10^-1.6) as likely.
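The percentages above can be checked with a quick script (assuming the mean tail index of 1.6):

```python
# Sanity check of the power-law figures above, using the mean tail index of 1.6.
alpha = 1.6

# PDF ratio: a conventional war exactly 10x as deadly, PDF ∝ deaths^-(alpha + 1)
pdf_ratio = 10.0 ** -(alpha + 1)     # ~0.251 %
# Tail (survival) ratio: a war at least 10x as deadly, ∝ deaths^-alpha
tail_ratio = 10.0 ** -alpha          # ~2.51 %
# Expected value density ratio for wars 1,000x as deadly, ∝ deaths^-alpha
spending_ratio = 1000.0 ** -alpha    # ~0.00158 %

print(f"{pdf_ratio:.3%}  {tail_ratio:.2%}  {spending_ratio:.5%}")
```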