On moral hazard, I did some analysis in a journal article of ours:
"Moral hazard would be if awareness of a food backup plan makes nuclear war more likely or more intense. It is unlikely that, in the heat of the moment, the decision to go to nuclear war (whether accidental, inadvertent, or intentional) would give much consideration to the nontarget countries. However, awareness of a backup plan could result in increased arsenals relative to business as usual, as awareness of the threat of nuclear winter likely contributed to the reduction in arsenals [74]. Mikhail Gorbachev stated that a reason for reducing the nuclear arsenal of the USSR was the studies predicting nuclear winter and therefore destruction outside of the target countries [75]. One can look at how much nuclear arsenals changed while the Cold War was still in effect (after the Cold War, reduced tensions were probably the main reason for reduction in stockpiles). This was ~20% [76]. The perceived consequences of nuclear war changed from hundreds of millions of dead to billions of dead, so roughly an order of magnitude. The reduction in damage from reducing the number of warheads by 20% is significantly lower than 20% because of marginal nuclear weapons targeting lower population and fuel loading density areas. Therefore, the reduction in impact might have been around 10%. Therefore, with an increase in damage with the perception of nuclear winter of approximately 1000% and a reduction in the damage potential due to a smaller arsenal of 10%, the elasticity would be roughly 0.01. Therefore, the moral hazard term of loss in net effectiveness of the interventions would be 1%."
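To make the arithmetic in that passage explicit, here is a minimal sketch. The elasticity follows directly from the quoted figures; the ~100% perception swing attributed to awareness of a backup plan is one reading of how the 1% figure is obtained, not a number stated in the article:

```python
# Back-of-the-envelope reproduction of the elasticity estimate quoted above.
# The first three numbers are the ones from the passage; treat them as rough.

arsenal_reduction = 0.20          # ~20% cut in Cold War arsenals [76]
damage_reduction = 0.10           # ~10% less damage (marginal warheads hit
                                  # lower population/fuel-density targets)
perceived_damage_increase = 10.0  # nuclear winter raised perceived deaths
                                  # ~10x (hundreds of millions -> billions)

# Elasticity: fractional change in damage potential per fractional change
# in perceived consequences of nuclear war.
elasticity = damage_reduction / perceived_damage_increase  # = 0.01

# If a food backup plan were believed to undo roughly the nuclear-winter
# increase in perceived deaths (a ~100% swing -- an assumption of this
# sketch), the arsenal-driven loss of net effectiveness would be:
perception_swing = 1.0  # ~100% change in perceived damage (assumed)
moral_hazard_loss = elasticity * perception_swing
print(f"elasticity = {elasticity:.2f}, moral hazard term = {moral_hazard_loss:.0%}")
```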
Also, as Aron pointed out, resilience protects against other catastrophes, such as supervolcanic eruptions and asteroid/comet impacts. And while there is some evidence that people drive less safely when wearing a seatbelt, overall we are better off with seatbelts. So I don't think moral hazard is a significant argument against resilience.
I think direct cost-effectiveness analyses like this journal article are more robust, especially for interventions, than Importance, Neglectedness, and Tractability. But it is interesting to think about tractability separately. It is true that there is a lot of uncertainty about what the environment would be like post-catastrophe. However, we have calculated that resilient foods would greatly improve the situation both with and without global food trade, so I think they are a robust intervention. Also, if you look at the state of resilience to nuclear winter pre-2014, it was basically to store up more food, which would cost tens of trillions of dollars, would not protect you right away, and, if done quickly, would raise prices and exacerbate current malnutrition. In 2014, we estimated that resilient foods could technically be scaled up to feed everyone. And in the last eight years, we have done research estimating that it could also be done affordably for most people. So I think there has been a lot of progress with just a few million dollars spent, indicating tractability.
I mostly disagree on the point about skillsets: I think both intervention targets (focusing on tail risks vs. preventing any nuclear deployment) are big enough to require input from people with very diverse skillsets, so I think it will be relatively rare for a person to be able to contribute meaningfully to only one of the two. In particular, I believe that both problems need policy scholars, activists, and policymakers, and a focus on the preparation side might lead people in those fields to focus less on the goal of preventing any kind of nuclear deployment.
I think that Aron was talking about prevention versus resilience. Resilience requires more engineering.
Thanks for your comment and for adding to Aron's response to my post!
Before reacting point-by-point, one more overarching warning/clarification/observation: My views on the disvalue of numerical reasoning and the use of BOTECs in deeply uncertain situations are quite unusual within the EA community (though not unheard of; see for instance this EA Forum post on "Potential downsides of using explicit probabilities" and this GiveWell blog post on "Why we can't take expected value estimates literally (even when they're unbiased)", which acknowledge some of the concerns that motivate my skeptical stance). I can imagine that this is a heavy crux between us and that it makes advances/convergence on more concrete questions (especially through a forum comments discussion) rather difficult. This is not at all meant to discourage engagement or to suggest I find your comments unhelpful (quite the contrary); I just note it in an attempt to avoid us arguing past each other.
On moral hazards:
In general, my deep-seated worries about moral hazard and other normative adverse effects feel somewhat inaccessible to numerical/empirical reasoning (at least until we come up with much better empirical research strategies for studying complex situations). To be completely honest, I can't really imagine arguments or evidence that would be able to substantially dissolve the worries I have. That is not because I'm consciously dogmatic and unwilling to budge from my conclusions, but rather because I don't think we have the means to know empirically to what extent these adverse effects actually exist/occur. It thus seems that we are forced to rely on fundamental worldview-level beliefs (or intuitions) when deciding on our credences for their importance. This is a very frustrating situation, but I just don't find attempts to escape it (through relatively arbitrary BOTECs or plausibility arguments) in any sense convincing; they usually seem to me to be elaborate cognitive schemes for defusing a level of deep empirical uncertainty that simply cannot be defused (given the structure of the world and the research methods we know of).
To illustrate my thinking, here's my response to your example:
I don't think that we really know anything about the moral hazard effects that interventions to prepare for nuclear winter would have had on nuclear policy and outcomes in the Cold War era.
I don't think we have a sufficiently strong reason to attribute the 20% reduction in nuclear weapons to the change in perceived costs of nuclear escalation after research on nuclear winter surfaced.
I don't think we have any defensible basis for making a guess about how this reduction in weapons stocks would have been different had there been efforts to prepare for nuclear winter in the 1980s.
I don't think it is legitimate to simply claim that fear of nuclear-winter-type events has no plausible effect on decision-making in crisis situations (either consciously or subconsciously, through normative effects such as those of the nuclear taboo). At the same time, I don't think we have a defensible basis for guessing the expected strength of this effect of fear (or "taking expected costs seriously") on decision-making, nor for expected changes in the level of fear given interventions to prepare for the worst case.
In short, I don't think it is anywhere close to feasible or useful to attempt to calculate "the moral hazard term of loss in net effectiveness of the [nuclear winter preparation] interventions".
On the cost-benefit analysis and tractability of food resilience interventions:
As a general reaction, I'm quite wary of cost-effectiveness analyses for interventions into complex systems. That is because such analyses require that we identify all relevant consequences (and assign value and probability estimates to each), which I believe is extremely hard once you take indirect/second-order effects seriously. (In addition, I'm worried that cost-effectiveness analyses distract analysts and readers from the difficult task of mapping out consequences comprehensively, instead focusing their attention on the quantification of a narrow set of direct consequences.)
That said, I think there is sometimes informational value in cost-effectiveness analyses in such situations, if their results are very stark and robust to changes in the numbers used. I think the article you link is an example of such a case, and I accept this as an argument in favor of food resilience interventions.
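For what it's worth, the kind of robustness I have in mind can be checked mechanically: re-run the BOTEC across wide ranges for every contested input and see whether the qualitative conclusion ever flips. A minimal sketch of that procedure follows; the toy cost-effectiveness formula, the $10,000-per-life bar, and all the ranges are illustrative assumptions of mine, not numbers from the article:

```python
import itertools

# Toy robustness check for a cost-effectiveness BOTEC: sweep each contested
# input over a wide (pessimistic .. optimistic) range and see whether the
# qualitative conclusion survives at every corner. All numbers illustrative.

cost_per_life_saved_threshold = 10_000  # $/life, hypothetical bar for "worth it"

ranges = {
    "p_catastrophe_per_year": (1e-4, 1e-2),   # annual probability
    "lives_saved_if_it_happens": (1e8, 3e9),  # people
    "program_cost_usd": (1e8, 1e10),          # total cost
}

def cost_per_life(p, lives, cost):
    # Crude one-period model: cost divided by expected lives saved.
    return cost / (p * lives)

results = [
    cost_per_life(p, lives, cost)
    for p, lives, cost in itertools.product(*ranges.values())
]

passing = sum(r < cost_per_life_saved_threshold for r in results)
print(f"{passing}/{len(results)} corner cases clear the bar; "
      f"range: ${min(results):,.0f} to ${max(results):,.0f} per life")
```

A result only counts as "stark and robust" in my sense if the conclusion holds across essentially all such corners, not just at the central estimates.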
I also accept your case for the tractability of food resilience interventions (in the US) as sound.
As far as the core argument in my post is concerned, my concern is that your response ignores the majority of post-nuclear war conditions. That is, if we have sound reasons to think that we can cost-effectively/tractably prepare for post-nuclear war food shortages but don't have good reasons to think that we know how to cost-effectively/tractably prepare for most of the other plausible consequences of nuclear deployment (many of which we might have thus far failed to identify in the first place), then I would still argue that the tractability of preparing for a post-nuclear war world is concerningly low. I would thus continue to maintain that preventing nuclear deployment should be the primary priority. In other words: your arguments in favor of preparation interventions don't address the challenge of preparing for the full range of possible consequences, which is why I still think avoiding the consequences ought to be the first priority.