Thanks, Vasco. I totally forgot to reply to your comment on my previous post—my apologies!
I think you raise a good general point that we’d expect societal spending after a catastrophe to be high, especially given the funder behavior we see for newsworthy humanitarian disasters.
There are a few related considerations here, all of them touching on the issue you also raise: “Coming up with good interventions in little time may be harder.”
Fast-Moving Catastrophes—I would expect many nuclear wars to escalate very quickly, far outpacing the timelines that funders and policymakers operate on. Escalation management tools (e.g. better hotlines, relevant changes in nuclear posture and targeting policy) should be implemented before such a catastrophe. That being said, I think the problem of protracted great power wars (including slow-moving nuclear wars) is underappreciated, so there are a few other considerations in the cases where the catastrophe moves more slowly…
Post-Catastrophe Funder Resources—Aside from the Patient Philanthropy Fund, I expect most funders will not have thought about the impact of global catastrophes on their portfolios. I’d expect even a regionally limited nuclear war to cause a severe decline in the portfolios of most funders, and possibly a total collapse of the financial infrastructure that funders rely on. So there might not be any liquid funds to move!
Post-Catastrophe Funder Additionality—The counterfactual value of farsighted private funders is higher pre-catastrophe; after a catastrophe, we’d expect governments and small-dollar or traditional donors to flood the philanthropic market with humanitarian aid. Pandemic-preparedness and -response spending pre-2020 was more attractive in retrospect than COVID-relief funding 2020-2022.
I think there’s a related point here about the emotional resonance of some classes of disaster-relief spending, which probably contributes to the character and allocation of post-catastrophe funding.
Pre-Catastrophe Funder Leverage—Relatedly, right now, a funder can beneficially shape the direction of the entire field for less than $10 million. After a catastrophe, that “smart money” might be an unnoticeable drop in the bucket, and would have far less leverage.
R&D Timelines—Some “right of boom” interventions have long R&D lead times, especially if they involve more speculative technologies. I’m thinking, e.g., about the development and implementation of technologies for resilient food systems.
Policy Implementation Timelines—Similarly, many interventions designed to keep limited war from turning into all-out thermonuclear exchange probably need to go through a fairly slow policy process.
Thanks again for the thoughtful comment! I hope this partly answers it.
Thanks for elaborating! I can see that right-of-boom spending before the nuclear war is most likely more effective than after it.
“I do not know what is the overall multiplier accounting for all of this, and I am not confident it favours right-of-boom spending at the current margin.”
To clarify, by “all of this” I meant not just considerations about whether it is better to spend before or after the nuclear war, but also the expected spending on left- and right-of-boom interventions. I am thinking along these lines:
Left-of-boom spending is currently at 30 M$/year.
The expected right-of-boom spending is 1 G$/year, for a probability of 0.1 %/year of a global nuclear war leading to 1 T$ being invested in right-of-boom spending.
Right-of-boom spending before nuclear war is 30 times as effective as after the nuclear war, for the reasons you mentioned.
So the expected right-of-boom spending (adjusted for effectiveness) is equivalent to roughly 33 M$/year (= 1000/30) of right-of-boom spending before the nuclear war.
Therefore it is not obvious to me that right-of-boom spending before the nuclear war is way more neglected than left-of-boom spending (I got roughly 30 M$/year for both above), even if right-of-boom spending before the nuclear war is most likely more effective than after it.
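As a minimal sketch, the whole comparison fits in a few lines of Python (all numbers are the illustrative assumptions above, not estimates):
```python
# Back-of-the-envelope comparison of current left-of-boom spending with
# effectiveness-adjusted expected right-of-boom spending.
# All numbers are the illustrative assumptions from the comment above.

left_of_boom = 30e6            # $/year, current left-of-boom spending
p_war = 0.001                  # probability of a global nuclear war per year
post_war_spending = 1e12       # $ invested right of boom if the war happens
discount = 30                  # pre-war spending assumed 30x as effective

expected_post_war = p_war * post_war_spending   # 1 G$/year in expectation
adjusted = expected_post_war / discount         # ~33 M$/year equivalent

print(f"Expected right-of-boom spending:   {expected_post_war / 1e9:.1f} G$/year")
print(f"Effectiveness-adjusted equivalent: {adjusted / 1e6:.0f} M$/year")
print(f"Current left-of-boom spending:     {left_of_boom / 1e6:.0f} M$/year")
```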
Basically, what I am saying is that, even if right-of-boom spending after the nuclear war is much less effective, it would be so large that the expected right-of-boom spending adjusted for effectiveness could still be comparable with current left-of-boom spending. Does this make sense?
Note I am not claiming that left-of-boom spending is more/less effective than right-of-boom spending before nuclear war. I am just suggesting that left- and right-of-boom spending may not have super different levels of neglectedness.
To me, a discount of 30x seems vastly too low.
It seems true that in a right-of-boom situation massive resources will be mobilized (if they are still available), but effects like the ones Christian mentions probably argue for an efficiency advantage of preemptive spending much larger than a factor of 30x.
I don’t have time to estimate this (but would be curious if you tried, Vasco!), but the factors underlying Christian’s arguments together seem like a discount probably in the 1000s, maybe even infinite for some interventions (where no amount of money can buy a given desired outcome in a right-of-boom situation). I am thinking of path dependency causing much larger investments over time than initially committed; the non-accelerability of physical constraints on the speed of production or technological change; and necessary conditions that exist now but may not exist right of boom (silly example: you can’t establish a red telephone between Washington and Moscow if the right-of-boom situation is a nuclear conflict between the two).
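For illustration, here is a minimal sketch of how the comparison shifts as the discount grows, reusing the illustrative numbers from Vasco’s comment (assumptions, not estimates):
```python
# Sensitivity of the neglectedness comparison to the discount factor.
# Reuses the illustrative numbers from Vasco's comment above
# (assumptions for illustration, not estimates).

left_of_boom = 30e6                    # $/year, current left-of-boom spending
expected_post_war = 0.001 * 1e12       # $/year, expected right-of-boom spending

for discount in (30, 100, 1_000, 10_000):
    adjusted = expected_post_war / discount
    print(f"discount {discount:>6}x: adjusted {adjusted / 1e6:>6.2f} M$/year "
          f"vs left-of-boom {left_of_boom / 1e6:.0f} M$/year")

# At 30x the two figures are comparable; with a discount in the 1000s, the
# adjusted figure falls to ~1 M$/year or below, making pre-war
# right-of-boom work look far more neglected than left-of-boom work.
```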
To add, I think that if the difference in efficiency were only 30x, then the societally optimal response to most catastrophic risks would be to essentially not prepare at all.
And, philanthropically, things like investing in protection against engineered or natural pandemics, AI risk, nuclear war (in general, independent of side of boom), etc. would all seem like fairly bad ideas as well (given that the 30x needs to be adjusted for the low probability of the events).
So, it seems to me that a 30x estimate is strongly at odds with the general belief underlying most longtermist effort that societally we are predictably underinvesting in low-probability catastrophic / existential risk reduction.
Thanks for the fair feedback, Johannes!
Just one note on:
“So, it seems to me that a 30x estimate is strongly at odds with the general belief underlying most longtermist effort that societally we are predictably underinvesting in low-probability catastrophic / existential risk reduction.”
I do not think there is a contradiction. The multiplier of 30 would only suggest that left-of-boom and right-of-boom interventions are similarly neglected, and therefore similarly effective, neglecting other considerations. However, it could still be the case that the marginal cost-effectiveness of left-of-boom and right-of-boom interventions is much higher than that of governments.
Thank you, Vasco! I am not sure, and I might very well be missing something here, this being the end of a long week.
In my head, right-of-boom thinking is just applying expected-value thinking within a catastrophic scenario, whereas the motivation for GCR work generally comes from applying it at the cause level.
So, to me there seems to be a parallel between the multiplier for preparatory work on GCR in general and the multiplier/differentiator within a catastrophic risk scenario.
That makes sense to me. The overall neglectedness of post-catastrophe interventions in area A depends on the neglectedness of area A, and on the neglectedness of post-catastrophe interventions within area A. The higher each of these two neglectednesses, the higher the cost-effectiveness of such interventions.
What I meant with my previous comment was that, even if right-of-boom interventions to decrease nuclear risk were as neglected as left-of-boom ones, it could still be the case that nuclear risk is super neglected in society.
Oh yeah, that is true, and both Christian and I think that even left-of-boom nuclear security philanthropy is super-neglected (as I like to say, it is more than 2 OOM lower than climate philanthropy, which seems crazy to me).
Hi Johannes,
“general belief underlying most longtermist effort that societally we are predictably underinvesting in low-probability catastrophic / existential risk reduction”
It is unclear to me whether this belief is correct. To illustrate:
If the goal is saving lives, spending should a priori be proportional to the product of deaths and their probability density function (PDF). If deaths follow a Pareto distribution, the PDF is proportional to “deaths”^-(alpha + 1), so the product will be proportional to “deaths”^-alpha, where alpha is the tail index.
“deaths”^-alpha decreases as deaths increase, so there should be less spending on more severe catastrophes. Consequently, I do not think one can argue for greater spending on more severe catastrophes just based on it currently being much smaller than that on milder ones.
For example, for conflict deaths, alpha is “1.35 to 1.74, with a mean of 1.60”, which means spending should a priori be proportional to “deaths”^-1.6. This suggests spending to decrease deaths in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large.
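A minimal sketch of this scaling, assuming the Pareto model and the cited mean tail index:
```python
# A priori spending scaling under a Pareto (power-law) severity distribution:
# the PDF of deaths scales as deaths^-(alpha + 1), so deaths * PDF scales as
# deaths^-alpha, and spending on events k times as deadly scales by k^-alpha.

alpha = 1.6      # mean tail index for conflict deaths cited above
k = 1e3          # wars 1,000 times as deadly

ratio = k ** (-alpha)
print(f"Relative spending: {ratio:.3e} ({ratio:.5%})")
# -> 1.585e-05, i.e. about 0.00158 %, matching the figure above.
```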
Johannes, as he often does, said it better than I could!