To add, I think if we thought the difference in efficiency were only 30x, then societally the optimal response to most catastrophic risks would be to essentially not prepare at all.
And, philanthropically, things like investing in protection against engineered or natural pandemics, AI risk, nuclear war (in general, independent of the side of boom), etc. would all seem like fairly bad ideas as well (given that the 30x needs to be adjusted for the low probability of these events).
So, it seems to me that a 30x estimate is strongly at odds with the general belief underlying most longtermist effort that societally we are predictably underinvesting in low-probability catastrophic/existential risk reduction.
Thanks for the fair feedback, Johannes! Just one note on:

“So, it seems to me that a 30x estimate is strongly at odds with the general belief underlying most longtermist effort that societally we are predictably underinvesting in low-probability catastrophic/existential risk reduction.”
I do not think there is a contradiction. The multiplier of 30 would only suggest that left-of-boom and right-of-boom interventions are similarly neglected, and therefore similarly effective, neglecting other considerations. However, it could still be the case that the marginal cost-effectiveness of left-of-boom and right-of-boom interventions is much higher than that of governments.
Thank you, Vasco! I am not sure, and I might very well be missing something here, this being the end of a long week.
In my head, right-of-boom thinking is just applying expected-value thinking within a catastrophic scenario, whereas the motivation for GCR work generally comes from applying it at the cause level.
So, to me, there seems to be a parallel between the multiplier for preparatory work on GCR in general and the multiplier/differentiator within a catastrophic risk scenario.
That makes sense to me. The overall neglectedness of post-catastrophe interventions in area A depends on the neglectedness of area A, and the neglectedness of post-catastrophe interventions within area A. The higher each of these 2 neglectednesses, the higher the cost-effectiveness of such interventions.
What I meant with my previous comment was that, even if right-of-boom interventions to decrease nuclear risk were as neglected as left-of-boom ones, it could still be the case that nuclear risk is super neglected in society.
Oh yeah, that is true, and both Christian and I think that even left-of-boom nuclear security philanthropy is super-neglected (as I like to say, it is more than 2 OOM lower than climate philanthropy, which seems crazy to me).
Hi Johannes,

“general belief underlying most longtermist effort that societally we are predictably underinvesting in low-probability catastrophic/existential risk reduction”
It is unclear to me whether this belief is correct. To illustrate:
If the goal is saving lives, spending should a priori be proportional to the product of deaths and their probability density function (PDF). If deaths follow a Pareto distribution with tail index alpha, the PDF is proportional to “deaths”^-(alpha + 1), so this product is proportional to “deaths”^-alpha.
“deaths”^-alpha decreases as deaths increase, so there should be less spending on more severe catastrophes. Consequently, I do not think one can argue for greater spending on more severe catastrophes just because current spending on them is much smaller than that on milder ones.
For example, for conflict deaths, alpha is “1.35 to 1.74, with a mean of 1.60”, which means spending should a priori be proportional to “deaths”^-1.6. This suggests spending to decrease deaths in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large.
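To make the arithmetic above concrete, here is a minimal Python sketch of the calculation, assuming the mean tail index of 1.6 for conflict deaths; the function name relative_spending is my own illustrative choice, not something from the comment or an existing library.

```python
# Minimal sketch of the a priori spending argument above (my own illustration).
# Assumption: conflict deaths follow a Pareto distribution with tail index alpha,
# so optimal spending is proportional to deaths * PDF, i.e. to "deaths"^-alpha.

ALPHA = 1.6  # mean tail index for conflict deaths ("1.35 to 1.74, with a mean of 1.60")

def relative_spending(deaths_ratio: float, alpha: float = ALPHA) -> float:
    """Spending on catastrophes `deaths_ratio` times as deadly, relative to the baseline."""
    return deaths_ratio ** (-alpha)

if __name__ == "__main__":
    ratio = relative_spending(1_000)  # wars 1,000 times as deadly
    print(f"{ratio:.3e}")  # 1.585e-05
    print(f"{ratio:.5%}")  # 0.00158%, matching the figure above
```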
Johannes, as he often does, said it better than I could!