Great post!
It is worth noting that most of the expected value of reducing existential risk comes from worlds where the time of perils hypothesis (TOP) is true and the post-peril risk is low, such that the longterm future should be discounted at close to the lowest possible rate. In this case, a reduction in existential risk in the next 100 years would not differ much from a reduction in total existential risk, and therefore the mistakes you mention do not apply.
To give an example, suppose existential risk is 10 % per century for 3 centuries[1], and then drops to roughly 0, so the risk over the next 3 centuries is 27.1000 % (= 1 - (1 − 0.1)^3). If one decreases bio risk by 1 % in relative terms for 1 century, from 1 % to 0.99 % (i.e. by 0.01 pp), the new risk for the next century would be 9.99 % (= 10 − 0.01). So the new risk for the next 3 centuries would be 27.0919 % (= 1 - (1 − 0.0999)*(1 − 0.1)^2). Therefore the reduction of the total risk would be 0.0081 pp (= 27.1000 − 27.0919), i.e. very similar to the 0.01 pp reduction of bio risk during the next century.
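The arithmetic above can be checked with a short sketch (the `cumulative_risk` helper is just for illustration, not anything from the post):

```python
# Worked check of the example above: 10 % existential risk per century
# for 3 centuries, then ~0, with a 0.01 pp bio risk reduction applied
# to the first century.

def cumulative_risk(per_century_risks):
    """Probability of at least one existential catastrophe."""
    survival = 1.0
    for r in per_century_risks:
        survival *= 1.0 - r
    return 1.0 - survival

baseline = cumulative_risk([0.10, 0.10, 0.10])             # 27.1000 %
with_intervention = cumulative_risk([0.0999, 0.10, 0.10])  # 27.0919 %

print(f"Baseline risk:     {baseline:.4%}")
print(f"With intervention: {with_intervention:.4%}")
print(f"Reduction:         {(baseline - with_intervention) * 100:.4f} pp")
```

This confirms that the 0.0081 pp reduction in total risk is close to the 0.01 pp reduction in first-century risk, as claimed.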
As a result, under TOP, I think reducing bio existential risk by 0.01 pp roughly decreases total existential risk by 0.01 pp. For the conservative estimate of 10^28 expected future lives given in Newberry 2021 (Table 3), that would mean saving 10^24 (= 10^(28 − 4)) lives, or 4*10^12 life/$ (= 10^24/(250*10^9)). If TOP only has a 1 in a trillion chance of being true, the cost-effectiveness would still be 4 life/$, over 4 OOMs better than GiveWell's top charities' cost-effectiveness of 2.5*10^-4 life/$ (= 1⁄4000).
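The same cost-effectiveness estimate can be sketched in a few lines, taking the 250*10^9 $ in the denominator above as the assumed total spending and the other inputs as given in the comment:

```python
# Sketch of the cost-effectiveness estimate above. The 10^28 expected
# future lives is the conservative figure from Newberry 2021 (Table 3);
# the $250 billion spending and the 1-in-a-trillion probability of TOP
# are the figures used in the comment.

risk_reduction = 1e-4      # 0.01 pp reduction in total existential risk
future_lives = 1e28        # conservative expected future lives
spending = 250e9           # assumed spending in dollars

lives_saved = risk_reduction * future_lives    # 10^24 lives
cost_effectiveness = lives_saved / spending    # 4*10^12 life/$

p_top = 1e-12              # 1 in a trillion chance TOP is true
discounted = cost_effectiveness * p_top        # 4 life/$

givewell = 1 / 4000        # 2.5*10^-4 life/$
print(f"Ratio to GiveWell top charities: {discounted / givewell:.0f}x")
```

Even after the trillion-fold discount for TOP, the ratio comes out around 16,000, i.e. over 4 orders of magnitude.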
On the one hand, I am very uncertain about how high bio existential risk is this century. If it is something like 10^-6 (i.e. 0.01 % of the 1 % I assumed above), the cost-effectiveness of reducing bio risk would be similar to that of GiveWell's top charities. On the other hand, a 1 in a trillion chance of TOP being true sounds too low, and a future value of 10^28 lives is probably an underestimate. Overall, I guess longtermist interventions will tend to be much more cost-effective.
FWIW, I liked David's series on Existential risk pessimism and the time of perils. I agree there is a tension between high existential risk this century and TOP being reasonably likely. I guess existential risk is not as high as commonly assumed, because, under moral realism, superintelligent AI disempowering humans does not have to lead to a loss of value, but I do not know.
[1] In The Precipice, Toby Ord guesses total existential risk to be 3 times (= (1/2)/(1/6)) that from 2021 to 2120.
Thanks Vasco! Yes, as in my previous paper, though while (a) most of the points I'm making get some traction against models in which the time of perils hypothesis is true, (b) they get much more traction if the time of perils hypothesis is false.
For example, on the first mistake, the gap between cumulative and per-unit risk is lower if risk is concentrated in a few centuries (time of perils) than if it's spread across many centuries. And on the second mistake, the importance of background risk is reduced if that background risk is only going to be around at a meaningful level for a few centuries.
I think that the third mistake (ignoring population dynamics) should retain much of its importance on time of perils models. Actually, it might be more important insofar as those models tend to give higher probability to large-population scenarios coming about. I’d be interested to see how the numbers work out here, though.