I was just pointing out that if we take an action to increase near-term extinction risk to 100% (i.e. we deliberately go extinct), then we reduce the expected value of the future to zero. That’s an undeniable way that a change to near-term extinction risk can have an astronomical effect on the expected value of the future, provided only that the future has astronomical expected value before we make the intervention.
Agreed. However, I would argue that increasing the near-term risk of human extinction to 100% would be astronomically difficult/costly. In the framework of my previous comment, that would eventually require moving probability mass from world 100 to world 0, which I believe is just as hard as moving mass from world 0 to world 100.
Here’s another way of explaining it. In this case the probability p_100 of U_100 is given by the huge product:
P(making it through year 1) × P(making it through year 2 | we made it through year 1) × … and so on.
Changing near-term extinction risk changes the first factor in this product, so it would be weird if it didn’t change p_100 as well. The same logic doesn’t apply to the global health interventions that you’re citing as an analogy, which is what makes existential risk special.
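For intuition, here is a minimal numerical sketch of that product (the yearly survival probabilities and the size of the boost are purely illustrative, not estimates):

```python
import math

# Purely illustrative: constant 99% chance of making it through each of the next 100 years.
annual_survival = [0.99] * 100

# p_100 = P(year 1) * P(year 2 | year 1) * ... * P(year 100 | year 99)
p_100 = math.prod(annual_survival)

# Halve the first year's extinction risk (1% -> 0.5%), leaving the later factors untouched.
p_100_boosted = math.prod([0.995] + annual_survival[1:])

print(p_100, p_100_boosted, p_100_boosted / p_100)
# The ratio is exactly 0.995 / 0.99: a change to the first factor passes straight
# through to p_100, so if U_100 is astronomical, the change in expected value
# p_100 * U_100 is astronomical too.
```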
One can make a similar argument for the effect size of global health and development interventions. Assuming the effect size is strictly decreasing, and denoting by X_i the effect size at year i, P(X_N > x) = P(X_1 > x) P(X_2 > x | X_1 > x) … P(X_N > x | X_{N-1} > x). Ok, P(X_N > x) increases with P(X_1 > x) on priors. However, it could still be the case that the effect size will decay to practically 0 within a few decades or centuries.
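As a toy illustration of this point (the persistence probabilities below are made up, not estimates of any real intervention):

```python
import math

# Made-up numbers: the effect size stays above x each year with probability 0.9,
# i.e. P(X_1 > x) = 0.9 and P(X_i > x | X_{i-1} > x) = 0.9, for N = 200 years.
conditional_persistence = [0.9] * 200

p_N = math.prod(conditional_persistence)

# Push the first factor, P(X_1 > x), all the way to 1 (the most it can be).
p_N_boosted = math.prod([1.0] + conditional_persistence[1:])

print(p_N, p_N_boosted)
# Both remain below 1e-9: boosting the first factor helps proportionally, but it is
# bounded by 1, so the product still decays to practically 0 within a couple of centuries.
```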
I don’t see why the same argument holds for global health interventions?
Why should X_N > x require X_1 > x?
It is not a strict requirement, but it is an arguably reasonable assumption. Are there any interventions whose estimates of (posterior) counterfactual impact, in terms of expected total hedonistic utility (not e.g. preventing the extinction of a random species), do not decay to 0 in at most a few centuries? From my perspective, their absence establishes a strong prior against persistent/​increasing effects.
Sure, but once you’ve made that assumption, you no longer need to rely on an argument about shifts to P(X_1 > x) being cancelled out by shifts to P(X_n > x) for larger n (which, if I understand correctly, is the argument you’re making about existential risk).
If P(X_N > x) is very small to begin with for some large N, then it will stay small even if we adjust P(X_1 > x) by a lot (we can’t make it bigger than 1!). So we can safely say, under your assumption, that adjusting the P(X_1 > x) factor by a large amount does influence P(X_N > x) as well; it’s just that it can’t make it not small.
The existential risk set-up is fundamentally different. We are assuming the future has astronomical value to begin with, before we intervene. That means non-tiny changes to P(making it through the next year) must have an astronomical effect on expected value too (unless there is some conspiracy among the probabilities of making it through later years that precisely cancels this out, but that seems very weird, and not something you can justify by pointing to global health as an analogy).
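To make the contrast concrete, here is a hedged sketch reusing the illustrative numbers from above, with U_100 as an arbitrary stand-in for an astronomically valuable future:

```python
import math

U_100 = 1e30                      # arbitrary stand-in for an astronomically valuable future
annual_survival = [0.99] * 100    # same purely illustrative survival probabilities as above

ev_before = math.prod(annual_survival) * U_100
ev_after = math.prod([0.995] + annual_survival[1:]) * U_100

print(ev_after - ev_before)
# Roughly 1.8e27: a non-tiny change to the first factor changes the expected value
# of the future by an astronomical amount, absent a conspiracy among the later factors.
```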
Thanks for the discussion, Toby. I do not plan to follow up further, but, for reference/​transparency, I maintain my guess that the future is astronomically valuable, but that no interventions are astronomically cost-effective.