Correct me if I am wrong, but I think you are suggesting something like the following. If there is a 99 % chance we are in future 100 (U_100 = 10^100), and a 1 % (= 1 − 0.99) chance we are in future 0 (U_0 = 0), i.e. if it is very likely we are in an astronomically valuable world[1], we can astronomically increase the expected value of the future by decreasing the chance of future 0. I do not agree. Even if the chance of future 0 is decreased by 100 %, I would say all its probability mass (1 pp) would be moved to nearby worlds whose value is not astronomical. For example, the expected value of the future would only increase by 0.09 (= 0.01 × 9) if all the probability mass were moved to future 1 (U_1 = 9).
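To make the arithmetic concrete, here is a minimal sketch of the example above (the three worlds and their values are the hypothetical ones from this comment):

```python
from fractions import Fraction

# Hypothetical worlds from the example above: future 100 (U_100 = 10**100),
# future 1 (U_1 = 9) and future 0, i.e. extinction (U_0 = 0).
u = {"future_100": 10**100, "future_1": 9, "future_0": 0}

p_before = {"future_100": Fraction(99, 100),
            "future_1": Fraction(0),
            "future_0": Fraction(1, 100)}

# Future 0 is fully eliminated, but its 1 pp of probability mass lands
# on the nearby, non-astronomical future 1, not on future 100.
p_after = {"future_100": Fraction(99, 100),
           "future_1": Fraction(1, 100),
           "future_0": Fraction(0)}

def expected_value(p):
    return sum(p[w] * u[w] for w in u)

delta = expected_value(p_after) - expected_value(p_before)
print(delta)  # 9/100, i.e. the expected value only increases by 0.09
```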
The claim you quoted here was a lot simpler than this.
I was just pointing out that if we take an action to increase near-term extinction risk to 100% (i.e. we deliberately go extinct), then we reduce the expected value of the future to zero. That’s an undeniable way that a change to near-term extinction risk can have an astronomical effect on the expected value of the future, provided only that the future has astronomical expected value before we make the intervention.
It is not that I expect us to get worse at mitigation.
But this is more or less a consequence of your claims, isn't it?
The cost of moving physical mass increases with distance, and I guess the cost of moving probability mass increases (maybe exponentially) with value-distance (difference between the value of the worlds).
I don’t see any basis for this assumption. For example, it is contradicted by my example above, where we deliberately go extinct, and therefore move all of the probability weight from U_100 to U_0, despite their huge value difference.
Or I suppose maybe I do agree with your assumption (as I can't think of any counter-examples I would actually endorse in practice); I just disagree with how you're explaining its consequences. I would say it means the future does not have astronomical expected value, not that it does have astronomical value but that we can't influence it (since it seems clear we can if it does).
(If I remember our exchange on the Toby Ord post correctly, I think you made some claim along the lines of: there are no conceivable interventions which would allow us to increase extinction risk to ~100%. This seems like an unlikely claim to me, but it's also, I think, a different argument from the one you're making in this post anyway.)
Here’s another way of explaining it. In this case the probability p_100 of U_100 is given by the huge product:
P(making it through next year) × P(making it through the year after | we make it through year 1) × … etc.
Changing near-term extinction risk is influencing the first factor in this product, so it would be weird if it didn’t change p_100 as well. The same logic doesn’t apply to the global health interventions that you’re citing as an analogy, and makes existential risk special.
In fact I would say it is your claim (that the later factors get modified too in just such a special way as to cancel out the drop in the first factor) which involves near-term interventions having implausible effects on the future that we shouldn’t a priori expect them to have.
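A toy version of this product makes the point explicit (the yearly survival probabilities below are invented purely for illustration):

```python
import math

# Toy model: p_100 is a product of yearly survival probabilities,
# here 1,000 hypothetical years at 99.9% survival each.
yearly = [0.999] * 1000
baseline = math.prod(yearly)

# An intervention on near-term extinction risk changes only the first
# factor. Unless the later factors conspire to cancel it, the whole
# product, and hence the expected value riding on it, scales by the
# same ratio.
first_halved = [yearly[0] * 0.5] + yearly[1:]
print(math.prod(first_halved) / baseline)  # ~0.5
```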
I was just pointing out that if we take an action to increase near-term extinction risk to 100% (i.e. we deliberately go extinct), then we reduce the expected value of the future to zero. That’s an undeniable way that a change to near-term extinction risk can have an astronomical effect on the expected value of the future, provided only that the future has astronomical expected value before we make the intervention.
Agreed. However, I would argue that increasing the near-term risk of human extinction to 100 % would be astronomically difficult/costly. In the framework of my previous comment, that would eventually require moving probability mass from world 100 to world 0, which I believe is about as hard as moving mass from world 0 to world 100.
Here’s another way of explaining it. In this case the probability p_100 of U_100 is given by the huge product:
P(making it through next year) × P(making it through the year after | we make it through year 1) × … etc.
Changing near-term extinction risk is influencing the first factor in this product, so it would be weird if it didn’t change p_100 as well. The same logic doesn’t apply to the global health interventions that you’re citing as an analogy, and makes existential risk special.
One can make a similar argument for the effect size of global health and development interventions. Assuming the effect size is strictly decreasing, and denoting by X_i the effect size in year i, P(X_N > x) = P(X_1 > x) P(X_2 > x | X_1 > x) … P(X_N > x | X_{N−1} > x). OK, P(X_N > x) increases with P(X_1 > x) on priors. However, it could still be the case that the effect size will decay to practically 0 within a few decades or centuries.
I don’t see why the same argument holds for global health interventions…?
Why should X_N > x require X_1 > x…?
It is not a strict requirement, but it is an arguably reasonable assumption. Are there any interventions whose estimates of (posterior) counterfactual impact, in terms of expected total hedonistic utility (not e.g. preventing the extinction of a random species), do not decay to 0 in at most a few centuries? From my perspective, their absence establishes a strong prior against persistent/​increasing effects.
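A toy chain (with an invented retention probability) illustrates both this decay and why it is compatible with P(X_N > x) increasing in P(X_1 > x):

```python
# Toy model of the chain factorisation above, with invented numbers:
# each year the effect stays above x with probability `retention`,
# conditional on having stayed above x the year before.
retention = 0.95

def p_effect_persists(p_first_year, n_years):
    # P(X_N > x) = P(X_1 > x) * retention**(N - 1)
    return p_first_year * retention ** (n_years - 1)

# Doubling P(X_1 > x) doubles P(X_200 > x), so the first factor does
# matter, but after two centuries both values are practically 0.
print(p_effect_persists(0.5, 200))  # ~2e-5
print(p_effect_persists(1.0, 200))  # ~4e-5, still practically 0
```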
Sure, but once you’ve assumed that already, you don’t need to rely any more on an argument about shifts to P(X_1 > x) being cancelled out by shifts to P(X_n > x) for larger n (which if I understand correctly is the argument you’re making about existential risk).
If P(X_N > x) is very small to begin with for some large N, then it will stay small, even if we adjust P(X_1 > x) by a lot (we can't make it bigger than 1!). So we can safely say, under your assumption, that adjusting the P(X_1 > x) factor by a large amount does influence P(X_N > x) as well; it's just that it can't make it not small.
The existential risk set-up is fundamentally different. We are assuming the future has astronomical expected value to begin with, before we intervene. That now means non-tiny changes to P(making it through the next year) must have astronomical effects on expected value too (unless there is some weird conspiracy among the probabilities of making it through later years which precisely cancels this out, but that seems very weird, and not something you can justify by pointing to global health as an analogy).
Thanks for the discussion, Toby. I do not plan to follow up further, but, for reference/​transparency, I maintain my guess that the future is astronomically valuable, but that no interventions are astronomically cost-effective.