Formalizing Extinction Risk Reduction vs. Longtermism
Edit: As a commenter pointed out, I mean extinction risk rather than x-risk in this post. Double edit: I’m not even sure exactly what I meant, and I think the whole x-risk terminology needs to be cleaned up a lot.
There has been a string of recent posts about ~~x-risk~~ extinction risk reduction and longtermism: why they are basically the same, why they are different. I tried to write up a more formal outline that generalizes the problem (crossposted from a previous comment).
Confidence: Moderate. I can’t identify specific parts where I could be wrong (though ironing out a definition of “surviving” would be good), but I also haven’t talked to many people about this.
Definitions
EV[lightcone] is the current expected utility in our lightcone.
EV[survivecone] is the expected utility in our lightcone if we “survive”[1] as a society.
EV[deathcone] is the expected utility in our lightcone if we “die”.
P(survive) + P(die) = 1
Take
~~x-risk~~ extinction risk reduction to mean increasing P(survive).
Lemma
EV[lightcone] = P(survive) × EV[survivecone] + P(die) × EV[deathcone]
equivalently
EV[survivecone] = EV[lightcone | survive]
EV[deathcone] = EV[lightcone | death]
(thanks kasey)
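A minimal numeric sketch of the lemma (the probability and utilities below are made-up placeholders, not estimates):

```python
# Toy illustration of the lemma: EV[lightcone] decomposes by
# conditioning on survival. All numbers are made-up placeholders.

p_survive = 0.8            # P(survive)
ev_survivecone = 100.0     # EV[lightcone | survive]
ev_deathcone = 1.0         # EV[lightcone | die]

ev_lightcone = p_survive * ev_survivecone + (1 - p_survive) * ev_deathcone
print(ev_lightcone)  # 0.8 * 100 + 0.2 * 1 = 80.2
```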
Theorem
If EV[survivecone] < EV[deathcone], then ~~x-risk~~ extinction risk reduction is negative EV.[2]
If EV[survivecone] > EV[deathcone], then ~~x-risk~~ extinction risk reduction is positive EV.
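To make the theorem concrete, here is a toy check under footnote 2’s simplifying assumption that nudging P(survive) leaves both conditional EVs fixed, so the marginal value of extinction risk reduction is just EV[survivecone] − EV[deathcone]. The numbers are again placeholders:

```python
# Toy check of the theorem: with both conditional EVs held fixed,
#   d EV[lightcone] / d P(survive) = EV[survivecone] - EV[deathcone],
# so the sign of that difference decides whether extinction risk
# reduction is positive or negative EV.

def ev_lightcone(p_survive, ev_survivecone, ev_deathcone):
    return p_survive * ev_survivecone + (1 - p_survive) * ev_deathcone

# EV[survivecone] > EV[deathcone]: a small bump to P(survive) helps.
print(ev_lightcone(0.81, 100.0, 1.0) - ev_lightcone(0.80, 100.0, 1.0))  # ≈ +0.99

# EV[survivecone] < EV[deathcone]: the same bump hurts.
print(ev_lightcone(0.81, -50.0, 1.0) - ev_lightcone(0.80, -50.0, 1.0))  # ≈ -0.51
```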
Corollary
If Derivative[3](P(survive)) × EV[survivecone] < P(survive) × Derivative(EV[survivecone]), it’s more effective to work on improving EV[survivecone].[4]
If Derivative(P(survive)) × EV[survivecone] > P(survive) × Derivative(EV[survivecone]), it’s more effective to reduce ~~existential~~ extinction risks.
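And a toy version of the corollary’s comparison of marginal returns. Here `dp_de` and `dev_de` stand in for the derivatives from footnote 3 (with respect to effort put into each respective cause), and every number is a made-up placeholder:

```python
# Toy illustration of the corollary: compare the marginal EV of one
# unit of effort spent on each cause. All values are placeholders.

p_survive = 0.8         # P(survive)
ev_survivecone = 100.0  # EV[survivecone]
dp_de = 0.001           # Derivative(P(survive)) w.r.t. risk-reduction effort
dev_de = 0.5            # Derivative(EV[survivecone]) w.r.t. trajectory effort

marginal_risk_reduction = dp_de * ev_survivecone  # 0.1
marginal_ev_improvement = p_survive * dev_de      # 0.4

if marginal_risk_reduction > marginal_ev_improvement:
    print("reduce extinction risk")
else:
    print("improve EV[survivecone]")  # this branch wins with these numbers
```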
- ^
I like to think of surviving as meaning becoming a grabby civilization, but maybe there is a better way to think of it.
- ^
Here I’m just assuming extinction risk reduction doesn’t affect the EVs; obviously not true, but kept for simplicity.
- ^
Where we are differentiating with respect to effort put into each respective cause.
- ^
This could be true even if the future were positive in expectation, although it would be a very peculiar situation if that were the case (which is sort of the reason we ended up focusing on extinction risk reduction).
Might be better to be more explicit about extinction risk reduction vs existential risk reduction. If EV[survivecone] < EV[deathcone], then extinction risk reduction seems negative EV (ignoring acausal stuff), but increasing the probability of extinction would plausibly reduce existential risk and be positive EV according to your simplified model, and there may be other ways (non-extinction-related ways) to reduce s-risks that are existential while also being positive EV to pursue.
Yep, completely agree, good catch.