Formalizing Extinction Risk Reduction vs. Longtermism

Edit: As a commenter pointed out, I mean extinction risk rather than x-risk in this post. Double edit: I’m not even sure exactly what I meant, and I think the whole x-risk terminology needs to be cleaned up a lot.

There has been a string of recent posts about extinction risk reduction and longtermism: why they are basically the same, why they are different. I tried to write up a more formal outline that generalizes the problem (crossposted from a previous comment).

Confidence: Moderate. I can’t identify specific parts where I could be wrong (though ironing out a definition of surviving would be good), but I also haven’t talked to many people about this.

Definitions

  • EV[lightcone] is the current expected utility in our lightcone.

  • EV[survivecone] is the expected utility in our lightcone if we “survive”[1] as a society.

  • EV[deathcone] is the expected utility in our lightcone if we “die”.

  • P(survive) + P(die) = 1

  • Take extinction risk reduction to mean increasing P(survive).

Lemma

  • EV[lightcone] = P(survive) × EV[survivecone] + P(die) × EV[deathcone]

equivalently

  • EV[survivecone] = EV[lightcone | survive]

  • EV[deathcone] = EV[lightcone | die]

(thanks kasey)
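The lemma is just the law of total expectation over the survive/die partition. A minimal numeric sketch (all values are hypothetical, chosen only for illustration):

```python
# Hypothetical inputs: P(survive) and the two conditional EVs.
p_survive = 0.7
p_die = 1 - p_survive            # P(survive) + P(die) = 1
ev_survivecone = 100.0           # EV[lightcone | survive]
ev_deathcone = -5.0              # EV[lightcone | die]

# Lemma: EV[lightcone] = P(survive)·EV[survivecone] + P(die)·EV[deathcone]
ev_lightcone = p_survive * ev_survivecone + p_die * ev_deathcone
print(ev_lightcone)  # 68.5
```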

Theorem

  • If EV[survivecone] < EV[deathcone], extinction risk reduction is negative EV.[2]

  • If EV[survivecone] > EV[deathcone], extinction risk reduction is positive EV.
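The theorem follows from the lemma: holding the conditional EVs fixed (footnote 2), the derivative of EV[lightcone] with respect to P(survive) is EV[survivecone] − EV[deathcone], so the sign of that difference determines whether raising P(survive) helps. A sketch with hypothetical numbers where the survivecone is worse than the deathcone:

```python
def ev_lightcone(p_survive, ev_survive, ev_death):
    # Lemma: total EV as a probability-weighted mix of the two cones.
    return p_survive * ev_survive + (1 - p_survive) * ev_death

# Hypothetical case: EV[survivecone] = -10 < EV[deathcone] = 0.
before = ev_lightcone(0.5, -10.0, 0.0)  # -5.0
after = ev_lightcone(0.6, -10.0, 0.0)   # -6.0

# Raising P(survive) from 0.5 to 0.6 lowered total EV, as the theorem predicts.
print(before, after)
```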

Corollary

  • If Derivative[3](P(survive)) × EV[survivecone] < P(survive) × Derivative(EV[survivecone]), it’s more effective to work on improving EV[survivecone].[4]

  • If Derivative(P(survive)) × EV[survivecone] > P(survive) × Derivative(EV[survivecone]), it’s more effective to reduce extinction risk.
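This corollary is the product rule applied to P(survive) × EV[survivecone]: effort should go to whichever marginal term is larger. A sketch of the comparison, with all of the marginal rates below being hypothetical placeholders:

```python
# Hypothetical current state and marginal returns per unit of effort.
p = 0.5              # P(survive)
ev = 40.0            # EV[survivecone]
dp_deffort = 0.01    # Derivative(P(survive)): gain in P(survive) per unit effort
dev_deffort = 1.0    # Derivative(EV[survivecone]): gain in EV per unit effort

# Product rule: d/d(effort)[P·EV] = P'·EV + P·EV'; compare the two terms.
marginal_risk_reduction = dp_deffort * ev    # 0.4
marginal_ev_improvement = p * dev_deffort    # 0.5

# Here 0.4 < 0.5, so effort on improving EV[survivecone] wins at the margin.
print(marginal_risk_reduction, marginal_ev_improvement)
```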

  1. ^

    I like to think of surviving as meaning becoming a grabby civilization, but maybe there is a better way to think of it.

  2. ^

    Here I’m assuming, for simplicity, that extinction risk reduction doesn’t affect the conditional EVs; that’s obviously not exactly true.

  3. ^

    Where we are differentiating with respect to the effort put into each respective cause.

  4. ^

    This could be true even if the future were positive in expectation, although it would be a very peculiar situation if that were the case (which is sort of the reason we ended up on extinction risk reduction).