One potential argument against (your first bullet point): reducing human civilization extinction risk increases the expected influence humans have over the future. If human influence over the future is expected to make the future better (worse), we want to increase (decrease) it.
A post with some potential reasons as to why the future might not be so great (+ a Fermi estimate).
A more formalized version below (which probably doesn’t add anything substantive beyond what I already said).
Definitions
EV[lightcone] is the current expected utility in our lightcone.
EV[survivecone] is the expected utility in our lightcone if we “survive” as a species.
EV[deathcone] is the expected utility in our lightcone if we “die”.
P(survive) + P(die) = 1
Take x-risk reduction to mean increasing P(survive).
*I like to think of surviving as meaning becoming a grabby civilization, but maybe there is a better way to think of it.
Lemma
EV[lightcone] = P(survive) × EV[survivecone] + P(die) × EV[deathcone]
where
EV[survivecone] = EV[lightcone | survive]
EV[deathcone] = EV[lightcone | death]
(thanks kasey)
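For completeness, this is just the law of total expectation applied to the utility of our lightcone (a sketch of the step using the conditional-expectation reading above; U is my notation, not the original's):

```latex
\mathrm{EV}[\mathrm{lightcone}]
  = \mathbb{E}[U]
  = P(\mathrm{survive})\,\mathbb{E}[U \mid \mathrm{survive}]
    + P(\mathrm{die})\,\mathbb{E}[U \mid \mathrm{die}]
  = P(\mathrm{survive})\,\mathrm{EV}[\mathrm{survivecone}]
    + P(\mathrm{die})\,\mathrm{EV}[\mathrm{deathcone}]
```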
Theorem
If EV[survivecone] < EV[deathcone], then x-risk reduction is negative EV.
If EV[survivecone] > EV[deathcone], then x-risk reduction is positive EV.
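A quick way to see the sign (my own check, substituting P(die) = 1 − P(survive) from the definitions and holding the two conditional values fixed):

```latex
\mathrm{EV}[\mathrm{lightcone}]
  = P(\mathrm{survive})\,\mathrm{EV}[\mathrm{survivecone}]
  + \bigl(1 - P(\mathrm{survive})\bigr)\,\mathrm{EV}[\mathrm{deathcone}]
\qquad\Longrightarrow\qquad
\frac{\partial\,\mathrm{EV}[\mathrm{lightcone}]}{\partial P(\mathrm{survive})}
  = \mathrm{EV}[\mathrm{survivecone}] - \mathrm{EV}[\mathrm{deathcone}]
```

So raising P(survive) raises EV[lightcone] exactly when EV[survivecone] > EV[deathcone], and lowers it when the inequality is reversed.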
Corollary
If Derivative(P(survive)) × EV_future < P(survive) × Derivative(EV_future), it’s more effective to work on improving EV[survivecone].
If Derivative(P(survive)) × EV_future > P(survive) × Derivative(EV_future), it’s more effective to reduce existential risks.
And the first case could hold even if the future is positive in expectation, although it would be a very peculiar situation if that were the case (which is sort of the reason we ended up on x-risk reduction).
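To make the corollary concrete, here is a toy numeric sketch (all numbers and the two sensitivity parameters are made up for illustration; I read EV_future as EV[survivecone] and set EV[deathcone] to 0 for simplicity). It compares the marginal EV of a unit of effort spent on x-risk reduction against a unit spent on improving the future, and lands in the "peculiar" case where the future is positive in expectation yet improving it still beats reducing x-risk.

```python
# Toy comparison of the two marginal terms in the corollary.
# All numbers are illustrative, not estimates.

p_survive = 0.5        # current P(survive)
ev_future = 10.0       # EV[survivecone]; positive in expectation
ev_death = 0.0         # EV[deathcone], taken as 0 for simplicity

# Marginal effects of one unit of effort (made-up sensitivities):
d_p_survive = 0.001    # how much a unit of x-risk work raises P(survive)
d_ev_future = 0.1      # how much a unit of trajectory work raises EV[survivecone]

# Marginal EV of x-risk reduction:
#   Derivative(P(survive)) * (EV[survivecone] - EV[deathcone])
marginal_xrisk = d_p_survive * (ev_future - ev_death)

# Marginal EV of improving the future:
#   P(survive) * Derivative(EV[survivecone])
marginal_improve = p_survive * d_ev_future

print(f"x-risk reduction:     {marginal_xrisk:.4f}")   # 0.0100
print(f"improving EV_future:  {marginal_improve:.4f}")  # 0.0500

# Here the future is positive in expectation, yet improving EV[survivecone]
# is the better marginal use of effort -- the corollary's first case.
```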