I think the expected payoff and the reduction in P(extinction) are just equivalent. Like, a 1% chance of saving 25b lives is the same as reducing P(extinction) from 7% to 6%; that’s what a “1% chance of saving” means, because:
P(extinction) = 1 - P(extinction reduction from me) - P(extinction reduction from all other causes)
So, if I had a 100% chance of saving 25b lives, then that’d be a 100% reduction in extinction risk.
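To make the equivalence concrete, here’s a minimal arithmetic sketch in Python. The 25b, 7%, and 6% figures are the ones from above; the variable names are just illustrative:

```python
import math

population = 25e9  # the 25b lives at stake (figure from above)

# Framing 1: a 1% chance of saving everyone.
ev_chance_of_saving = 0.01 * population         # 250 million lives in expectation

# Framing 2: reducing P(extinction) from 7% to 6%.
ev_risk_reduction = (0.07 - 0.06) * population  # also 250 million lives in expectation

# The two framings give the same expected payoff (up to float rounding).
assert math.isclose(ev_chance_of_saving, ev_risk_reduction)
```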
Of course, what we care about is the counterfactual. So if there’s already only a 50% chance of extinction, you could say colloquially that I brought P(extinction) from 0.5 to 0, and that I therefore had a “100% chance of saving 25b lives”. But that’s not quite right: I should only get credit for reducing it from 0.5 to 0, so in that scenario it would be better to say I had a 50% chance of saving 25b, and that’s as high as that can get.
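And a minimal sketch of the counterfactual point, assuming the 50% baseline and the 25b figure from above (the variable names are again just illustrative):

```python
population = 25e9    # 25b lives (figure from above)
p_baseline = 0.5     # extinction risk before my intervention
p_after = 0.0        # extinction risk after my intervention

# I only get credit for the risk I actually removed, and that can never
# exceed the baseline risk itself.
counterfactual_credit = p_baseline - p_after                 # 0.5, the maximum possible here

expected_lives_saved = counterfactual_credit * population    # 12.5 billion in expectation
print(f"credit: {counterfactual_credit:.0%}, expected lives saved: {expected_lives_saved:,.0f}")
```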