How much would I personally have to reduce X-risk to make this the optimal decision?
Shouldn’t this exercise start with the current P(extinction), and then calculate how much you need to reduce that probability? I think your approach is comparing two outcomes: save 25B lives with probability p, or save 20,000 lives with probability 1. Then the first option has higher expected value if p > 20,000/25B. But this isn’t answering your question of personally reducing x-risk.
Also, I think you should calculate marginal expected value, i.e., the value of additional resources conditional on the resources already allocated, to account for diminishing marginal returns.
Hey, thank you for this comment. We actually started by thinking about P(extinction) but came to believe that it wasn’t relevant, because in terms of expected value, reducing P(extinction) from 95% to 94% is equivalent to reducing it from 3% to 2%, or from any other amount to any other amount (keeping the difference the same). All that matters is the change in P(extinction).
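To make that concrete, here’s a minimal sketch in Python (the 25B-lives figure is the one used in this thread; all numbers are purely illustrative) of why only the change in P(extinction) enters the expected-value calculation:

```python
# Minimal sketch: expected lives saved depends only on the change in P(extinction),
# not on the starting level. 25e9 is the lives-at-stake figure used in this thread.
LIVES_AT_STAKE = 25e9

def expected_lives_saved(p_before: float, p_after: float) -> float:
    """Expected lives saved by moving P(extinction) from p_before to p_after."""
    return (p_before - p_after) * LIVES_AT_STAKE

print(expected_lives_saved(0.95, 0.94))  # ~250 million
print(expected_lives_saved(0.03, 0.02))  # ~250 million -- same 1-point reduction, same EV
```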
Also, in terms of marginal expected value, that would be the next step in this process. I’m not saying with this post, “Go work on X-Risk because its marginal EV is likely to be X”; I’m rather saying, “You should go work on X-Risk if its marginal EV is above X.” But to be honest, I have no idea how to figure the first question out. I’d really like to, but I don’t know of anyone who has even attempted to estimate how much a particular intervention might reduce x-risk (please, forum, tell me where I can find this).
I’m not sure I follow this exercise. Here’s how I’m thinking about it:
Option A: spend your career on malaria.
Cost: one career
Payoff: save 20k lives with probability 1.
Option B: spend your career on x-risk.
Cost: one career
Payoff: save 25B lives with probability p (=P(prevent extinction)), save 0 lives with probability 1-p.
Expected payoff: 25B*p.
Since the costs are the same, we can ignore them. Then you’re indifferent between A and B if p = 8×10^-7, and B is better if p > 8×10^-7.
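For what it’s worth, here’s a minimal sketch of that breakeven comparison (figures as in the exercise: 20k lives for the malaria career, 25B if extinction is prevented; purely illustrative):

```python
# Minimal sketch of the breakeven: x-risk work has the higher expected payoff
# exactly when p * 25e9 exceeds 20,000.
LIVES_MALARIA = 20_000   # lives saved with certainty by the malaria career
LIVES_XRISK = 25e9       # lives saved if extinction is prevented

breakeven_p = LIVES_MALARIA / LIVES_XRISK
print(breakeven_p)  # ~8e-07

def better_option(p: float) -> str:
    """Which career has the higher expected payoff, given P(prevent extinction) = p."""
    return "x-risk" if p * LIVES_XRISK > LIVES_MALARIA else "malaria"

print(better_option(1e-6))  # x-risk  (1e-6 > 8e-7)
print(better_option(1e-7))  # malaria (1e-7 < 8e-7)
```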
But I’m not sure how this maps to a reduction in P(extinction).
I think that the expected payoff and the reduction in P(extinction) are just equivalent. Like, a 1% chance of saving 25b is the same as reducing P(extinction) from 7% to 6%; that’s what a “1% chance of saving” means, because:
P(extinction) = 1 - P(extinction averted by me) - P(extinction averted by all other causes)
So, if I had a 100% chance of saving 25b lives, then that’d be a 100% reduction in extinction risk.
Of course, what we care about is the counterfactual. So if there’s already only a 50% chance of extinction, you could say colloquially that I brought P(extinction) from 0.5 to 0, and therefore that I had a “100% chance of saving 25b lives.” But that’s not quite right: I should only get credit for reducing it from 0.5 to 0, so it would be better in that scenario to say that I had a 50% chance of saving 25b, and that’s as high as that chance can get.
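If it helps, here’s a minimal sketch of that counterfactual point using the 50%-baseline numbers above (all figures illustrative): the “chance of saving 25b lives” I can claim is just the reduction in P(extinction) I caused, so it is capped by the pre-existing risk.

```python
# Minimal sketch: the credit I can claim is the reduction in P(extinction) I caused,
# which can never exceed the baseline risk itself.
LIVES_AT_STAKE = 25e9

def chance_of_saving(baseline_risk: float, risk_after_my_work: float) -> float:
    """My 'chance of saving 25b lives' = the reduction in P(extinction) I caused."""
    return baseline_risk - risk_after_my_work

delta = chance_of_saving(0.5, 0.0)  # eliminate a 50% baseline risk entirely
print(delta)                        # 0.5 -- the most credit available in this scenario
print(delta * LIVES_AT_STAKE)       # 12.5 billion expected lives saved
```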