Minor readability suggestion: for very small probabilities, e.g. smaller than 0.01%, state them as N out of 10^k, where N is between 1 and 10, or as N×10^k, where k is negative.
I think numbers smaller than 0.01% are more intuitive presented in these ways than as percentages; I'd normally have to convert out of percentage first to get an intuitive grasp of their magnitude.
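For concreteness, here's a minimal sketch of the conversion being suggested (the helper function and its name are just illustrative):

```python
# Minimal sketch (illustrative only): rewrite a probability given as a
# percentage in the form N x 10^k, with N between 1 and 10.
def as_n_times_ten_to_k(percent: float) -> str:
    probability = percent / 100.0             # e.g. 0.0001% -> 0.000001
    mantissa, exponent = f"{probability:e}".split("e")
    return f"{float(mantissa):g} x 10^{int(exponent)}"

print(as_n_times_ten_to_k(0.0001))   # 1 x 10^-6  (i.e. 1 in a million)
print(as_n_times_ten_to_k(0.00008))  # 8 x 10^-7
```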
Thanks for writing this! I agree that this is a useful exercise.
Some other considerations that may count in favour of neartermist interventions:
1. Nonhuman animals. If we go extinct, factory farming ends, which is good for farmed animals if their lives are bad on average, which seems to be the case. Impacts on wild animals could go either way depending on ethical and empirical assumptions. EA animal work is also plausibly much more cost-effective than EA global health and development work; my guess is hundreds or thousands of times more cost-effective, based on estimates for corporate chicken welfare campaigns and GiveWell recommendations.
2. More speculatively, sentient beings in simulated worlds may be disproportionately in short-lived simulations. Altruistic agents in those simulations will have more impact if they focus on the near term (since their influence will be cut short with the end of the simulation), and if their actions are acausally correlated with our own, we can choose for them to focus on the near term by focusing on the near term ourselves. This can multiply neartermist impact. (Of course, there are also other acausal considerations, like acausal trade, which might not favour neartermist work.)
Thank you for the comment; I agree wholeheartedly with point number 1. It didn't come up in this particular conversation because the person I was talking to wasn't considering the welfare of nonhuman animals (or the EV of pandemic prevention), though personally those are considerations I make, and I hope others make them as well. Do you think I should just do the math out in this post? (It'd be pretty simple, I think, though assuming a moral weight for nonhuman animals seems tricky.)
Point number 2 is very interesting; I haven't seen a write-up on this. Could you link any? It seems like this might make it worth somebody's time to get a good probability on whether we're in a simulation (though I don't know how they'd do it).
Also, pandemic prevention in particular may prevent far more human deaths in expectation than just those averted by preventing extinction, because of the non-extinction-level pandemics it prevents, so considering only extinction risk reduction might significantly understate its value. (But again, this assumes nonhuman animals don't flip the sign of the EV.)
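To make the decomposition concrete, here's a rough sketch; apart from the 25B figure used elsewhere in this thread, the numbers are purely hypothetical placeholders, not estimates from the post or comments:

```python
# Rough sketch with hypothetical placeholder numbers: averted
# non-extinction pandemics add expected lives saved on top of the
# extinction term, so the extinction-only figure understates the EV.
lives_if_extinction_averted = 25e9        # figure used in this thread
p_avert_extinction = 1e-6                 # hypothetical
lives_saved_from_smaller_pandemics = 1e5  # hypothetical, in expectation

ev_extinction_only = p_avert_extinction * lives_if_extinction_averted
ev_total = ev_extinction_only + lives_saved_from_smaller_pandemics

print(ev_extinction_only)  # 25000.0
print(ev_total)            # 125000.0
```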
I don't think it's necessary to do the math with nonhuman animals in the post. You could just mention the considerations I raise and note that you would use different numbers and get different results for animal work. I suppose there could also be higher-leverage human-targeting neartermist work than ETG (earning to give) for GiveWell-recommended charities, and that could be worth mentioning too. The fact that extinction risk reduction could be bad in the near term because of its impacts on nonhuman animals is a separate consideration from other neartermist work simply being better.
On 2, I don't think I've seen a formal write-up anywhere. I think Carl Shulman made this or a similar point in a comment somewhere, but it wasn't fleshed out there, and I'm not sure that what I wrote is what he actually had in mind.
Here are some examples making the near-term case for working on global catastrophes: Catastrophe: Risk and Response (book), Carl Shulman’s 80,000 Hours Podcast, my first 80,000 Hours Podcast, global perspective on 10% agricultural shortfalls (journal article), and US perspective on nuclear winter (journal article).
"How much would I personally have to reduce X-risk to make this the optimal decision?"
Shouldn't this exercise start with the current P(extinction), and then calculate how much you need to reduce that probability? I think your approach is comparing two outcomes: save 25B lives with probability p, or save 20,000 lives with probability 1. Then the first option has higher expected value if p > 20,000/25B. But this isn't answering your question of personally reducing x-risk.
Also, I think you should calculate marginal expected value, i.e., the value of additional resources conditional on the resources already allocated, to account for diminishing marginal returns.
Hey, thank you for this comment. We actually started by thinking about P(extinction), but came to believe it wasn't relevant, because in terms of expected value, reducing P(extinction) from 95% to 94% is equivalent to reducing it from 3% to 2%, or by the same amount from any other starting point. All that matters is the change in P(extinction).
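As a minimal sketch of why only the change matters (using the 25B figure from this thread):

```python
# Minimal sketch: in expected-value terms, only the change in
# P(extinction) matters, not the baseline it starts from.
lives_at_stake = 25e9  # figure used in this thread

def delta_ev(p_before: float, p_after: float) -> float:
    return (p_before - p_after) * lives_at_stake

print(round(delta_ev(0.95, 0.94)))  # 250000000
print(round(delta_ev(0.03, 0.02)))  # 250000000 -- same change, same expected value
```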
Also, in terms of marginal expected value, that would be the next step in this process. I'm not saying with this post, "Go work on x-risk because its marginal EV is likely to be X"; I'm rather saying, "You should go work on x-risk if its marginal EV is above X." But to be honest, I have no idea how to figure that first question out. I'd really like to, but I don't know of anyone who has even attempted to estimate how much a particular intervention might reduce x-risk (please, forum, tell me where I can find this).
I’m not sure I follow this exercise. Here’s how I’m thinking about it:
Option A: spend your career on malaria.
Cost: one career
Payoff: save 20k lives with probability 1.
Option B: spend your career on x-risk.
Cost: one career
Payoff: save 25B lives with probability p (=P(prevent extinction)), save 0 lives with probability 1-p.
Expected payoff: 25B*p.
Since the costs are the same, we can ignore them. Then you're indifferent between A and B if p = 8×10^-7, and B is better if p > 8×10^-7.
But I’m not sure how this maps to a reduction in P(extinction).
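For concreteness, a quick sketch of that comparison (variable names are mine):

```python
# Quick sketch: the careers cost the same, so compare expected lives
# saved and solve for the indifference probability.
lives_malaria = 20_000  # Option A: saved with probability 1
lives_xrisk = 25e9      # Option B: saved with probability p

p_indifference = lives_malaria / lives_xrisk
print(p_indifference)   # 8e-07, i.e. 8 x 10^-7

# Option B has the higher expected value whenever p exceeds this threshold:
p = 1e-6                # hypothetical value of p, for illustration
print(p * lives_xrisk > lives_malaria)  # True
```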
I think the expected payoff and the reduction in P(extinction) are just equivalent. A 1% chance of saving 25B is the same as reducing P(extinction) from 7% to 6%; that's what a "1% chance of saving" means, because:
P(extinction) = 1 - P(I avert extinction) - P(extinction averted by all other causes)
So, if I had a 100% chance of saving 25B lives, that would be a 100% reduction in extinction risk.
Of course, what we care about is the counterfactual. If there's already only a 50% chance of extinction, you could say colloquially that I brought P(extinction) from 0.5 to 0 and therefore had a "100% chance of saving 25B lives", but that's not quite right: I should only get credit for reducing it from 0.5 to 0. In that scenario it would be better to say I had a 50% chance of saving 25B lives, and that's as high as it can get.
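A small numerical sketch of this reply, using the 50% baseline from the example above (the helper function is just illustrative):

```python
# Small sketch: a "chance of saving 25B lives" and a reduction in
# P(extinction) are the same quantity, and the counterfactual reduction
# can never exceed the baseline risk that was there to begin with.
lives = 25e9
baseline_risk = 0.5  # the 50% example used in the reply

def credited_reduction(claimed_reduction: float) -> float:
    # You only get credit for risk that actually existed to be removed.
    return min(claimed_reduction, baseline_risk)

# A 1% chance of saving 25B == reducing P(extinction) by 0.01 (e.g. 7% -> 6%).
print(credited_reduction(0.01) * lives)  # ~2.5e8 expected lives saved

# Claiming to take P(extinction) "from 0.5 to 0 with certainty" still only
# counts as a 0.5 reduction, i.e. a 50% chance of saving 25B lives.
print(credited_reduction(1.0))           # 0.5
```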