Thanks for writing this! I agree that this is a useful exercise.
Some other considerations that may count in favour of neartermist interventions:
1. Nonhuman animals. If we go extinct, factory farming ends, which is good for farmed animals if their lives are bad on average, as seems to be the case. Impacts on wild animals could go either way depending on ethical and empirical assumptions. EA animal work is also plausibly much more cost-effective than EA global health and development work; my guess is hundreds or thousands of times more cost-effective, based on estimates for corporate chicken welfare campaigns and GiveWell recommendations (see the rough sketch after these points).
2. More speculatively, sentient beings in simulated worlds may be disproportionately in short-lived simulations. Altruistic agents in those simulations will have more impact if they focus on the near term, since their influence will be cut short when the simulation ends. If their actions are acausally correlated with our own, we can effectively choose for them to focus on the near term by focusing on the near term ourselves, which can multiply neartermist impact. (Of course, there are also other acausal considerations, like acausal trade, which might not favour neartermist work.)
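As a rough illustration of the kind of comparison behind point 1, here is a minimal back-of-envelope sketch. Every number in it is a hypothetical placeholder chosen purely for illustration, not a cited estimate; substitute your own cost-effectiveness figures and moral weights:

```python
# Back-of-envelope comparison of animal welfare vs global health work.
# All numbers are hypothetical placeholders, not cited estimates.

chicken_years_per_dollar = 40   # placeholder: chicken-years improved per $ by corporate campaigns
welfare_gain_per_year = 0.5     # placeholder: fraction of a bad year's suffering averted
moral_weight = 0.1              # placeholder: moral weight of a chicken-year vs a human-year

human_qalys_per_dollar = 0.01   # placeholder: ~$100 per QALY-equivalent via GiveWell top charities

# Convert the animal intervention into human-QALY-equivalents per dollar,
# then take the ratio against the global health benchmark.
animal_qaly_equiv_per_dollar = chicken_years_per_dollar * welfare_gain_per_year * moral_weight
ratio = animal_qaly_equiv_per_dollar / human_qalys_per_dollar

print(f"Animal work is ~{ratio:.0f}x global health per dollar under these assumptions")
```

Under these particular placeholders the ratio comes out around 200x; the point is only that the multiplier is driven almost entirely by the moral weight and welfare-gain assumptions, which is why the replies below discuss whether to do this math explicitly.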
Thank you for the comment; I agree wholeheartedly with point number 1. It didn't come up in this particular conversation because the person I was talking to wasn't considering the welfare of nonhuman animals (or the EV of pandemic prevention), though personally those are considerations I'm making, and I hope others make them as well. Do you think I should just do the math out in this post? (It'd be pretty simple, I think, though assuming a moral weight for nonhuman animals seems tricky.)
Point number 2 is very interesting; I haven't seen a write-up on this. Could you link any? It seems like this might make it worth somebody's time to get a good probability estimate of whether we're in a simulation (though I don't know how they'd go about that).
Also, pandemic prevention in particular may prevent far more human deaths in expectation than those averted through preventing extinction alone, because it also prevents non-extinction-level pandemics, so considering only extinction risk reduction might significantly understate its expected value. (But again, this assumes nonhuman animals don't flip the sign of the EV.)
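To make that concrete, here is a minimal expected-value sketch. The probabilities and death tolls below are hypothetical placeholders, not estimates from any source:

```python
# Illustrative EV of a pandemic-prevention intervention, split into an
# extinction channel and a sub-extinction channel.
# All probabilities and death tolls are hypothetical placeholders.

p_avert_extinction_pandemic = 1e-5   # placeholder: chance the intervention averts an extinction-level pandemic
extinction_deaths = 8e9              # roughly the current world population

p_avert_severe_pandemic = 1e-2       # placeholder: chance it averts a severe but survivable pandemic
severe_pandemic_deaths = 2e7         # placeholder: a COVID-scale death toll

ev_extinction_only = p_avert_extinction_pandemic * extinction_deaths
ev_total = ev_extinction_only + p_avert_severe_pandemic * severe_pandemic_deaths

print(f"Extinction channel only:  {ev_extinction_only:,.0f} expected deaths averted")
print(f"Including sub-extinction: {ev_total:,.0f} expected deaths averted")
```

With these placeholders the sub-extinction channel dominates (280,000 vs 80,000 expected deaths averted), which is the sense in which counting only extinction risk can understate the intervention's value.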
I don't think it's necessary to do the math with nonhuman animals in the post. You could just mention the considerations I raised and note that you would use different numbers and get different results for animal work. I suppose there could also be higher-leverage human-targeting neartermist work than ETG for GiveWell-recommended charities, too, and that could be worth mentioning. The fact that extinction risk reduction could be bad in the near term because of its impacts on nonhuman animals is a separate consideration from other neartermist work simply being better.
On 2, I don't think I've seen a formal write-up anywhere. I think Carl Shulman made this or a similar point in a comment somewhere, but it wasn't fleshed out, and I'm not sure that what I wrote is what he actually had in mind.