Thank you for the comment; I agree wholeheartedly with point number 1. It didn’t come up in this particular conversation because the person I was talking to wasn’t considering the welfare of nonhuman animals (or the EV of pandemic prevention), though personally those are considerations I’m making, and I hope that others make as well. Do you think I should just do the math out in this post? (It’d be pretty simple, I think, though assuming a moral weight for nonhuman animals seems tricky.)
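For concreteness, here is a minimal sketch of the kind of math I have in mind. Every number below is a placeholder assumption chosen only to show the shape of the calculation, not a real estimate:

```python
# Back-of-envelope EV sketch. All inputs are placeholder assumptions,
# not actual estimates; the point is only the structure of the calculation.

human_deaths_averted = 1e6     # assumed human deaths averted by the intervention
animal_welfare_delta = -5e9    # assumed net change in animal welfare (in animal-life-year units);
                               # negative here, since the sign is exactly what's in question
moral_weight_animal = 0.01     # assumed moral weight of one animal unit vs. one human unit

# Net EV in human-life-equivalents:
ev = human_deaths_averted + animal_welfare_delta * moral_weight_animal
print(f"Net EV: {ev:,.0f}")   # with these placeholders: -49,000,000 (the animal term flips the sign)
```

The tricky part, as noted, is `moral_weight_animal`: the result is extremely sensitive to it, and the other two inputs are contested too.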
Point number 2 is very interesting; I haven’t seen a write-up on this. Could you link any? It seems like this might make it worth somebody’s time to get a good probability on whether or not we’re in a simulation (though I don’t know how they’d do it).
Also, pandemic prevention in particular may avert far more human deaths in expectation than extinction risk alone would suggest, because it also prevents non-extinction-level pandemics, so considering only extinction risk reduction might significantly understate its value. (But again, this assumes nonhuman animals don’t flip the sign of the EV.)
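To illustrate that comparison with made-up numbers (again, placeholders only, chosen purely to show why the non-extinction term can dominate):

```python
# Illustrative only: every probability and death toll below is a placeholder
# assumption, not an estimate.

p_extinction_averted = 1e-6       # assumed reduction in extinction probability
extinction_deaths = 8e9           # current human population, as a floor (ignores future people)

p_pandemic_averted = 1e-3         # assumed reduction in probability of a large,
                                  # non-extinction-level pandemic
pandemic_deaths = 2e7             # assumed death toll of such a pandemic

ev_extinction = p_extinction_averted * extinction_deaths   # 8,000 expected deaths averted
ev_non_extinction = p_pandemic_averted * pandemic_deaths   # 20,000 expected deaths averted

print(ev_extinction, ev_non_extinction)
# With these placeholders, the non-extinction term is the larger one.
```

(Counting future people in `extinction_deaths` could easily reverse this, which is part of why the framing matters.)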
I don’t think it’s necessary to do the math with nonhuman animals in the post. You could just mention the considerations I raise and note that you would use different numbers, and get different results, for animal work. I suppose there could also be higher-leverage human-targeting neartermist work than ETG for GiveWell-recommended charities, and that could be worth mentioning too. The fact that extinction risk reduction could be bad in the near term because of its impacts on nonhuman animals is a separate consideration from other neartermist work simply being better.
On 2, I don’t think I’ve seen a formal write-up anywhere. I think Carl Shulman made this or a similar point in a comment somewhere, but it wasn’t fleshed out there, and I’m not sure that what I wrote is what he actually had in mind.