Thanks for all this, Hamish. For what it’s worth, I don’t think we did a great job communicating the results of the Moral Weight Project.
As you rightly observe, welfare ranges aren’t moral weights without some key philosophical assumptions. Although we did discuss the significance of those assumptions in independent posts, we could have done a much better job explaining how those assumptions should affect the interpretation of our point estimates.
Speaking of the point estimates, I regret leading with them: as we said, they’re really just placeholders in the face of deep uncertainty. We should have led with our actual conclusions, the basics of which are that the relevant vertebrates are probably within an OOM of humans, and that shrimps and the relevant adult insects are probably within two OOMs of the vertebrates. My guess is that you and I disagree less than you might think about the range of reasonable moral weights across species, even if the centers of my probability masses are higher than yours.
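To make the arithmetic concrete, here’s a minimal sketch of what those ranges imply, assuming (purely for illustration) that “within an OOM of humans” means a log-uniform welfare range between 0.1 and 1 relative to humans, and “within two OOMs of the vertebrates” means between 0.01 and 1 relative to the vertebrate draw. These are placeholder distributions I’ve chosen for the example, not the Moral Weight Project’s actual ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative placeholders only: log-uniform draws over the stated ranges,
# not the Moral Weight Project's actual mixture distributions.
# "Within an OOM of humans": 0.1x to 1x the human welfare range (human = 1).
vertebrate = 10 ** rng.uniform(-1, 0, n)
# "Within two OOMs of the vertebrates": 0.01x to 1x the vertebrate draw.
invertebrate = vertebrate * 10 ** rng.uniform(-2, 0, n)

for name, draws in [("vertebrate", vertebrate), ("shrimp/insect", invertebrate)]:
    lo, med, hi = np.percentile(draws, [5, 50, 95])
    print(f"{name}: median {med:.3f}, 90% interval [{lo:.3f}, {hi:.3f}] (human = 1)")
```

Even under these toy assumptions, the ranges overlap substantially, which is the sense in which I suspect our disagreement is smaller than it looks: it’s about where the probability mass sits within the ranges, not about the ranges themselves.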
I agree that our methodology is complex and hard to understand. But it would be surprising if there were a simple, easy-to-understand way to estimate the possible differences in the intensities of valenced states across species. Likewise, I agree that “there are tons of assumptions and simplifications that go into these RP numbers, so any conclusions we can draw must be low confidence.” But there are also tons of assumptions and biases that go into our intuitive assessments of the relative moral importance of various kinds of nonhuman animals. So, a lot comes down to how much stock you put in your intuitions. As you might guess, I think we have lots of reasons not to trust them once we take on key moral assumptions like utilitarianism. So, I take much of the value of the Moral Weight Project to be in the mere fact that it tries to reach moral weights from first principles.
It’s time to do some serious surveying to get a better sense of the community’s moral weights. I also think there’s a bunch of good work to do on the significance of philosophical/moral uncertainty here. If anyone wants to support this work, please let me know!
Thanks for responding to my hot takes with patience and good humour!
Your defenses and caveats all sound very reasonable.
the relevant vertebrates are probably within an OOM of humans
So given this, you’d agree with the conclusion of the original piece? At least if we take the “number of chickens affected per dollar” input as correct?
I agree with Ariel that OP should probably be spending more on animals (and I really appreciate all the work he’s done to push this conversation forward). I don’t know whether OP should allocate most neartermist funding to animal welfare, as I haven’t looked into lots of the relevant issues. Most obviously, while the return curves for at least some human-focused neartermist options are probably pretty flat (just think of GiveDirectly), the curves for various sorts of animal spending may drop precipitously. Ariel may well be right that, even if so, the returns probably don’t fall off so much that animal work loses to global health work, but I haven’t investigated this myself (see the sketch below for the shape of that question). The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now. (That being said, I’d love to see more extensive investigation into field building for animals! If EA field building in general is cost-competitive with other causes, then I’d expect animal field building to look pretty good.)
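As a toy illustration of the return-curve point, with every parameter hypothetical and chosen only to show the shape of the argument: if human-focused spending has roughly flat marginal returns while animal spending diminishes, the question is whether animal marginal returns stay above the flat benchmark across the whole extra $100M.

```python
# Hypothetical parameters, purely to illustrate the argument's shape;
# none of these numbers come from RP, OP, or any real cost-effectiveness estimate.
BENCHMARK = 1.0                 # flat human-focused returns: welfare units per $
INITIAL_MULTIPLE = 50.0         # assumed: animal work starts at 50x the benchmark
HALF_LIFE_DOLLARS = 30e6        # assumed: marginal returns halve every $30M spent

def animal_marginal(spend: float) -> float:
    """Marginal welfare per dollar after `spend` dollars, with exponential decay."""
    return BENCHMARK * INITIAL_MULTIPLE * 0.5 ** (spend / HALF_LIFE_DOLLARS)

for spend in [0, 25e6, 50e6, 100e6]:
    m = animal_marginal(spend)
    verdict = "still beats" if m > BENCHMARK else "falls below"
    print(f"after ${spend / 1e6:.0f}M: {m:.2f}x benchmark ({verdict} flat human-focused returns)")
```

Under these made-up numbers the curve stays above the benchmark even at $100M; with a steeper decay or a smaller initial multiple it wouldn’t. Which regime we’re actually in is exactly the empirical question I haven’t investigated.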
I should also say that OP’s commitment to worldview diversification complicates any conclusions about what OP should do from its own perspective. Even if it’s true that a straightforward utilitarian analysis would favor spending a lot more on animals, it’s pretty clear that some key stakeholders have deep reservations about straightforward utilitarian analyses. And because worldview diversification doesn’t include a clear procedure for generating a specific allocation, it’s hard to know what people who are committed to worldview diversification should do by their own lights.
The upshot: I have no idea whether there are good ways of spending an additional $100M on animals right now.
I haven’t read this in a ton of detail, but I liked this post from last year trying to answer this exact question (what are potentially effective ways to deploy >$10M in projects for animals).