That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
Yeah, I think there are a bunch of different ways to answer this question, and active research on it, but I feel like the answer here does indeed depend on empirical details and there is no central guiding principle that we are confident in that gives us one specific answer.
Like, I think the correct defense is to just be straightforward and say “look, I think different people are basically worth the same, since cognitive variance just isn’t that high”. I just don’t think there is a core principle of EA that would prevent someone from believing that people with a substantially different cognitive makeup deserve more or less moral consideration (though the game theory here also often makes it so that you should still trade with them in a way that evens things out, though that’s not guaranteed).
I personally don’t find hedonic utilitarianism very compelling (and I think this is true for a lot of EA), so I am not super interested in valence-based approaches to answering this question, though I am still glad about the work Rethink is doing, since I think it helps me think about how to answer this question in general.
Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant nevertheless describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.
Just to clarify, I am a utilitarian, approximately, just not a hedonic utilitarian.