My sequence might also be helpful. I didn’t come up with too many directly useful estimates, but I looked into implications of desire-based and preference-based theories for moral weights and prioritization, and I would probably still prioritize nonhuman animals on such views. I guess most importantly:
For endorsed/reflective/cognitive/belief-like desires or preferences, like life satisfaction and responses to hypotheticals like QALY tradeoff questions, I’m pretty skeptical of interpersonal utility comparisons in general, even between humans. I’m somewhat skeptical of comparisons for hedonic states between different species. I’m sympathetic to comparisons for “felt desires” across species, based on how attention is affected (motivational salience) and “how much attention” different beings have.[1] (More here, partly in footnotes)
Perhaps surprisingly and controversially, I suspect many animals have simple versions of endorsed/reflective/cognitive/belief-like desires or preferences. It’s not obvious they matter (much) less for being simpler, but this could go either way. (More here and here)
Humans plausibly have many more preferences and desires, and about many more things, than other animals, but it’s not clear this dramatically favours humans.
If we measure the intensity of preferences and desires by their effects on attention, then the number of them doesn’t really seem to matter. Often our preferences and desires are dominated by a few broad terminal ones, like spending time with loved ones and their welfare, being happy and free from suffering, and career aspirations.
I’m not aware of particularly plausible/attractive ways to ground interpersonal comparisons otherwise.
Normalization approaches that don’t ground interpersonal comparisons usually don’t even favour humans at all, but some specific ones might.
Uncertainty about moral weights favours nonhumans, because we understand and value things by reference to our own experiences, so we should normalize moral weights by the value we assign to our own experiences and can take expected values over that (More here).
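To make that concrete, here’s a minimal sketch with made-up probabilities and weights (nothing from the sequence itself): fixing the value of our own experience at 1 and taking expected values over the uncertainty pulls the nonhuman’s expected weight toward the higher hypothesis, while normalizing by the nonhuman’s experience instead would pull the other way.

```python
# Minimal sketch with made-up numbers (illustrative only, not estimates):
# how normalizing by the value of our own (human) experience and taking
# expected values over uncertain moral weights tends to favour nonhumans.

p = 0.5                 # 50/50 split between two hypotheses
low, high = 0.01, 1.0   # chicken-to-human moral weight under each hypothesis

# Fixing the human's experience at 1, the chicken's expected weight is
# dominated by the higher hypothesis:
ev_chicken_per_human = p * low + p * high              # 0.505

# Fixing the chicken's experience at 1 instead, the human's expected weight
# is dominated by the hypothesis where humans count for much more,
# implying a much lower chicken weight:
ev_human_per_chicken = p * (1 / low) + p * (1 / high)  # 50.5
implied_chicken_weight = 1 / ev_human_per_chicken      # ~0.02

print(ev_chicken_per_human, implied_chicken_weight)
```

The point is just that the result depends on which side is fixed at 1 before taking expectations, and fixing our own side favours the nonhuman.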
We could assume that how much we believe (or act like) our own suffering (or hedonic states or felt desires) matters is proportional to the intensity of our suffering (e.g. based on attention), across moral patients, including humans and other animals. I could see humans coming out quite far ahead this way, based on things like how much parents care about their children, people’s ethical beliefs (utilitarian, deontological, religious), other important goals, and people’s apparently greater willingness to suffer for these than other animals’ willingness to suffer for anything.
There’s some intuitive appeal to this approach, but the motivating assumption seems probably wrong to me, and reasonably likely not to even be justifiable as a rough approximation.[2]
It also could lead to large discrepancies between humans, because some humans are much more willing to suffer for things than others. The most fanatical humans might dominate. That could be pretty morally repugnant.
The quantity of attention, in roughly the most extreme case in my view, could scale proportionally with the number of (relevant) neurons, so humans would have, as a first guess, ~400 times as much moral weight as chickens. OTOH, I’d actually guess there are decreasing marginal returns to additional neurons, e.g. it could scale more like the logarithm or the square root of the number of neurons. And it might not really scale with the number of neurons at all.
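For a rough sense of how much the scaling assumption matters, here’s a quick calculation using ballpark whole-brain neuron counts (~86 billion for humans, ~220 million for chickens); these figures, and whether whole-brain counts are even the relevant ones, are assumptions for illustration only.

```python
import math

# Rough illustration (ballpark whole-brain neuron counts; the "relevant"
# neuron counts could differ): implied human:chicken ratios under
# different scaling assumptions for the quantity of attention.
human_neurons = 86e9      # ~86 billion
chicken_neurons = 2.2e8   # ~220 million

proportional = human_neurons / chicken_neurons                        # ~390
square_root = math.sqrt(human_neurons) / math.sqrt(chicken_neurons)   # ~20
logarithmic = math.log(human_neurons) / math.log(chicken_neurons)     # ~1.3

print(f"proportional: ~{proportional:.0f}x")
print(f"square root:  ~{square_root:.0f}x")
print(f"logarithm:    ~{logarithmic:.1f}x")
```

So the choice of scaling function changes the implied ratio by more than two orders of magnitude.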
People probably just have different beliefs about how much their own suffering matters, and these beliefs are plausibly not interpersonally comparable at all.
Some people may find it easier to reflectively dismiss or discount their own suffering than others for various reasons, like particular beliefs or greater self-control. If interpersonal comparisons are warranted, it could just mean these people care less about their own suffering in absolute terms on average, not that they care more about other things than average. Other animals probably can’t easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires, so they might even care more about their own suffering in absolute terms on average.
We can also imagine moral patients with conscious preferences who can’t suffer at all, so we’d have to find something else to normalize by to make interpersonal comparisons with them.