Marginal animal welfare cost effectiveness seems to robustly beat global health interventions. … Using welfare ranges based roughly on Rethink Priorities’ results
I don’t think this is as robust as it seems. One could easily have moral weights many orders of magnitude away from RP’s. For example, if you value one human more than the entire population of a beehive, that’s three orders of magnitude lower than what RP gives.
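To make that order-of-magnitude claim concrete, here is a rough back-of-the-envelope sketch in Python. The per-bee welfare range (~0.07 of a human’s) and the hive size (~20,000 bees) are assumed illustrative figures, roughly in the ballpark of RP’s median estimates rather than numbers quoted above.

```python
import math

# Assumed illustrative figures (not taken from the comment above):
rp_bee_weight = 0.07      # rough RP-style per-bee welfare range (human = 1)
bees_per_hive = 20_000    # rough population of a single beehive

# Under RP-style weights, one hive adds up to this many "human-equivalents":
hive_in_humans = rp_bee_weight * bees_per_hive        # ~1,400

# If instead you value one human above an entire hive, your implied
# per-bee weight is at most:
implied_bee_weight = 1 / bees_per_hive                # ~5e-5

# Gap between the two views, in orders of magnitude:
gap_ooms = math.log10(rp_bee_weight / implied_bee_weight)

print(f"One hive ≈ {hive_in_humans:.0f} humans under RP-style weights")
print(f"Gap ≈ {gap_ooms:.1f} orders of magnitude")
```

With those assumptions, RP-style weights price one hive at over a thousand humans, while “one human outweighs a hive” implies a per-bee weight roughly three orders of magnitude lower.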
The question is: how do you generate these weights otherwise?
The issue is, the way I’ve seen most people do it is basically to go: “the conclusion that animals have a capacity for pain similar to humans’ feels wrong, so, hm, let’s say they morally weigh 1,000 or 10,000 times less”.
The chosen number is often conveniently in the range where people don’t have to change their behavior on the topic. I’m skeptical of that.
For most people, the beehive example evokes a response close to ‘oh, this feels wrong, so the conclusion must be wrong’. They don’t consider the option ‘wow, despite being small, maybe bees have a capacity to feel love, and pleasure when they find flowers, make honey, and dance, and feel pain when their organs are destroyed by pesticides’, which may be just as likely.
Comparatively, RP’s work is the most complete I’ve seen on this topic.
Bees feel like an easy case for thinking RP might be wildly wrong in a way that doesn’t generalise to all animal interventions, since bees might not be conscious at all, whereas it’s much less likely that pigs or even chickens aren’t. (I’m actually a bit more sympathetic to pigs not being conscious than most people are, but I still think it’s >50% likely that they are conscious enough to count as moral patients.)