This sort of work is very sensitive to your choices for moral weights, and while I do appreciate you showing your input weights clearly in a table I think it’s worth emphasizing up front how unusual they are. For example, I’d predict an overwhelming majority of humans would rather see an extra year of good life for one human than four chickens, twelve carp, or thirty three shrimp. And, eyeballing your calculations, if you used more conventional moral weights your bottom-line conclusion would be that net global welfare was positive and increasing.
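To make that sensitivity concrete, here's a minimal sketch of the kind of weighted sum involved. All populations and welfare levels below are hypothetical placeholders, not the post's actual inputs; the point is only that the sign of the total turns on the weights:

```python
# Net global welfare as a moral-weight-dependent sum across species.
# Populations (billions alive at a time) and average welfare per
# individual-year are PLACEHOLDERS, not the post's actual figures.
SPECIES = {
    "human":   (8.1,   +0.5),
    "chicken": (26.0,  -0.6),
    "carp":    (10.0,  -0.4),
    "shrimp":  (230.0, -0.3),
}

def net_welfare(weights):
    """Weighted sum of population * average welfare across species."""
    return sum(pop * avg * weights[sp] for sp, (pop, avg) in SPECIES.items())

# Weights in the spirit of the post: ~1 human-year : 4 chicken-years
# : 12 carp-years : 33 shrimp-years.
post_style   = {"human": 1.0, "chicken": 1/4,  "carp": 1/12, "shrimp": 1/33}
# "Conventional" weights where humans dominate by orders of magnitude
# (again placeholders).
conventional = {"human": 1.0, "chicken": 1e-4, "carp": 1e-5, "shrimp": 1e-6}

print(net_welfare(post_style))    # ~ -2.3: negative under post-style weights
print(net_welfare(conventional))  # ~ +4.0: flips positive under conventional ones
```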
I’m skeptical of anchoring on people’s initial intuitions about cross-species tradeoffs as a default for moral weights, as there are strong reasons to expect that those intuitions are inappropriately biased. The weights I use are far from perfect and are not robust enough to allow confident conclusions to be drawn, but I do think they’re the best ones available for this kind of analysis by a decent margin.
There are a ton of judgement calls in coming up with moral weights. I’m worried about a dynamic where the people most interested in getting deep into these questions are people who already intuitively care pretty strongly about animals, and so the best weights available end up pretty biased.
I agree there’s such a problem. But I think it is important to also point out that the same problem exists for people who tend to think they “do not make judgement calls about moral weights”, but who have nonetheless effectively made their own judgement calls in how they live their daily lives, in ways that “by the way” affect animals (eating animals, living in buildings whose construction kills millions of animals, gardening in ways that harm and give rise to many animals, etc.).
Also, I think it is equally, maybe more, important to recognize that the people who make such judgement calls without explicitly thinking about moral weights, let alone undertaking tedious research projects, are people who intuitively care pretty little about animals, and so the “effective intuitions about moral weights” backing up their actions (intuitive because they didn’t want to use research to back them up) end up pretty biased too.
I think I intuitively worry more about the bias of those who do not feel particularly strongly about animals’ suffering (even the suffering they cause), than about the bias of those who care pretty strongly about animals. And of course, disclaimer: I think I lie within the latter group.
Sure! I’d love to see a group of people who don’t start out caring about animals much more than average try to tackle this research problem. And then maybe an adversarial collaboration?
I just wrote up more on this here: Weighing Animal Worth.
Ah, interesting! I like both the terminology and the idea of “adversarial collaboration”. For instance, I think incorporating debates into this research might actually move us closer to the truth.
But I am also wary that if we use a classical way of deciding who wins a debate, the losing side would almost always be the group that assigned higher (even just slightly higher than average) “moral weights” to animals (not relative to humans, but relative to the debate opponent). So I think, if we use debate as a way to push closer to the truth, we probably shouldn’t use the classical ways of deciding who wins.
Can you say more about what you mean by that?
I’m concerned about that dynamic too and think it’s important to keep in mind, especially in the general case of researchers’ intuitions tending to bias their work, even when attempting objectivity. However, I’m also concerned about the dismissal of results like RP’s welfare ranges on the basis of speculation about the researchers’ priors and/or the counterintuitive conclusions, rather than on the merits of the analyses themselves.
I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness have to do with sentience. They’re applied to animals, but I think they’re really more neuroscience/philosophy intuitions. The people who have the strongest/most out-of-the-ordinary intuitions are MIRI folk, not animal lovers.
“Best available” doesn’t imply that you should use them to create a first-order answer instead of, for example, inputting the extremes of a range of plausible values to see what changes. And even then, the analytic choices you make are both cruxes and deeply debated.
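Concretely, the kind of robustness check I have in mind, reusing the same weighted-sum structure with placeholder inputs and sweeping each weight across the endpoints of a plausible range:

```python
from itertools import product

# Placeholder populations (billions) and per-year welfare levels.
SPECIES = {"human": (8.1, +0.5), "chicken": (26.0, -0.6),
           "carp": (10.0, -0.4), "shrimp": (230.0, -0.3)}

# Endpoints of a plausible moral-weight range per species (human fixed at 1).
RANGES = {"human": [1.0], "chicken": [1e-4, 1/4],
          "carp": [1e-5, 1/12], "shrimp": [1e-6, 1/33]}

signs = set()
for combo in product(*RANGES.values()):
    weights = dict(zip(RANGES, combo))
    total = sum(pop * avg * weights[sp] for sp, (pop, avg) in SPECIES.items())
    signs.add("positive" if total > 0 else "negative")

print(signs)  # both signs appear: the bottom line isn't robust to the weights
```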
What if you weight them by number of neurons? (Though we don’t actually know whether capacity to generate qualia scales with neuron count; it could be that it’s easy to do, and we suffer no more than chickens or even ants, for example.)
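For reference, here's what linear neuron-count weighting gives. The counts are rough order-of-magnitude figures from the literature, and the linearity assumption is exactly what the parenthetical questions:

```python
# Rough neuron counts (order-of-magnitude estimates from the literature).
NEURONS = {"human": 86e9, "chicken": 2.2e8, "ant": 2.5e5}

# Moral weights under the contested assumption that capacity for welfare
# scales linearly with neuron count, normalized so a human = 1.
weights = {sp: n / NEURONS["human"] for sp, n in NEURONS.items()}
for sp, w in weights.items():
    print(f"{sp}: {w:.1e}")
# chicken ~ 2.6e-03, i.e. ~390 chickens per human -- far from the ~4:1
# ratio discussed above, which is why this modeling choice matters so much.
```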
I’d be very skeptical as well of the views of the majority of humans, since we tend to be extremely biased in favor of our own species, for evolutionary, cultural, and biological reasons. We also benefit directly from a society that treats humans well, and benefit directly from animal exploitation. Some studies indicate that we assign a lower moral weight to cows when there’s beef for lunch.
Plus, few people have thought about the topic seriously, and we are just pretty bad at imagining the happiness of other beings. We put the moral weight of a dog much higher than that of a pig (despite pigs being smarter than dogs), because we are closer to dogs.
There’s also a strong social stigma against those who dare to suggest otherwise.
Imagine that we were able to ask carp how much they’d weigh their own lives compared to those of humans. It would be pretty unlikely that they’d say “well, I disagree with your 12 to 1 human/carp ratio; I rather think it’s worth sacrificing a hundred of us for one human life, definitely”.
I think the question “would you rather see one additional human life-year or 3 chicken life-years” conflates the hedonic comparison with special obligations to help human beings. One might prefer human experiences to non-human experiences even when they are hedonically equivalent, because of special obligations. If we’re exclusively interested in welfare, I think a better thought experiment would be how you would feel about having these experiences yourself.
If God offered you an opportunity to have an extra year of average human life, and on top of that, 1 year of average layer hen life, 1 year of average broiler chicken life, 10 years of average farmed fish life, and 100 years of farmed shrimp life, would you accept that offer? Of course that experiment is too artificial, but people go through extreme illnesses that cause them to have mental capacities similar to a chicken’s. I sometimes think about how afraid I would be of being reincarnated after my death, going through some mental changes to make my mental capacities equivalent to those of a chicken, and then going through all the average chicken experiences. I personally wouldn’t take that risk in exchange for one additional year of human life.
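To make the offer’s arithmetic explicit, here’s a sketch using the weight ratios quoted above (1 human-year ≈ 4 chicken-years ≈ 12 carp-years ≈ 33 shrimp-years, treating farmed fish at the carp ratio), with placeholder welfare levels for the farmed-animal years; the actual levels would come from the post:

```python
# Hedonic value of accepting the offer: sum of
#   years * (average welfare per year) * (moral weight relative to a human).
# Welfare levels are PLACEHOLDERS; weights follow the ratios quoted above.
bundle = [
    # (component, years, placeholder welfare/yr, weight vs. human)
    ("extra human year",     1,   +1.0, 1.0),
    ("layer hen year",       1,   -0.5, 1 / 4),
    ("broiler chicken year", 1,   -0.6, 1 / 4),
    ("farmed fish years",    10,  -0.3, 1 / 12),
    ("farmed shrimp years",  100, -0.2, 1 / 33),
]
total = sum(years * welfare * weight for _, years, welfare, weight in bundle)
print(f"net value of accepting: {total:+.2f} human-equivalent good years")
# With these placeholders: 1 - 0.125 - 0.15 - 0.25 - 0.61 ~ -0.13,
# i.e. the added animal years can outweigh the extra human year.
```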
I don’t think that thought experiment works for me: what would it even mean for a human to experience a year of chicken life?
Yeah, I agree that it is not the most natural and straightforward thought experiment. Unfortunately, hedonic comparisons make the most sense to me when I can ask “would I prefer experience A or B”, and asking this question is much more difficult when you try to compare experiences for animals.
But it should at least be physically imaginable for me to be lobotomised down to mental capacities equivalent to those of a chicken. I’m much less likely to care about what happens to future me if my mental capacities were altered to be similar to those of an ant. But if my brain were altered to be similar to a chicken’s brain, I’m much more afraid of being boiled alive, being crammed in a cage, etc.
There are two factors mixed up here: @kyle_fish writes about an (objective) amount of animal welfare. The concept @Jeff Kaufman refers to instead includes the weight we humans put on animals’ welfare. For a meaningful conversation about the topic, we should not mix these two up.
Let’s briefly assume a parallel world with humans2: just like us, except they simply never cared about animals at all (weight = 0). Concluding “we thus have no welfare problem” is indeed the logical conclusion for humans2, but it would not suffice to inform a genetically mutated human2x who happened to have developed care about animal welfare, or who simply happened to be curious about absolute welfare in their universe.
In the same vein: there’s no strict need to account for typical humans’ concern when analyzing whether “net global welfare may be negative” (the title!). On the contrary, doing so would introduce an unnecessary bias that just comes on top of the analysis’ necessarily huge uncertainty (which the author does not fail to emphasize, although, as others comment, it could deserve even stronger emphasis).
I just conducted a survey (representative sample of the Belgian adult population), according to which most people believe the welfare range of a bird is equal to the welfare range of a human, and the welfare level of a broiler chicken is negative. https://forum.effectivealtruism.org/posts/MP4rNBu6ftG4QE3nL/the-suffering-of-a-farmed-animal-is-equal-in-size-to-the
Hence, Kyle’s estimates of the welfare of farmed animals, based on Rethink Priorities’ median welfare range estimates, are underestimates according to most people (in Belgium). Most people would have to conclude that net global welfare is even more negative and declining even faster than Kyle’s calculation suggests.
Agreed narrowly, but as I commented, I think the analysis is sensitive to many, many more factors.