I’m skeptical of anchoring on people’s initial intuitions about cross-species tradeoffs as a default for moral weights, as there are strong reasons to expect that those intuitions are inappropriately biased. The weights I use are far from perfect and are not robust enough to allow confident conclusions to be drawn, but I do think they’re the best ones available for this kind of analysis by a decent margin.
There are a ton of judgement calls in coming up with moral weights. I’m worried about a dynamic where the people most interested in getting deep into these questions are people who already intuitively care pretty strongly about animals, and so the best weights available end up pretty biased.
I agree there’s such a problem. But I think it’s important to also point out that the same problem exists for people who think they “do not make judgement calls about moral weights”, but have nonetheless effectively made their own judgement calls in how they live their daily lives, in ways that incidentally affect animals (eating animals, living in buildings whose construction kills millions of animals, gardening that both harms and gives rise to many animals, etc.).
Also, I think it is equally, maybe more, important to recognize that people who make such judgement calls without explicitly thinking about moral weights, let alone undertaking tedious research projects, are people who intuitively care pretty little about animals, and so the “effective intuitions about moral weights” backing up their actions (intuitions, because they didn’t use research to back them up) end up pretty biased as well.
I think I intuitively worry more about the bias of those who do not feel particularly strongly about animals’ suffering (even suffering they themselves cause) than about the bias of those who care pretty strongly about animals. And of course, a disclaimer: I think I fall within the latter group.
Sure! I’d love to see a group of people who don’t start out caring about animals much more than average try to tackle this research problem. And then maybe an adversarial collaboration?
I just wrote up more on this here: Weighing Animal Worth.
Ah, interesting! I like both the terminology and the idea of “adversarial collaboration”. For instance, I think incorporating debates into this research might actually move us closer to the truth.
But I am also wary that if we use a classical way of deciding who wins a debate, the losing side would almost always be the group that assigned higher (even just slightly higher than average) “moral weights” to animals (not relative to humans, but relative to the debate opponent). So I think if we use debate as a way to push closer to the truth, we probably shouldn’t use the classical ways of deciding who wins.
Can you say more about what you mean by that?
I’m concerned about that dynamic too and think it’s important to keep in mind, especially in the general case of researchers’ intuitions tending to bias their work, even when attempting objectivity. However, I’m also concerned about the dismissal of results like RP’s welfare ranges on the basis of speculation about the researchers’ priors and/or the counterintuitive conclusions, rather than on the merits of the analyses themselves.
I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness have to do with sentience. They’re applied to animals, but I think they’re really more neuroscience/philosophy intuitions. The people who have the strongest/most out-of-the-ordinary intuitions are MIRI folk, not animal lovers.
“Best available” doesn’t imply that you should use them to produce a single first-order answer instead of, for example, inputting the extremes of a range of plausible values to see what changes. And even then, the analytic choices you make are both cruxes and deeply debated.
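To make that concrete, here’s a minimal sketch of the kind of sensitivity check I have in mind. Every number in it is a hypothetical placeholder rather than anyone’s actual estimate; the point is only the structure: run the comparison at both ends of the plausible range and see whether the verdict flips.

```python
# Toy sensitivity check: sweep a moral weight over the low and high ends of
# a plausible range and see whether the intervention comparison flips.
# All numbers below are hypothetical placeholders, not actual estimates.

chicken_weight_range = (0.002, 0.5)   # hypothetical plausible range, human = 1

human_welfare_per_dollar = 1.0        # benchmark human-focused intervention
chickens_helped_per_dollar = 10.0     # hypothetical animal-focused intervention

for label, w in zip(("low end", "high end"), chicken_weight_range):
    animal_value = chickens_helped_per_dollar * w   # human-equivalents per dollar
    verdict = ("animal intervention wins" if animal_value > human_welfare_per_dollar
               else "human intervention wins")
    print(f"{label}: weight={w} -> {animal_value:.3f} human-equivalents/$ ({verdict})")
```

With these placeholder numbers the conclusion flips between the two ends of the range, which is exactly the kind of fragility a single point estimate would hide.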
What if you weight them by number of neurons? (Though we don’t actually know whether capacity to generate qualia scales with neuron count; it could be that it’s easy to do, and we suffer no more than chickens or even ants, for example.)
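For concreteness, here’s a minimal sketch of what linear neuron-count weighting amounts to, assuming (and this is exactly the contested assumption) that moral weight scales linearly with neuron count. The counts are rough ballpark figures from the literature:

```python
# Neuron-count weighting under the contested assumption that moral weight
# scales linearly with neuron count. Counts are rough published estimates.

neuron_counts = {
    "human":   86_000_000_000,  # ~86 billion neurons
    "chicken":    220_000_000,  # ~220 million neurons
    "ant":            250_000,  # ~250 thousand neurons
}

human_neurons = neuron_counts["human"]
for species, count in neuron_counts.items():
    weight = count / human_neurons  # moral weight relative to a human
    print(f"{species}: {weight:.1e} of a human's moral weight")
```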