I feel like wading into this debate is likely to be emotionally charged and counterproductive, but I think it is reasonable to have a good deal of “moral uncertainty” when it comes to doing interspecies comparisons, whereas there’d be much less uncertainty (though still some) when comparing between humans (e.g., is a pregnant person worth more? Is a healthy young person worth more than an 80-year-old in a coma?).
For example, one leading view would be that one chicken has equal worth to one human. Another view would be to discount the chicken by its brain size relative to humans, which would imply a value of 300 chickens per human. There are also many views in between and I’m uncertain which one to take.
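To make the brain-size discount concrete, here’s a minimal sketch in Python. The brain masses (roughly 1,350 g for a human, 4.5 g for a chicken) are illustrative assumptions picked to reproduce the ~300:1 figure above, and the function name is mine, not anyone’s actual model:

```python
# Illustrative brain-size discounting. Masses are rough assumptions
# (~1,350 g human brain, ~4.5 g chicken brain), chosen to yield ~300:1.
HUMAN_BRAIN_G = 1350.0
CHICKEN_BRAIN_G = 4.5

def brain_size_weight(animal_brain_g: float, human_brain_g: float = HUMAN_BRAIN_G) -> float:
    """Moral weight of one animal relative to one human under a linear
    brain-size discount. The equal-worth view would just return 1.0."""
    return animal_brain_g / human_brain_g

chickens_per_human = 1 / brain_size_weight(CHICKEN_BRAIN_G)
print(f"{chickens_per_human:.0f} chickens per human")  # -> 300
```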
Sure, such moral calculus may seem very crude, but it does not judge the animal merely by species.
I’m not objecting to having moral uncertainty about animals. I’m objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say “It depends on how much you value them” rather than discussing how much we should value them.
I didn’t intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals “is likely to be emotionally charged and counterproductive”—an attitude I think is widespread given how little I’ve seen this issue discussed—strikes me as another example of EAs’ inconsistency when it comes to animals. No EA hesitates to debate, say, someone’s preference for Christians over Muslims. So why are we afraid to debate preferences among species?
FWIW, I agree that there probably exist objective facts about how to value different animals relative to each other, and people who claim to value 1 hour of human suffering the same as 1000 hours of chicken suffering are just plain wrong. But it’s pretty hard to convince people of this, so I try to avoid making arguments that rely on claiming high parity of value between humans and non-human animals. If you’re trying to make an argument, you should avoid making assumptions that many readers will disagree with, because then you’ll just lose people.
I took it that the point by Jesse was about how one should frame these issues, not that one should assume a high parity of value between human and nonhuman animals or whatever. The idea is only that these value judgements are properly subject to rational argument and should be framed as if they are.
An aside: meta-ethics entered the discussion a bit unhelpfully here and below. It can be true that one ought to value future generations/nonhuman animals a certain way on a number of anti-realist views (subjectivism, versions of non-cognitivism). Further, it’s reasonable to hold that one can rationally argue over moral propositions, even if every moral proposition is false (error theory), in the same way that one can rationally argue over an aesthetic proposition, even if every aesthetic proposition is false. One can still appeal to reasons for seeing or believing a given way in either case. Of course, one will understand those reasons differently than the realist would, but the upshot is that the ‘first-order’ practice is left untouched. On the plausible moral anti-realist theories, our first-order moral practices will remain largely untouched, in the same way that, on most normative anti-realist theories concerning claims like ‘one ought to believe that x’ or ‘one ought to do x’, our relevant first-order practices will remain largely untouched.
People can discuss the reasons that they have certain moral or aesthetic preferences. They may even change their mind as a result of these discussions. But there’s nothing irrational about holding a certain set of preferences, so I object to EAs saying that particular preferences are right or wrong, especially if there’s significant disagreement.
But there’s nothing irrational about holding a certain set of preferences,
Sure there can be. As trivial cases, people could have preferences that violate the von Neumann–Morgenstern (VNM) axioms. But usually when we talk about morality we don’t think that merely satisfying the weakest kind of rationality is sufficient for a justified ethical system.
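For instance, intransitive preferences violate the VNM transitivity axiom. A minimal sketch, with purely hypothetical preference data:

```python
from itertools import permutations

# Hypothetical agent: prefers A to B, B to C, and C to A (a cycle).
# prefers[x] is the set of options x is strictly preferred to.
prefers = {"A": {"B"}, "B": {"C"}, "C": {"A"}}

def transitivity_violations(prefers):
    """Return triples (x, y, z) with x > y and y > z but not x > z."""
    return [(x, y, z)
            for x, y, z in permutations(prefers, 3)
            if y in prefers[x] and z in prefers[y] and z not in prefers[x]]

print(transitivity_violations(prefers))
# -> [('A', 'B', 'C'), ('B', 'C', 'A'), ('C', 'A', 'B')]
```

An agent with such a cycle can be money-pumped: it will pay to trade C for B, B for A, and A for C, ending where it started but poorer. That’s the sense in which such preferences are irrational even on the weakest standard.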