I take issue with the statement “it depends greatly on how much you value a human compared to a nonhuman animal”. Similar things are often said by EAs in discussions of animal welfare. This makes it seem as if the value one places on nonhumans is a matter of taste, rather than a claim subject to rational argument. The statement should read “it depends greatly on how much we ought to value a human compared to a nonhuman”.
Imagine if EAs went around saying “it depends on how much you value an African relative to an American”. Maybe there is more reasonable uncertainty about between- as opposed to within-species comparisons, but still we demand good reasons for the value we assign to different kinds of humans. This idea is at the core of Effective Altruism. We ought to do the same with non-human sentients.
I feel like wading into this debate is likely to be emotionally charged and counterproductive, but I think it is reasonable to have a good deal of “moral uncertainty” when it comes to interspecies comparisons, whereas there’d be much less uncertainty (though still some) when comparing between humans (e.g., is a pregnant person worth more? Is a healthy young person worth more than an 80-year-old in a coma?).
For example, one leading view would be that one chicken has equal worth to one human. Another view would be to discount the chicken by its brain size relative to a human’s, which would imply that one human is worth roughly 300 chickens. There are also many views in between, and I’m uncertain which one to take.
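For concreteness, here is a minimal sketch of the brain-size discounting arithmetic, assuming rough brain masses of about 1,350 g for an adult human and 4 g for a chicken (illustrative figures of mine, not numbers from the comment above):

```python
# A rough, hypothetical illustration of the brain-size discounting view.
# Brain masses below are approximate assumed figures, not claims from the thread.
human_brain_g = 1350   # approximate adult human brain mass in grams
chicken_brain_g = 4    # approximate chicken brain mass in grams

# Discounting a chicken's moral weight by relative brain size gives the
# implied exchange rate between chickens and humans.
chickens_per_human = human_brain_g / chicken_brain_g
print(round(chickens_per_human))  # 338, i.e. on the order of 300 chickens per human
```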
Sure, such moral calculus may seem very crude, but it does not judge the animal merely by species.
I’m not objecting to having moral uncertainty about animals. I’m objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say “It depends on how much you value them” rather than discussing how much we should value them.
I didn’t intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals “is likely to be emotionally charged and counterproductive”—an attitude I think is widespread given how little I’ve seen this issue discussed—strikes me as another example of EAs’ inconsistency when it comes to animals. No EA hesitates to debate, say, someone’s preference for Christians over Muslims. So why are we afraid to debate preference among species?
FWIW, I agree that there probably exist objective facts about how to value different animals relative to each other, and people who claim to value 1 hour of human suffering the same as 1000 hours of chicken suffering are just plain wrong. But it’s pretty hard to convince people of this, so I try to avoid making arguments that rely on claiming high parity of value between humans and non-human animals. If you’re trying to make an argument, you should avoid making assumptions that many readers will disagree with, because then you’ll just lose people.
I took it that the point by Jesse was about how one should frame these issues, not that one should assume a high parity of value between human and nonhuman animals or whatever. The idea is only that these value judgements are properly subject to rational argument and should be framed as if they are.
An aside: meta-ethics entered the discussion unhelpfully here and below. It can be true that one ought to value future generations/nonhuman animals a certain way on a number of anti-realist views (subjectivism, versions of non-cognitivism). Further, it’s reasonable to hold that one can rationally argue over moral propositions even if every moral proposition is false (error theory), in the same way that one can rationally argue over an aesthetic proposition even if every aesthetic proposition is false. One can still appeal to reasons for seeing or believing a given way in either case. Of course, one will understand those reasons differently than the realist does, but the upshot is that the ‘first-order’ practice is left untouched. On the plausible moral anti-realist theories, our first-order moral practices remain largely untouched, just as, on most normative anti-realist theories concerning claims like ‘one ought to believe that x’ or ‘one ought to do x’, our relevant first-order practices remain largely untouched.
People can discuss the reasons that they have certain moral or aesthetic preferences. They may even change their mind as a result of these discussions. But there’s nothing irrational about holding a certain set of preferences, so I object to EAs saying that particular preferences are right or wrong, especially if there’s significant disagreement.
But there’s nothing irrational about holding a certain set of preferences,
Sure there can be. As a trivial case, people could have preferences that violate the von Neumann–Morgenstern (VNM) axioms, e.g., intransitive preferences, as in the sketch below. But usually when we talk about morality we don’t think that merely following the weakest kind of rationality is sufficient for a justified ethical system.
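To make that concrete, here is a minimal sketch of how cyclic (intransitive) preferences violate the VNM transitivity axiom and expose their holder to a classic money pump; the items, preferences, and fee are purely hypothetical:

```python
# Hypothetical sketch: cyclic preferences A > B > C > A violate the VNM
# transitivity axiom. An agent with such preferences can be "money pumped":
# it pays a small fee for every trade it strictly prefers, yet ends up
# holding what it started with, strictly poorer.

prefers = [("A", "B"), ("B", "C"), ("C", "A")]  # (x, y) means x is preferred to y

def money_pump(holding, fee=1.0, trades=6):
    wealth = 0.0
    for _ in range(trades):
        # The agent accepts any trade that swaps its current item
        # for one it strictly prefers, paying the fee each time.
        for better, worse in prefers:
            if worse == holding:
                holding, wealth = better, wealth - fee
                break
    return holding, wealth

print(money_pump("C"))  # ('C', -6.0): back where it started, 6 units poorer
```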
Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree. Some strict definitions of utilitarianism would require one to equally value animal and human suffering, discounting for some metric of consciousness (though I actually roughly agree with Brian Tomasik that calling something conscious is a value judgment, not an empirical claim). But many EAs aren’t strict utilitarians.
EAs can have strong opinions about how the message of EA should be presented. For example, I think EA should discourage valuing the life of an American 1000x that of a foreigner, or valuing animal suffering at 0. But nitpicking over subjective values seems counterproductive.
Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree.
Metaethical claims don’t have a strong hold on normative issues. We can rationally disagree as moral realists to the extent that we have reasons for or against various moral principles. Anti-realists can disagree to the same extent based on their own reasons for or against moral principles, but it’s not obvious to me that they have any basis for rationally holding a range of moral principles which is wider than that which is available to the moral realist. At the very least, that’s not how prominent anti-realist moral philosophers seem to think.
The realism vs anti-realism debate is about how to construe normative claims, not about which normative claims are justified or not. Taking the side of anti-realism doesn’t provide a ticket to select values arbitrarily or based on personal appeal.
There’s no basis in anti-realism for saying that a moral system is objectively unjustified, no matter how reprehensible. “Justified” and similar words are value judgments, and anti-realism doesn’t accept the existence of objective value. Like, when you say “doesn’t provide a ticket”, that implies requiring permission. Permission from whom or what?
Anti-realism is such an uncomfortable philosophy that people are often unwilling or unable to accept its full implications.
Moral particularism isn’t explicitly anti-realist, but it is very compatible with it: https://en.wikipedia.org/wiki/Moral_particularism
There’s no basis in anti-realism for saying that a moral system is objectively unjustified, no matter how reprehensible.
There is certainly enough basis in anti-realism for saying that moral systems are unjustified. I’m not sure what you mean by “objectively” unjustified—certainly, anti-realists can’t claim that certain moral systems are true while others are false. But that doesn’t imply that ethics is arbitrary. The right moral system, according to anti-realists, could be one that fits our properly construed intuitions; one that is supported by empirical evidence; one that is grounded in basic tenets of rationality; or one picked out by some other criterion—just like moral realists say.
Certainly it’s possible for the anti-realist to claim “it is morally right to torture babies, because I feel like it,” just like it’s also possible for the realist to claim the same thing. And both of them will (probably) be making claims that don’t make a whole lot of sense and are easy to attack.
And certainly there are plenty of anti-realists who make claims along the lines of “I believe X morality because I am selfish and it’s the values I find appealing,” or something of the sort; but that’s simply bad philosophy which lacks justification. Actual anti-realist philosophers don’t think that way.
Like, when you say “doesn’t provide a ticket”, that implies requiring permission. Permission from whom or what?
It means that the anti-realist is leaving out some key element that they intend their moral claims to include. They probably intend that their moral system is compatible with human intuitions, or that it is grounded in rationality, or whatever it is that they think provides the basis for morality (just as moral realists have similar criteria). And when the anti-realist makes such a moral claim, we point out: hey, your moral claims don’t make sense, what reasons do you have to follow them?
Obviously the anti-realist could say that they don’t care whether their morality is rational, or justified, or intuitive, or whatever it is they have as the basis for morality. But the moral realist can do a similar thing: I could say that I simply don’t care whether my moral principles are correct or not. In both cases, there’s no way to literally force them to change their beliefs, but you have exposed their reasoning as faulty.
Here are some Reddit threads that might explain it better than I can (I agree with some of the commenters that ethics and metaethics are not completely separate; however, I still don’t think that ethics under anti-realism is as different as you say it is):
https://www.reddit.com/r/askphilosophy/comments/3qh90s/whats_the_relationship_between_metaethics_and/
https://www.reddit.com/r/askphilosophy/comments/3fu710/how_is_it_possible_for_the_ethical_theory_to/
https://www.reddit.com/r/askphilosophy/comments/356g4r/can_an_ethical_noncognitivist_accept_any/
Okay, so maybe it’s not unique to anti-realism. But that only strengthens my claim that there’s nothing irrational about having different values.
You keep trying to have it both ways. You say “anti-realists can’t claim that certain moral systems are true while others are false.” But then you substitute other words that suggest either empirical or normative claims:
“right”
“justified”
“easy to attack”
“doesn’t make sense”
“proper”
“rational”
“faulty reasoning”
“bad”
Even “intuitive” is subjective. Many cultures have “intuitive” values that we’d find reprehensible. (Getting back to the original claim, many people intuitively value humans much more than other animals.)
Debating value differences can be worthwhile, but I object to the EA attitude of acting like people with different values are “irrational” or “illogical”. It’s “unjustified”, as you’d say, and bad for outreach, especially when the values are controversial.
Again, anti-realists can make normative claims just like anyone else. The difference is in how these claims are handled and interpreted. Anti-realists just think that truth and falsity are the wrong sort of thing to be looking for when it comes to normative claims.
(And it goes without saying that anyone can make empirical claims.)
Debating value differences can be worthwhile, but I object to the EA attitude of acting like people with different values are “irrational” or “illogical”. It’s “unjustified”, as you’d say
No, I think there are plenty of beliefs and values that we are justified in calling irrational or illogical. Specifically, there are beliefs and values whose holders have poor reasons for holding them, there are beliefs and values that are harmful to society, and there are a great many that fall into both groups.
and bad for outreach, especially when the values are controversial.
Maybe. Or maybe it’s important to prevent these ideas from gaining traction. Maybe having a clearly-defined out-group is helpful for the solidarity and strength of the in-group.