Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree. Some strict definitions of utilitarianism would require one to value animal and human suffering equally, discounted by some metric of consciousness (though I actually roughly agree with Brian Tomasik that calling something conscious is a value judgment, not an empirical claim). But many EAs aren’t strict utilitarians.
EAs can have strong opinions about how the message of EA should be presented. For example, I think EA should discourage valuing the life of an American 1000x that of a foreigner, or valuing animal suffering at 0. But nitpicking over subjective values seems counterproductive.
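For concreteness, here is a minimal, purely hypothetical sketch of how those subjective weights drive a simple prioritization comparison; every number, weight, and charity below is invented for illustration and is not taken from anyone in this exchange:

```python
# Hypothetical illustration only: none of these figures come from the discussion above.
# It shows how the subjective weight placed on animal suffering changes which
# intervention a simple comparison favors.

def weighted_suffering(units, moral_weight):
    """Scale raw units of suffering averted by a subjective moral weight (a value judgment)."""
    return units * moral_weight

# Invented figures: suffering averted per dollar by two hypothetical interventions.
human_units_per_dollar = 1.0      # e.g. a global-health charity
chicken_units_per_dollar = 200.0  # e.g. an animal-welfare charity

for label, chicken_weight in [("equal weight", 1.0),
                              ("consciousness-discounted", 0.01),
                              ("animal suffering valued at 0", 0.0)]:
    human_score = weighted_suffering(human_units_per_dollar, 1.0)
    chicken_score = weighted_suffering(chicken_units_per_dollar, chicken_weight)
    better = "animal charity" if chicken_score > human_score else "human charity"
    print(f"{label}: human={human_score}, chicken={chicken_score} -> favors {better}")
```

The point is only that the weight itself is the value judgment; the arithmetic downstream of it is trivial.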
Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree.
Metaethical claims don’t have much bearing on normative issues. We can rationally disagree as moral realists to the extent that we have reasons for or against various moral principles. Anti-realists can disagree to the same extent based on their own reasons for or against moral principles, but it’s not obvious to me that they have any basis for rationally holding a wider range of moral principles than is available to the moral realist. At the very least, that’s not how prominent anti-realist moral philosophers seem to think.
The realism vs anti-realism debate is about how to construe normative claims, not about which normative claims are justified or not. Taking the side of anti-realism doesn’t provide a ticket to select values arbitrarily or based on personal appeal.
There’s no basis in anti-realism for saying that a moral system is objectively unjustified, no matter how reprehensible. “Justified” and similar words are value judgments, and anti-realism doesn’t accept the existence of objective value. Like, when you say “doesn’t provide a ticket”, that implies requiring permission. Permission from whom or what?
Anti-realism is such an uncomfortable philosophy that people are often unwilling or unable to accept its full implications.
Moral particularism isn’t explicitly anti-realist, but very compatible: https://en.wikipedia.org/wiki/Moral_particularism
There’s no basis in anti-realism for saying that a moral system is objectively unjustified, no matter how reprehensible.
There is certainly enough basis in anti-realism for saying that moral systems are unjustified. I’m not sure what you mean by “objectively” unjustified—certainly, anti-realists can’t claim that certain moral systems are true while others are false. But that doesn’t imply that ethics are arbitrary. The right moral system, according to anti-realists, could be one that fits our properly construed intuitions; one that is supported by empirical evidence; one that is grounded in basic tenets of rationality; or any other metric—just like moral realists say.
Certainly it’s possible for the anti-realist to claim “it is morally right to torture babies, because I feel like it,” just like it’s also possible for the realist to claim the same thing. And both of them will (probably) be making claims that don’t make a whole lot of sense and are easy to attack.
And certainly there are plenty of anti-realists who make claims along the lines of “I believe X morality because I am selfish and it’s the values I find appealing,” or something of the sort; but that’s simply bad philosophy which lacks justification. Actual anti-realist philosophers don’t think that way.
Like, when you say “doesn’t provide a ticket”, that implies requiring permission. Permission from whom or what?
It means that the anti-realist’s claim is missing some key element or intention which they are trying to include in their moral claims. They probably intend that their moral system is compatible with human intuitions, or that it is grounded in rationality, or whatever it is that they think provides the basis for morality (just like moral realists have similar metrics). And when the anti-realist makes such a moral claim, we point out: hey, your moral claims don’t make sense, what reasons do you have to follow them?
Obviously the anti-realist could say that they don’t care whether their morality is rational, or justified, or intuitive, or whatever it is they take as the basis for morality. But the moral realist can do a similar thing: I could say that I simply don’t care whether my moral principles are correct or not. In both cases, there’s no way to literally force them to change their beliefs, but you have exposed their faulty reasoning.
Here are some Reddit threads that might explain it better than I can (I agree with some of the commenters that ethics and metaethics are not completely separate; however, I still don’t think that ethics under anti-realism is as different as you say it is):
https://www.reddit.com/r/askphilosophy/comments/3qh90s/whats_the_relationship_between_metaethics_and/
https://www.reddit.com/r/askphilosophy/comments/3fu710/how_is_it_possible_for_the_ethical_theory_to/
https://www.reddit.com/r/askphilosophy/comments/356g4r/can_an_ethical_noncognitivist_accept_any/
Okay, so maybe it’s not unique to anti-realism. But that only strengthens my claim that there’s nothing irrational about having different values.
You keep trying to have it both ways. You say “anti-realists can’t claim that certain moral systems are true while others are false.” But then you substitute other words that suggest either empirical or normative claims:
“right”
“justified”
“easy to attack”
“doesn’t make sense”
“proper”
“rational”
“faulty reasoning”
“bad”
Even “intuitive” is subjective. Many cultures have “intuitive” values that we’d find reprehensible. (Getting back to the original claim, many people intuitively value humans much more than other animals.)
Debating value differences can be worthwhile, but I object to the EA attitude of acting like people with different values are “irrational” or “illogical”. It’s “unjustified”, as you’d say, and bad for outreach, especially when the values are controversial.
Again, anti-realists can make normative claims just like anyone else. The difference is in how these claims are handled and interpreted. Anti-realists just think that truth and falsity are the wrong sort of thing to be looking for when it comes to normative claims.
(And it goes without saying that anyone can make empirical claims.)
Debating value differences can be worthwhile, but I object to the EA attitude of acting like people with different values are “irrational” or “illogical”. It’s “unjustified”, as you’d say
No, I think there are plenty of beliefs and values that we are justified in calling irrational or illogical. Specifically, there are beliefs and values whose holders have poor reasons for holding them, there are beliefs and values which are harmful to society, and a great many fall into both groups.
and bad for outreach, especially when the values are controversial.
Maybe. Or maybe it’s important to prevent these ideas from gaining traction. Maybe having a clearly-defined out-group is helpful for the solidarity and strength of the in-group.