Strong disagree. I am not closed to being persuaded on this, but I haven’t found your arguments convincing yet.
Even before going into details, though, I’d like to start with the end. I see that you find it intuitively very hard to reject the stance-independent wrongness of torture. If it boils down to intuitions, I find it as hard to accept that morality could be anything other than a human invention that is useful for some instrumental needs, and nothing more.
I am only starting to explore the philosophical grounds for my intuitions, but at the moment I think a fair summary goes something like this:
Moral Anti-Realism: moral statements do not express stance-independent truths. There is no objective moral reality analogous to mathematical or physical facts.
Contractarian Ethics: moral obligations are agreements between rational agents. Ethics emerges from social contracts (negotiated, context-sensitive rules for mutual benefit) not from metaphysical truths.
Subjective Preference: Moral norms are built from individual preferences, desires, and aversions filtered through the pragmatic need to live together peacefully and negotiate conflicts. Some preferences (e.g. for not being tortured) are near-universal, but still not “objective.”
Rationality is procedural and instrumental: it is about coherently pursuing one’s preferences and goals, given the available information, constraints and beliefs.
Skepticism about intuitions: moral intuitions are evolved (biologically and culturally) emotional heuristics, which we have also internalized and been policed and indoctrinated into since childhood.
Nitpicking some of the stuff you talk about:
But lots of moral statements just really don’t seem like any of these. The wrongness of slavery, the holocaust, baby torture, stabbing people in the eye—it seems like all these things really are wrong and this fact doesn’t depend on what people think about it.
‘Seems’ is a verb you use a lot throughout this section. Lots of things seem, but we’ve learned not to trust intuitions. The sun seems to move and to rise in the East. With empirical questions, we can at least make observations and measurements, develop theories, and put them to the test. We don’t seem to have anything similar for ethics. A plausible explanation for why the things you list seem morally true to us is the same as the explanation for why, from the top of a skyscraper in a city, the streets below all seem to radiate outward from your position. We are Westerners, part of a culture with specific values that has, through historical accident, been tremendously successful materially, economically, and politically. It is easy to imagine that we are, if not at a pinnacle, at least ‘on the right road of history’, and that everyone in the present will have to converge on and deepen our project. I find it much more likely that 500 years from now our successors (probably non-WEIRD people, perhaps AI) will look with the same contempt on our moral fantasies as we do on the cults of the Roman and Babylonian gods. You’re assuming as obvious a narrative of linear moral progress that I think is very much open to dispute.
If I have a reason to prevent my own suffering, it seems that suffering is bad, which gives me a moral reason to prevent it.
Suffering is bad for me. It seems plausible to assume it is also bad for others, which means I can use this piece of information as part of the bargaining toolkit for game-theoretically negotiating with others: I seek the satisfaction of my preferences with the minimal sacrifice I can get away with, while maximizing the overall result (but only because the latter ultimately yields more satisfaction for me than I would get in the absence of agreements and contracts).
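To make this concrete, here is a toy sketch of my own (not anything from your post): in an iterated prisoner’s dilemma, purely self-interested agents can still find that honoring a “contract” (cooperating) beats the state of no agreement. The strategy names and payoff numbers are the standard textbook ones, chosen for illustration.

```python
# Toy model: self-interest alone can make cooperation (a "contract") pay off.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Return the two strategies' total payoffs over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(_):
    return "D"

print(play(tit_for_tat, tit_for_tat))      # mutual "contract": (300, 300)
print(play(always_defect, always_defect))  # no agreement: (100, 100)
```

No appeal to stance-independent goodness is needed anywhere in this model; cooperation emerges purely from each agent pursuing its own payoff.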
But this means that moral anti-realists must think that you can never have a reason to care about something independent of what you actually do care about. This is crazy, as shown by the following cases
I fail to see where you’re going with these contrived examples. What people desire is (I’d say always, but let’s caveat it a bit) what gives them pleasure; it is not plausible to consider cases where this fails. But even if it weren’t so, I don’t see the irrationality even in these examples. You’re assuming a very specific, value-laden view of rationality, one that says people are “irrational” if they pursue ends you see as harmful, malformed, or futile. But I imagine anti-realists view rationality as I stated above: as consistency between means and ends. If someone has strange or harmful goals, that may be sad or tragic to you, but it’s not irrational on their terms. You’re begging the question by smuggling in your own evaluative framework as if it were universal.
But just as there are visual appearances, there are intellectual appearances. Just as it appears to me that there’s a table in front of me, it appears to me that it’s wrong to torture babies. Just as I should think there’s a table absent a good reason to doubt it, I should think it’s wrong to torture babies. In fact, I should be more confident in the wrongness of torturing babies, because that seems less likely to be the result of error. It seems more likely I’m hallucinating a table than that I’m wrong about the wrongness of baby torture.
This analogy fails because it treats moral intuition like sensory perception without acknowledging the critical difference: empirical perceptions are testable, correctable, and embedded in a shared external reality. I might trust that I see a table, but I can also measure it, predict how it behaves, and let others confirm it. Moral intuitions don’t offer that. They’re not observable facts but untestable gut reactions. Saying “I just see that baby torture is wrong” is not evidence: it’s a psychological datum, not a method of discovery. You’re proposing a methodology where feeling intensely about something counts as knowing it, even in the absence of any testing, mechanism, or independent verification. That’s not realism; it’s intuitionism dressed as epistemology.
We all begin inquiry from things that “seem right”, but in empirical and mathematical domains, we don’t stop there. We test, predict, measure, or prove. That’s the key difference: perception and intuition may guide us initially, but scientific realism and mathematical Platonism justify beliefs by their explanatory power, coherence, and predictive success. In contrast, moral realism lacks any comparable mechanism. You can’t test a moral intuition the way you test a physical hypothesis or formalize a logical inference. There’s no experiment, model, or predictive structure that tells us whether “baby torture is wrong” is a metaphysical fact or just a deeply shared psychological aversion. You’re claiming parity where there’s a methodological gap.
As for the claim that critics of intuition rely on intuitions too: there’s a difference between relying on formal coherence (e.g., basic logical tautologies) and on moral gut feelings. The probability example confuses things, as Bayes’ theorem and the conjunction rule aren’t known by intuition but by mathematical derivation, and our confidence in them comes from their internal consistency and predictive accuracy, not how they “feel.”
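To illustrate what I mean by derivation rather than feeling, here is a small sketch of my own: the conjunction rule P(A and B) ≤ P(A) falls out of simply counting outcomes in any concrete probability model, with no intuition consulted anywhere. The two-coin sample space below is just a convenient example.

```python
# The conjunction rule verified by brute enumeration, not by gut feeling.
from itertools import product

# Sample space: two fair coin flips, uniform measure.
outcomes = list(product("HT", repeat=2))

def prob(event):
    """Probability of an event (a set of outcomes) by counting."""
    return sum(1 for o in outcomes if o in event) / len(outcomes)

A = {o for o in outcomes if o[0] == "H"}  # first flip is heads
B = {o for o in outcomes if o[1] == "H"}  # second flip is heads

assert prob(A & B) <= prob(A)  # conjunction rule holds by construction
print(prob(A), prob(A & B))    # 0.5 0.25
```

The famous “Linda problem” shows that untrained intuition routinely violates this rule; the counting above is exactly the kind of external check that moral intuitions lack.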
I’d also like to go into the last two big topics you propose, i.e., evolutionary debunking arguments and physicalism, but this post is already too long, and probably not conducive to a conversation.