Thanks for opening this important debate! I’d like to offer a different perspective that might bridge some gaps between realism and anti-realism.
I tend to view morality as something that evolved because it has objectively useful functions for social systems—primarily maintaining group cohesion, trust, and long-term stability. In this view, moral judgments aren’t just arbitrary or subjective preferences, but neither are they metaphysical truths that exist independently of human experience. Rather, they’re deeply tied to the objective requirements for sustainable group existence.
Some moral norms seem universally valid, like prohibitions against harming children. Why? Because any society that systematically harms its offspring simply can’t sustain itself. Other norms, like fairness and autonomy, emerge because they’re objectively beneficial in complex groups: fairness keeps group interactions stable and predictable, and autonomy ensures individual concerns are incorporated, helping the system adapt and flourish.
So perhaps morality can be seen as having both subjective and objective dimensions:
Objective dimension: Morality serves objectively measurable functions that are universally beneficial across human groups (like trust-building, protecting vulnerable members, and promoting long-term cooperation).
Subjective dimension: How exactly these functions translate into specific norms can differ across cultures or contexts (though within limits).
In my recent post “Beyond Short-Termism: How δ and w Can Realign AI with Our Values”, I explored this idea through the lenses of two key moral parameters—Time horizon (δ) and Moral scope (w)—showing how expanding these parameters corresponds to what we typically view as higher morality.
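To make that a bit more concrete, here is a minimal toy sketch, in code, of how δ and w could enter a single welfare score. The function name, the example plans, and the exact aggregation are illustrative assumptions I'm making here, not the precise formalism of that post: δ discounts how much later time steps count, and w weights how much welfare outside the narrow in-group counts.

```python
# Toy sketch (illustrative assumptions, not the exact formalism of the post):
# delta discounts future time steps, w weights welfare outside the in-group.

def moral_value(plan, delta, w):
    """Score a plan: each step is (in-group welfare, everyone-else welfare)."""
    return sum((delta ** t) * (own + w * others)
               for t, (own, others) in enumerate(plan))

# Two hypothetical plans available to the same agent.
exploit = [(10, -5), (10, -5), (-20, -20)]  # quick in-group gains, later collapse
sustain = [(2, 4), (3, 6), (4, 8)]          # slower gains, broadly shared

for delta, w in [(0.3, 0.0), (0.95, 1.0)]:  # short/narrow vs. long/wide parameters
    print(delta, w,
          round(moral_value(exploit, delta, w), 2),  # 11.2, then -26.35
          round(moral_value(sustain, delta, w), 2))  # 3.26, then 25.38
```

The toy numbers only show that widening δ and w flips which plan looks best: the short-horizon, narrow-scope weighting favors the exploitative plan, while the long-horizon, wide-scope weighting favors the sustainable, broadly beneficial one, which is roughly what I mean by higher morality.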
I’d be curious to hear your thoughts on this functional view of morality: do you think it might bridge the realism–anti-realism divide, or does it fail to capture some key aspect of the debate?
I tend to think the word “objective” doesn’t fit morality from a philosophical standpoint. “Objective” truths are claims where we can decide who is right by checking predictions, and each confirmed prediction is evidence for the validity of the claim. If I say the earth is round, we can check this claim by talking to experts, flying to space and looking at the earth, and so on. All of these are predictions of subjective experiences we will have.
So an “objective” claim means I am guessing something about the future world: that it will look one way and not another. Morality is in a totally different domain. I would like the future world to contain less suffering rather than more, but this is a longing, not a prediction that can be refuted. So with morality there is some sense of logic and of being systematic, but at the core it is not an objective question, because it cannot be decided by predictions.
That’s a thoughtful perspective — and I agree that morality isn’t something we can measure with a ruler or confirm through telescopic observations. It’s not that kind of “objective.”
But perhaps it’s more like language or mathematics: not hard-coded into the universe, but once established, it provides a shared structure for trust, fairness, and cooperation. Even if our moral goals stem from desires rather than predictions, we still need common ground to live and act together meaningfully.
That naturally raises the deeper question: what values can or should structure that common ground? I’m planning to explore this further in an upcoming post.
I wouldn’t put mathematics in the same bag as morality. As per the indispensability argument, one can make a fair case (which one can’t make for ethics) that strong, indirect evidence for the truth of mathematics (with some of it arguably ‘hard-coded into the universe’) is that all the hard sciences rely on it to explain stuff. Take the math away and there is no science. Take moral realism away and… nothing happens, really?
I agree that ethics does provide a shared structure for trust, fairness, and cooperation, but then it makes much more sense to employ social-contractual language and speak about game-theoretic equilibria. Of course, the problem with this is that it doesn’t satisfy the urge some people have to force their deeply felt but historically and culturally contingent values into some universal, unavoidable mandate. And we can all feel this when someone, as BB does, brings up concrete cases that really challenge the values we’ve internalized.
I agree morality and math differ in how we justify them, yet both are indispensable scaffolds. Remove calculus and physics stops working; remove a core layer of shared moral norms (honesty, non-violence) and large-scale cooperation stalls. Game-theoretic language is fine, but the equilibrium still acts as an objective constraint: groups ignoring it disappear. That functional obligatoriness is what I had in mind.
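To illustrate the kind of constraint I mean, here is a throwaway toy example (my own, using only the standard prisoner’s-dilemma payoffs): over repeated interactions, a pair that follows a simple reciprocity norm generates three times the surplus of a pair that ignores it.

```python
# Standard prisoner's dilemma payoffs: (row player's payoff, column player's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def total_surplus(strategy_a, strategy_b, rounds=100):
    """Combined payoff two players generate in a repeated prisoner's dilemma."""
    score = 0
    last_a = last_b = "C"  # reciprocators start by cooperating
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score += pa + pb
        last_a, last_b = move_a, move_b
    return score

tit_for_tat = lambda their_last: their_last  # follow the reciprocity norm
always_defect = lambda their_last: "D"       # ignore the norm

print(total_surplus(tit_for_tat, tit_for_tat))      # 600
print(total_surplus(always_defect, always_defect))  # 200
```

Groups whose members systematically forgo that surplus are at a measurable disadvantage relative to groups that secure it; that gap is the functional obligatoriness I have in mind.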
I don’t think this functioning of large-scale cooperation, or of societies or groups, is morality. It is linked to morality, but it is fundamentally something else. A society can “function” well while part of it suffers tremendously for the benefit of another group. There is nothing objective about longing for a world with less suffering; it is basically in another realm, not the realm of math or the rational, though it is tied to rationality in some way.
That’s a great point — and I agree that morality isn’t reducible to mere societal functioning or large-scale cooperation. Some societies can be “stable” while being profoundly unjust or harmful to part of their population. But I think this highlights a deeper structure: morality isn’t binary — it’s developmental.
We can think of morality as existing at different levels:
Basic morality secures minimal cooperation and trust — typically grounded in narrow circles (family, tribe) and short time horizons (days, years).
High morality expands both the temporal horizon and the moral circle — incorporating distant others, future generations, and even nonhuman beings.
This connects to an idea I’ve been exploring: morality as a coordinate in a 2D space defined by Time (how far we care into the future) and Scope (how wide our moral concern extends). Most people start somewhere in the lower-left of this space, and ethical growth is about moving upward and outward.
In that view, societies may function on basic morality, but flourishing — for individuals and civilization — requires higher-level ethics. And while morality might not be “objective” like math, it can still be intersubjectively structured, and in that sense, stable, teachable, and improvable.
Exactly. What morality is doing and scaffolding is something pragmatically accepted as good and external to any intrinsic goodness, i.e., individual and/or group flourishing. It is plausible that if we somehow discovered that furthering such flourishing required us to completely violate some moral framework (even a hypothetical ‘true’ one), it would be okay to do so. Large-scale cooperation is not an end in itself (at least not for me): it is contingent on creating a framework that maximizes my individual well-being, with perhaps some sacrifices accepted, as long as I’m still left better off overall under the agreed-upon norms than I would be without the large-scale cooperation.