I don’t think this large-scale cooperation, or the way society or groups function, is morality. It is linked to morality, but it is fundamentally something else. A society can “function” well while having part of it suffer tremendously for the benefit of another group. There is nothing objective about longing for a world with less suffering; it is basically in another realm, not the one of math or the rational, though it is tied to rationality in some way.
That’s a great point — and I agree that morality isn’t reducible to mere societal functioning or large-scale cooperation. Some societies can be “stable” while being profoundly unjust or harmful to part of their population. But I think this highlights a deeper structure: morality isn’t binary — it’s developmental.
We can think of morality as existing at different levels:
Basic morality secures minimal cooperation and trust — typically grounded in narrow circles (family, tribe) and short time horizons (days, years).
High morality expands both the temporal horizon and the moral circle — incorporating distant others, future generations, and even nonhuman beings.
This connects to an idea I’ve been exploring: morality as a coordinate in a 2D space defined by Time (how far we care into the future) and Scope (how wide our moral concern extends). Most people start somewhere in the lower-left of this space, and ethical growth is about moving upward and outward.
In that view, societies may function on basic morality, but flourishing — for individuals and civilization — requires higher-level ethics. And while morality might not be “objective” like math, it can still be intersubjectively structured, and in that sense, stable, teachable, and improvable.
Exactly. What morality is doing and scaffolding is something that is pragmatically accepted as good, independent of any intrinsic goodness, i.e., individual and/or group flourishing. It is plausible that if we somehow discovered that furthering such flourishing required us to completely violate some moral framework (even a hypothetical ‘true’ one), it would be okay to do it. Large-scale cooperation is not an end in itself (at least not for me): it is contingent on creating a framework that maximizes my individual well-being, with perhaps some sacrifices accepted as long as I’m still left overall better off than I would be without the large-scale cooperation and the agreed-upon norms.