(I’ll reply to your points in the opposite order of how you made them.)
I’d love to see a more in-depth development and defense of this idea. [...] Just off the top of my head, it isn’t clear to me why we should give greater normative authority to a perspective that isn’t actually guided by the correct ambitious morality.
Instead of conceptualizing contractualism/cooperation-morality and consequentialism/care-morality as “climbing the same mountain from different sides,” I view them as separate perspectives. (I agree the “it’s the same mountain!” view has some appeal, so I acknowledge that I have to say more on why I see them as separate.)
It boils down to my belief that ambitious morality is under-defined. If I thought it were well-specified, I’d see things the same way you do.
Say that two philosophically sophisticated reasoners endorse different specifications of ambitious morality. If minimal morality were a low-demanding version of ambitious morality, they would now also hold two different versions of minimal morality. This would contradict the contractualist intent behind minimal morality – that it be fair to everyone.
In my framework, minimal morality is the greatest common denominator across all attempts to specify “doing the most moral/altruistic thing.”
You say:
(And attempts to establish a purely neutral, axiology-independent answer, like public reason liberalism in political philosophy, are notoriously question-begging.)
Maybe my view on this is a bit naive, but I feel like the cluster in concept space around “don’t be a jerk” is quite recognizable (even though it’s fuzzy).
Also, making it a low-demanding morality makes consensus-finding a lot easier. (Minimal morality is easier to agree on precisely because it’s unambitious.)
If there are low-effort, low-cost ways to make the world vastly better, for example, I’d think that we could reasonably take that to be a minimal requirement of morality and not just an optional extra for the morally ambitious. (What instead sets the ambitious apart, I would think, is their willingness to put in more than the morally required level of effort or sacrifice.)
I actually agree with this. See endnote 28 (context: you’re someone with an anti-natalist ambitious morality, and you can press a button to bring a paradise-like population into existence in which one inhabitant will suffer a pinprick at some point):
Some existing people would (presumably) greatly prefer the paradise-population to come into existence, which seems a good enough reason for minimal morality to ask us to press that button. (Minimal morality is mostly about avoiding causing harm, but there’s no principled reason never to include an obligation to benefit. The categorical action-omission distinction of libertarianism seems too extreme! If all we had to do to further others’ goals were to press a button and accept a pinprick of disvalue according to our ambitious morality, we’d be jerks not to press that button.)
On your second point:
Is preferability “non-existent”?
This rephrasing doesn’t change things for me. I’m mainly thrown off by both “goodness” and “preferability” appearing to be bedrock concepts. (I’m not sure “bedrock concepts are non-existent” is the best way to put it. I just don’t have a place for them in my ontology.)
What I’d be on board with is a moral naturalist account of “preferability” (or even “goodness”), on which something is preferable if philosophically sophisticated reasoners interested in figuring out morality come to agree on some account of it. (There are some objections to this sort of account, where goodness is tightly linked to expert convergence. First, who counts as an expert seems under-defined. Second, what distinguishes “experts converge because of features of the moral reality” from “experts converge because they happen to all share the same subjective views”? Third, what reasoners consider appealing may change over time, so expert consensus in the 18th century may look different from expert consensus today or in a hundred years. These objections explain why moral non-naturalists may not be happy with this account. Still, I actually think moral naturalist moral realism is intelligible and a useful concept to have – i.e., I think there are some decent answers we can give to these objections so that the account makes sense. I discuss this some more in endnote 8 of a different post.) However, while this naturalist “preferability” concept has a well-specified intension in this context, its extension could be empty. (In fact, I have argued in previous posts that we can somewhat confidently conclude that its extension is empty. This view informs my framework here.)
(I plan to reply to your thoughts on the promise-making analogy later, in a separate comment.)
Thanks for the comments!