I like this post; as you note, we’ve been thinking along very similar lines. But you reach different conclusions than I do—in particular, I disagree that “the ambitious morality of ‘do the most moral/altruistic thing’ is something like preference utilitarianism.” In other words, I think most of your arguments about minimal morality are still consistent with having an axiology.
I didn’t read your post very carefully, but I think the source of the disagreement is that you’re conflating objectivity/subjectivity with respect to the moral actor and objectivity/subjectivity with respect to the moral patient.
More specifically: let’s say that I’m a moral actor, and I have some axiology. I might agree that this axiology is not objective: it’s just my own idiosyncratic axiology. But it nevertheless might be non-subjective with respect to moral patients, in the sense that my axiology says that some experiences have value regardless of what the people having those experiences want. So I could be a hedonist despite thinking that hedonism isn’t the objectively-correct axiology.
This distinction also helps resolve the tension between “there’s an objective axiology” and “people are free to choose their own life goals”: the objective axiology of what’s good for a person might in part depend on what they want.
Having an axiology which says things like “my account of welfare is partly determined by hedonic experiences and partly by preferences and partly by how human-like the agent is” may seem unparsimonious, but I think that’s just what it means for humans to have complex values. And then, as you note, we can also follow minimal (cooperation) morality for people who are currently alive, and balance that with maximizing the welfare of people who don’t yet exist.
Thanks! Those points sound like they’re quite compatible with my framework.
tl;dr: When I said that “in fixed population contexts, the ambitious morality of ‘do the most moral/altruistic thing’ is something like preference utilitarianism,” that was a very simplified point for the summary. It would have been more accurate to the overall post if I had said something more like “In fixed population, fixed interests/goals contexts, any ambitious morality of […] would have a lot of practical overlap with something like preference utilitarianism.” Also, my post is indeed compatible with your having an axiology that differs from other people’s takes – a more accurate title for my post would be “Population ethics without an objective axiology.”
To reply in more depth:
But you reach different conclusions than I do—in particular, I disagree that “the ambitious morality of ‘do the most moral/altruistic thing’ is something like preference utilitarianism.”
The part you’re quoting is specifically about a fixed population context and the simplifying assumption that people there have “fixed” interests/goals. As I acknowledge in endnote 4: “Technically, interests/goals aren’t necessarily fixed in fixed population contexts either, since we can imagine people with under-defined goals or goals that don’t mind being changed in specific ways.” So, the main point about ambitious morality being something like preference utilitarianism in fixed population contexts is the claim that care-morality and cooperation-morality overlap for practical purposes in contexts where interests/goals are (completely) fixed. I discuss this in more depth in the section “Minimal morality vs. ambitious morality”:
Admittedly, there are specific contexts where minimal morality is like a low-demanding version of ambitious morality. Namely, contexts where “care-morality” and “cooperation-morality” have the most overlap. For instance, say we’re thinking about moral reasons towards a specific person with well-defined interests/goals and we’re equal to them in terms of “capability levels” (e.g., we cannot grant all their wishes with god-like power, so empowering them is the best way to advance their goals). In that scenario, “care-morality” and “cooperation-morality” arguably fall together. Since it seems reasonable to assume that the other person knows what’s best for them, promoting their interests/goals from a cooperative standpoint should amount to the same thing as helping them from a care/altruism standpoint.[14]
Endnote 14:
One caveat here is that people may have self-sacrificing goals. For instance, say John is an effective altruist who’s intent on spending all his efforts on making the world a better place. Here, it seems like caring about “John the person” comes apart from caring about “John the aspiring utilitarian robot.” Still, on a broad enough conception of “interests/goals,” it would always be better if John was doing well himself while accomplishing his altruistic goals. I often talk about “interests/goals” instead of just “goals” to highlight this difference. (In my vocabulary, “interests” aren’t always rationally endorsed, but they are essential to someone’s flourishing.)
[...]
Where people have well-specified interests/goals, it would be a preposterous conception of care-morality to stick someone into an experience machine against their will or kill them against their will to protect them from future suffering. So, whenever a single-minded specification of care-morality (e.g., “hedonistic utilitarianism” or “negative utilitarianism”) contradicts someone’s well-specified interests/goals, that type of care-morality seems misapplied and out of place.
As you can see in endnote 14, I consider “preference utilitarianism” itself under-defined and have sympathies for a view that doesn’t just listen to the rational, planning part of people’s brains (e.g., someone saying “I’m a hardcore effective altruist; I don’t rationally endorse caring intrinsically about my personal well-being”). I’d also consider that humans are biological creatures with “interests” – a system-1 “monkey brain” with its own needs, separate (or at least separable) from idealized self-identities that the rational, planning part of our brain may come up with.
So, if we also want to fulfill these interests/needs, that could be justification for a quasi-hedonistic view or for the type of mixed view that you advocate?
In other words, I think most of your arguments about minimal morality are still consistent with having an axiology.
They are! That’s how I meant my post. An earlier draft of my post was titled “Population ethics without an objective axiology” – I later shortened it to make the title catchier.
As I say in the summary:
Accordingly, people can think of “population ethics” in several different (equally defensible)[5] ways:
Subjectivist person-affecting views: [...]
Subjectivist totalism: [...]
Subjectivist anti-natalism: [...]
At least the second and third examples here, which are specifications of ambitious morality, can be described as having a (subjective) axiology!
I also mention the adjective “axiological” later in the post in the same context of “if we want to specify what’s happening here, we need a (subjective) axiology”:
Because different possible people/beings make different appeals,[17] ambitious morality focused on possible people/beings is under-defined – specifying it requires further (“axiological”) judgment calls.
You say further:
This distinction also helps resolve the tension between “there’s an objective axiology” and “people are free to choose their own life goals”: the objective axiology of what’s good for a person might in part depend on what they want. [...] And then, as you note, we can also follow minimal (cooperation) morality for people who are currently alive, and balance that with maximizing the welfare of people who don’t yet exist.
That describes really well how I intended it. The “place” for ambitious morality / axiology is wherever cooperation-morality leaves anything under-defined. Hedonism isn’t the only defensible axiology the way I see it, but I very much consider (subjective and cooperative) hedonism a viable option within my framework.
Makes sense, glad we’re on the same page!
a more accurate title for my post would be “Population ethics without an objective axiology.”
Perhaps consider changing it to that, then? Since I’m a subjectivist, I consider all axiologies subjective—and therefore “without axiology” is very different from “without objective axiology”.
(I feel like I would have understood that our arguments were consistent either if the title had been different, or if I’d read the post more carefully—but alas, neither condition held.)
I’d also consider that humans are biological creatures with “interests” – a system-1 “monkey brain” with its own needs, separate (or at least separable) from idealized self-identities that the rational, planning part of our brain may come up with. So, if we also want to fulfill these interests/needs, that could be justification for a quasi-hedonistic view or for the type of mixed view that you advocate?
I like this justification for hedonism. I suspect that a version of this is the only justification that will actually hold up in the long term, once we’ve more thoroughly internalized qualia anti-realism.