So is it basically saying that many people follow different types of utilitarianism (I’m assuming this means the “ambitious moralities”), but judging which one is better is quite negligible since all the types usually share important moral similarities (I’m assuming this is what “minimal morality” means)?
> So is it basically saying that many people follow different types of utilitarianism (I’m assuming this means the “ambitious moralities”)
Yes to this part. (“Many people” maybe not in the world at large, but especially in EA circles where people try to orient their lives around altruism.)
Also, I’m here speaking of “utilitarianism as a personal goal” rather than “utilitarianism as the single true morality that everyone has to adopt.”
This distinction is important. Usually, when people speak about utilitarianism, or when they write criticisms of utilitarianism, they assume that utilitarians believe that everyone ought to be a utilitarian, and that utilitarianism is the answer to all questions in morality. By contrast, “utilitarianism as a personal morality” is just saying “Personally, I want to devote my life to making the world a better place according to the axiology behind my utilitarianism, but it’s a separate question how I relate to other people who pursue different goals in their lives.”
And this is where minimal morality comes in: Minimal morality is answering that separate question with “I will respect other people’s life goals.”
So, minimal morality is a separate thing from ambitious morality. (I guess the naming is unfortunate here, since it sounds like ambitious morality is just “more on top of” minimal morality. Instead, I see them as separate things. The reason I named them the way I did is that minimal morality is relevant to everyone as a constraint on how not to go through life as a jerk, while ambitious morality is something only a handful of particularly morally motivated people are interested in. Of course, per Singer’s drowning child argument, maybe more people should be interested in ambitious morality than is currently the case.)
> but judging which one is better is quite negligible since all the types usually share important moral similarities (I’m assuming this is what “minimal morality” means)?
Not exactly.
“Judging which one is better” isn’t necessarily negligible, but it’s a personal choice, meaning there’s no uniquely compelling answer that will appeal to everyone.
You may ask “Why do people even endorse a well-specified axiology at all if they know it won’t be convincing to everyone? Why not just go with the ‘minimum core of morality’ that everyone endorses, even if this were to leave lots of things vague and under-defined?”
I’ve written a dialogue about this in a previous post:
Critic: Why would moral anti-realists bother to form well-specified moral views? If they know that their motivation to act morally points in an arbitrary direction, shouldn’t they remain indifferent about the more contested aspects of morality? It seems that it’s part of the meaning of “morality” that this sort of arbitrariness shouldn’t happen.
Me: Empirically, many anti-realists do bother to form well-specified moral views. We see many examples among effective altruists who self-identify as moral anti-realists. That seems to be what people’s motivation often does in these circumstances.
Critic: Point taken, but I’m saying maybe they shouldn’t? At the very least, I don’t understand why they do it.
Me: You said that it’s “part of the meaning of morality” that arbitrariness “shouldn’t happen.” That captures the way moral non-naturalists think of morality. But in the moral naturalism picture, it seems perfectly coherent to consider that morality might be under-defined (or “indefinable”). If there are several defensible ways to systematize a target concept like “altruism/doing good impartially,” you can be indifferent between all those ways or favor one of them. Both options seem possible.
Critic: I understand being indifferent in the light of indefinability. If the true morality is under-defined, so be it. That part seems clear. What I don’t understand is favoring one of the options. Can you explain to me the thinking of someone who self-identifies as a moral anti-realist yet has moral convictions in domains where they think that other philosophically sophisticated reasoners won’t come to share them?
Me: I suspect that your beliefs about morality are too primed by moral realist ways of thinking. If you internalized moral anti-realism more, your intuitions about how morality needs to function could change.
Consider the concept of “athletic fitness.” Suppose many people grew up with a deep-seated need to study it to become ideally athletically fit. At some point in their studies, they discover that there are multiple options for cashing out athletic fitness, e.g., marathon running vs. 100m sprints. They may feel drawn to one of those options, or they may be indifferent.
Likewise, imagine that you became interested in moral philosophy after reading some moral arguments, such as Singer’s drowning child argument in Famine, Affluence, and Morality. You developed the motivation to act morally as it became clear to you that, e.g., spending money on poverty reduction ranks “morally better” (in a sense that you care about) than spending money on a luxury watch. You continue to study morality. You become interested in contested subdomains of morality, like theories of well-being or population ethics. You experience some inner pressure to form opinions in those areas because when you think about various options and their implications, your mind goes, “Wow, these considerations matter.” As you learn more about metaethics and the option space for how to reason about morality, you begin to think that moral anti-realism is most likely true. In other words, you come to believe that there are likely different systematizations of “altruism/doing good impartially” that individual philosophically sophisticated reasoners will deem defensible. At this point, there are two options for how you might feel: either you’ll be undecided between theories, or you’ll find that a specific moral view deeply appeals to you.
In the story I just described, your motivation to act morally comes from things that are very “emotionally and epistemically close” to you, such as the features of Peter Singer’s drowning child argument. Your moral motivation doesn’t come from conceptual analysis about “morality” as an irreducibly normative concept. (Some people do think that way, but this isn’t the story here!) It also doesn’t come from wanting other philosophical reasoners to necessarily share your motivation. Because we’re discussing a naturalist picture of morality, morality tangibly connects to your motivations. You want to act morally not “because it’s moral,” but because it relates to concrete things like helping people, etc. Once you find yourself with a moral conviction about something tangible, you don’t care whether others would form it as well.
I mean, you would care if you thought others not sharing your particular conviction was evidence that you’re making a mistake. If moral realism were true, it would be evidence of that. However, if anti-realism is indeed correct, then it wouldn’t have to weaken your conviction.
Critic: Why do some people form convictions and not others?
Me: It no longer feels like a choice when you see the option space clearly. You either find yourself having strong opinions on what to value (or how to morally reason), or you don’t.
So, some people may feel too uncertain to choose right away, while others will be drawn to a particular personal/subjective answer to “What utilitarian axiology do I want to use as my target criterion for making the world a better place?”
Different types of utilitarianism can give quite opposing recommendations for how to act, so I wouldn’t say the differences between them are insignificant or that there’s no reason to pay attention to them.
However, I think people should relate to their personal moral views differently depending on whether they see those views as subjective/personal or as objective/absolutist.
For instance, let’s say I favor a tranquilist axiology that’s associated with negative utilitarianism. If I thought negative utilitarianism was the single correct moral theory that everyone would adopt if only they were smart and philosophically sophisticated enough, I might think it’s okay to destroy the world. However, since I believe that different morally-motivated people can legitimately come to quite different conclusions about how they want to do “the most moral/altruistic thing,” there’s a sense in which I only use my tranquilist convictions to “cast a vote” in favor of my desired future, but I wouldn’t unilaterally act on them in ways that are bad according to other people’s moralities.
This is a bit like the difference between Democrats and Republicans in the US. If Democrats thought “being a Democrat” was the right answer to everything and that Republicans were wrong in a deep sense, they might be tempted to poison the tea of their Republican neighbor on election day. However, the identities “Democrat” or “Republican” are not all that matters! In addition, people should have an identity of “It’s important to follow the overarching process of having a democracy.”
“It’s important to follow the overarching process of having a democracy” is here analogous to recognizing the importance of minimal morality.
So I think this would be a better summary of the article:

```
The text discusses several key points:

1. Many people in the effective altruism (EA) community follow different types of utilitarianism as their personal “ambitious moralities” for making the world better.
2. The author distinguishes between “utilitarianism as a personal goal” versus utilitarianism as the single true morality everyone must adopt.
3. “Minimal morality” is about respecting others’ life goals, separate from one’s “ambitious morality.”
4. Judging which ambitious morality is better is not necessarily negligible, since they can give quite different recommendations for how to act.
5. However, people should approach their personal moral views differently if they see them as subjective rather than objective.
6. The author uses an analogy with political parties (Democrats vs. Republicans) to illustrate respecting others’ moral views while still advocating for one’s own.
7. “Minimal morality” is analogous to respecting the overarching democratic process, despite having different ambitious political goals.

In summary, the text argues for a pluralistic view where people can have different utilitarian “ambitious moralities” as personal goals, while still respecting a shared “minimal morality” of not imposing their views on others or acting in ways harmful to others’ moral pursuits.
```
Please let me know if this is condensed enough while still covering all the relevant parts of the article.
That’s good.

> 2. The author distinguishes between “utilitarianism as a personal goal” versus utilitarianism as the single true morality everyone must adopt.
And I argue (or link to arguments in previous posts) that the latter interpretation isn’t defensible. Utilitarianism as the true morality would have to be based on an objective axiology, but there’s likely no such thing (only subjective axiologies).
It’s maybe also worth highlighting that the post contains an argument for how to put person-affecting views on more solid theoretical grounding. (This goes more into the weeds, but it’s a topic that comes up a lot in EA discourse.) Here’s a summary of that argument:
The common arguments against person-affecting views seem to rest on the assumption that we want an overarching framework that tells us what’s best for both existing/sure-to-exist people and possible people at the same time.
However, since (so I argue) there’s no objective axiology, it’s worth asking whether this is too steep a requirement.
Person-affecting views seem well-grounded if we view them as a deliberate choice between two separate perspectives, where the non-person-affecting answer is “adopt a subjective axiology that tells us what’s best for newly created people,” and the person-affecting answer is “leave our axiology under-defined.”
Leaving one’s subjective axiology under-defined means that many actions we can take that affect new people will be equally “permissible.”
Still, this doesn’t mean “anything goes,” since we’ll still have some guidance from minimal morality: in the context of creating new people/beings, minimal morality implies that we should (unless it’s unreasonably demanding) avoid actions that are objectionable according to all plausible subjective axiologies.
Concretely, this means it’s permissible to do a range of things even if they are neither what’s best on anti-natalist grounds nor what’s best on totalist grounds, as long as we don’t do something that’s bad on both of these grounds.
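To make the quantifier structure of this rule explicit, here is a minimal sketch in Python (purely an illustration, not anything from the post): an action is ruled out only if every plausible axiology under consideration objects to it, not if any single one does. The axiology names and verdicts are hypothetical, and the “unreasonably demanding” escape clause is left out.

```python
# Illustrative sketch of minimal morality's constraint on actions that
# affect newly created people/beings: an action is impermissible only if
# it is objectionable according to ALL plausible subjective axiologies
# under consideration. (The "unreasonably demanding" caveat is omitted.)

def permissible_under_minimal_morality(is_bad_according_to: dict[str, bool]) -> bool:
    """Return True unless every axiology considered deems the action bad."""
    return not all(is_bad_according_to.values())

# Hypothetical verdicts: an action totalism deems bad but anti-natalism
# doesn't object to is still permissible (it isn't bad on both grounds).
print(permissible_under_minimal_morality(
    {"anti-natalism": False, "totalism": True}))   # True: permissible

# An action that's bad according to both views is ruled out.
print(permissible_under_minimal_morality(
    {"anti-natalism": True, "totalism": True}))    # False: impermissible
```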