The parent comment here explains ambitious morality vs minimal morality.
My post also makes some other points, such as giving new inspiration to person-affecting views.
For a summary of that, see here.
So I think this would be a better summary of the article:
```
The text discusses several key points:
1. Many people in the effective altruism (EA) community follow different types of utilitarianism as their personal “ambitious moralities” for making the world better.

2. The author distinguishes between “utilitarianism as a personal goal” and utilitarianism as the single true morality everyone must adopt.

3. “Minimal morality” is about respecting others’ life goals, separate from one’s “ambitious morality.”

4. The choice between ambitious moralities is not a negligible matter, since they can give quite different recommendations for how to act.

5. However, people should approach their personal moral views differently if they see them as subjective rather than objective.

6. The author uses an analogy with political parties (Democrats vs. Republicans) to illustrate respecting others’ moral views while still advocating for one’s own.

7. “Minimal morality” is analogous to respecting the overarching democratic process, despite having different ambitious political goals.

In summary, the text argues for a pluralistic view where people can have different utilitarian “ambitious moralities” as personal goals, while still respecting a shared “minimal morality” of not imposing their views on others or acting in ways harmful to others’ moral pursuits.
```
Please let me know if this is condensed enough while still covering all the relevant parts of the article.
That’s good.
And I argue (or link to arguments in previous posts) that the latter interpretation (utilitarianism as the single true morality everyone must adopt) isn’t defensible. Utilitarianism as the true morality would have to be based on an objective axiology, but there’s likely no such thing (only subjective axiologies).
It may also be worth highlighting that the post contains an argument about how we can put person-affecting views on more solid theoretical grounding. (This goes more into the weeds, but it’s a topic that comes up a lot in EA discourse.) Here’s a summary of that argument:
The common arguments against person-affecting views seem to be based on the assumption, “we want an overarching framework that tells us what’s best for both existing/sure-to-exist and possible people at the same time.”
However, since (so I argue) there’s no objective axiology, it’s worth asking whether this is too steep a requirement.
Person-affecting views seem well-grounded if we view them as a deliberate choice between two separate perspectives: the non-person-affecting answer is “adopt a subjective axiology that tells us what’s best for newly created people,” and the person-affecting answer is “leave our axiology under-defined.”
Leaving one’s subjective axiology under-defined means that many actions we can take that affect new people will be equally “permissible.”
Still, this doesn’t mean “anything goes,” since we’ll still have some guidance from minimal morality: in the context of creating new people/beings, minimal morality implies that we should (unless it’s unreasonably demanding) avoid actions that are objectionable according to all plausible subjective axiologies.
Concretely, this means that it’s permissible to do a range of things even if they are neither what’s best on anti-natalist grounds nor what’s best on totalist grounds, as long as we don’t do something that’s bad on both of these grounds.