2. The author distinguishes between “utilitarianism as a personal goal” and utilitarianism as the single true morality everyone must adopt.
And I argue (or link to arguments in previous posts) that the latter interpretation isn’t defensible. Utilitarianism as the true morality would have to be based on an objective axiology, but there’s likely no such thing (only subjective axiologies).
Maybe also worth highlighting is that the post contains an argument about how we can put person-affecting views on more solid theoretical grounding. (This goes more into the weeds, but it’s a topic that comes up a lot in EA discourse.) Here’s a summary of that argument:
The common arguments against person-affecting views seem to be based on the assumption, “we want an overarching framework that tells us what’s best for both existing/sure-to-exist and possible people at the same time.”
However, since (so I argue) there’s no objective axiology, it’s worth asking whether this is too steep a requirement.
Person-affecting views seem well-grounded if we view them as a deliberate choice between two separate perspectives, where the non-person-affecting answer is “adopt a subjective axiology that tells us what’s best for newly created people,” and the person-affecting answer is “leave our axiology under-defined.”
Leaving one’s subjective axiology under-defined means that many actions we can take that affect new people will be equally “permissible.”
Still, this doesn’t mean “anything goes,” since we’ll still have some guidance from minimal morality: In the context of creating new people/beings, minimal morality implies that we should (unless it’s unreasonably demanding) not take actions that are objectionable according to all plausible subjective axiologies.
Concretely, this means that it’s permissible to do a range of things even if they are neither what’s best on anti-natalist grounds, nor what’s best on totalist grounds, as long as we don’t do something that’s bad on both these grounds.