I’m curious, what more specifically do you find it persuasive of? I generally feel that people do not easily want to bite the bullets that experiences have no independent positive value (only interdependent value), and that value doesn’t aggregate, and that outweighing “does not compute” in a physical balancing kind of sense. (I haven’t yet read the 2016 Oxford Handbook of Hypo-egoic Phenomena, but I expect that hypo-egoic systems, like many or most kinds of Buddhism, may be an angle from which it’d be easier to bite these bullets, though I don’t know much about the current intersection of Buddhism and EA.)
If I can try to rephrase your beliefs: economic rationality tells us that tradeoffs do in fact exist, and therefore rational agents must be able to make a comparison in every case. There has to be some amount of every value that you’d trade for another amount of every other value, otherwise you’ll end up paralyzed and decisionless.
You’re saying that, although we’d like to have this coherent total utility function, realistically it’s impossible to do so. We run into the theoretical problems you mention, and more fundamentally, some of our goals simply are not maximizing goals, and there is no rule that can accurately describe the relationship between those goals. Do we end up paralyzed and decisionless, with no principled way to trade off between the different goals? Yes, that’s unavoidable.
And one clarification: Would you say that this non-comparability is a feature more of human preferences, where we biologically have desires that aren’t integrated into a single utility function, or morality, where there are independent goals with independent moral worth?
Yes, we’ll unavoidably face quantitative resource splits between goals, none of which we can fully satisfy as long as we have even one infinitely hungry goal like “minimize suffering”, “maximize happiness”, or “maximize survival”. In practice, we can resolve conflicts between these goals by coming up with a common language to mediate trade between them, but how could they settle on an agreement if they were all independent and infinite goals? (My currently preferred solution is that happiness and survival are not such goals, compared to compassion.)
Alternatively, they could split from being a unified agent into being three agents, each independently unified, but they’d eventually run into conflicts again down the line (if they’re competing over control, space, energy, etc.). I’m interested in internal unity from the “minimize suffering” perspective, because violent competition from non-negotiating splitting causes suffering. In other words, I suppose my self-compassion wants my goals to play in harmony, and “self-compassion aligned with omnicompassion” is the unification that results in that harmony in the most robust way I can imagine.
More biological needs are more clearly in the domain of self-compassion. Omnicompassion is the theoretical attractor, the asymptotically approximated ideal, or the “gravity” that pulls self-compassion towards extended self-compassion: a process of gradually importing more and more needs of others as my own, which increases expected harmony so long as it’s done slowly enough to maintain the harmony within the initial small self. For people with a chaotic life situation, or who are working on a lot of unmet needs, this domain of self-compassion could occupy a lot of their attention for years while still being aligned with eventually extending self-compassion. This means that a seemingly non-helping person may be working on precisely the need areas that are long-term aligned with their role in minimizing suffering (or maximizing harmony, or however one likes to think of it).
(I may have sidestepped your question of morality and moral worth, because I prefer to think in terms of needs and motivating tensions, and to see if the theoretical implications of unified consequentialism could be derived from our needs instead of abstract shoulds, obligations, imperatives, or duties.)