Multiple terminal values will always lead to irreconcilable conflicts.
(1) Do you hold suffering to be a terminal value, i.e., do you find its minimization motivating for its own sake?
(2) Do you also hold that there is some positive maximand whose value does not ultimately derive from its instrumental usefulness for minimizing suffering?
Anyone who answers yes to both (1) and (2) is not a unified entity playing one infinite game with one common currency (a single infinite optimand), but contains at least two infinite optimands: with limited resources, no single terminal value is ever fully satisfied (e.g., the probability of its being minimized or maximized throughout space & time can always be pushed further), so the two optimands keep competing for the same limited budget.
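To make that structural point concrete, here is a minimal toy sketch in Python (the returns curves and all numbers are invented, not claims about the world): once two terminal values share one limited budget and neither derives its value from the other, every split of the budget is defensible relative to some exchange rate, and the “optimal” split is entirely determined by an exchange rate that neither value supplies.

```python
# Toy sketch (invented numbers): two independent terminal optimands competing
# for one limited budget. Neither optimand supplies the exchange rate needed
# to pick "the" best split, so the conflict is structural, not empirical.

BUDGET = 100.0  # arbitrary resource units

def suffering_reduced(spend: float) -> float:
    """Made-up diminishing-returns curve for the suffering-minimization optimand."""
    return spend ** 0.5

def positivity_produced(spend: float) -> float:
    """Made-up diminishing-returns curve for the positive maximand."""
    return spend ** 0.5

# Every split is Pareto-optimal: shifting resources helps one optimand only by
# hurting the other. Ranking the splits requires an exchange rate w, and w has
# to be imposed from outside both terminal values.
for w in (0.1, 1.0, 10.0):  # w = how much one unit of positivity counts, in units of suffering reduction
    best = max(range(0, 101),
               key=lambda x: suffering_reduced(x) + w * positivity_produced(BUDGET - x))
    print(f"exchange rate w={w}: 'optimal' split spends {best} units on suffering reduction")
```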
I’ve been working on a compassion-centric motivation unification (as an improvement on existing formulations of negative utilitarianism) because I find it the most consistent & psychologically realistic theory that solves all these theoretical problems with no ultimately unacceptable implications. To arrive at practical answers from thought experiments, we want to account for all possibly relevant externalities of our scenarios. For example, the practical situations of {killing children} vs. {not having children} do not have “roughly the same outcome” in any scenario I can think of (due to all kinds of inescapable interdependencies). Similarly, compassion for all sentient beings does not necessarily imply attempting to end life on Earth (do you see Buddhists researching that?), because the longer technocivilization survives, the more exoplanets it might reach (and the more suffering it might be able to prevent there), or at least it might want to remain here to ensure that suffering won’t re-evolve on Earth.
To further explore our intuitions about terminal value monism vs. terminal value pluralism, can you order the following motivations by your certainty of holding them as absolutes?
(A) You want to minimize suffering moments.
(B) You want to minimize the risk of extinction (i.e., prolong the survival of life/consciousness).
(C) You want to maximize happy moments.
I sometimes imagine I’m a Dyson sphere of near-infinite resources splitting my budget between these goals. I find that B is often instrumental for A on a cosmic scale, but that C derives its budget entirely from the degree to which it helps A: equanimity, resilience, growth, learning, awe, gratitude, and other phenomena of positive psychology are wonderful tools with which compassionate actors can minimize suffering, but I would not copy/boost them beyond the degree to which doing so is the suffering-minimizing option. In other words, I could not tell my Exoplanet Rescue Mission Department why I wanted to spend their resources on creating more ecstatic meditators on Mars, because A is only interested in instruments for minimizing suffering. Besides, I wouldn’t undergo surgery without anaesthesia for any number of meditators on Mars, because they wouldn’t help my suffering; in a world where anaesthetics have the opportunity cost of a monastery on Mars, what would you do? Is “outweighing” between terminal values an actual physical computation taking place anywhere outside an ethicist’s head, or a fiction?
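For what it’s worth, here is how I picture that budget-splitting as a toy calculation; the project names and all numbers below are invented for illustration, not estimates. The only point is that in the monist accounting, B- and C-flavoured projects receive budget exactly to the extent that they score on the single axis of expected suffering averted.

```python
# Toy monist allocator (invented projects & numbers): every candidate project
# is scored on one axis only, expected suffering averted per resource unit.
# Survival-flavoured (B) and positivity-flavoured (C) projects get budget
# exactly to the extent that they are instrumental for A.

projects = {
    # name: made-up estimate of suffering averted per resource unit
    "anaesthesia supply chains (A)":   9.0,
    "asteroid deflection (B)":         4.0,  # instrumental: preserves future rescue capacity
    "resilience training (C)":         2.5,  # instrumental: resilient helpers avert suffering
    "ecstatic meditators on Mars (C)": 0.0,  # terminal positivity only; averts no suffering
}

budget = 10.0          # arbitrary units
cap_per_project = 5.0  # crude cap, just to make the greedy loop interesting

allocation = {}
for name, score in sorted(projects.items(), key=lambda kv: kv[1], reverse=True):
    if budget <= 0 or score <= 0:
        allocation[name] = 0.0  # nothing left, or no instrumental value for A
        continue
    spend = min(budget, cap_per_project)
    allocation[name] = spend
    budget -= spend

print(allocation)  # the C-only project gets 0.0: it never helps A, so A never funds it
```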
Multiple terminal values will always lead to irreconcilable conflicts.
This is not the case when there’s a well-defined procedure for resolving such conflicts. For example, you can map several terminal values onto a numerical “utility” scale.
This is not the case when there’s a well-defined procedure for resolving such conflicts.
Yes, but there isn’t. The theoretical case for terminal value monism is strong because monism doesn’t need such a procedure. All forms of terminal value pluralism run into the problem of incommensurability; monism doesn’t. With monism, we can evaluate x-risks and positive psychology states & traits, and compare them against suffering, by their instrumental effects on minimizing suffering (which may be empirically difficult, but is not theoretically impossible). What more do we want from a unified theory?
Do we want the slightest (epsilon) increase in x-risk to end up weighing more than any amount of suffering? If that is our pre-decided definition, are we going to give up on suffering as a terminal value and care about suffering only insofar as it increases x-risk?
For example, you can map several terminal values onto a numerical “utility” scale.
I can’t. Who can? (In a non-arbitrary way we could agree on from behind the veil of ignorance.) The scale analogy requires a common dimension along which the comparator can rank the two items (mass, in the case of a literal scale). What is the common dimension for comparing suffering & x-risk, or suffering & positive states? An arbitrary numerical assignment?
To arrive at an impartial theory that doesn’t sanctify our self-serving intuitions, we’d want to formulate our terminal value pluralism, “behind the veil”, by agreeing on independent numerical utility values for the different terminal-value-grounded currencies (a toy version is sketched in code after this list), such as:
(1) +epsilon probability of human extinction
(2) +epsilon probability that someone undergoes, e.g., a cluster headache episode (or equivalent)
(3) +epsilon probability that someone instantiates/deepens a positive psychology state [requiring a common dimension for “positivity”, unlike monism]
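As a sketch of why I doubt we could agree on those values (the weights and probabilities below are placeholders, not proposals): the pluralist verdict on one and the same option flips with the freely chosen currency weights, and nothing inside the theory fixes them; the monist ledger, by contrast, has a single dimension and only empirical uncertainty.

```python
# Sketch (placeholder numbers): one option that adds +epsilon extinction risk
# while preventing one expected cluster-headache episode. The pluralist verdict
# on this same option flips with the freely chosen weights; the theory itself
# does not fix them, and they are exactly what we'd have to agree on behind the veil.

EPS = 1e-9  # placeholder probability increment

def pluralist_disvalue(w_extinction: float, w_cluster_headache: float) -> float:
    """Disvalue added by the option under a given weight assignment (positive = bad)."""
    return w_extinction * EPS - w_cluster_headache * 1.0

for w_extinction in (1e6, 1e10, 1e12):
    verdict = pluralist_disvalue(w_extinction, w_cluster_headache=1.0)
    print(f"w_extinction={w_extinction:.0e}: option judged {'bad' if verdict > 0 else 'good'}")

# A monist ledger has no such free parameter: the same option is scored only by
# its net expected effect on suffering (including suffering downstream of the
# extinction-risk change), which is hard to estimate but is a single dimension.
```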
With monism, we don’t need to agree on definitions & values for multiple such currencies. Instead, we want to ground the (dis)value of other values in their relationships to extreme suffering, which everyone already finds terminally motivating in their own case (unlike x-risk reduction or positivity-production, it is worth noting). I wouldn’t agree to any theory where extreme suffering can be outweighed by enough positivity elsewhere, because such outweighing “does not compute”: positivity can outweigh suffering only by preventing even more suffering, not by itself. A positive fantasy of infinite utility is no antidote to suffering, because the aggregate terminal positivity physically exists only as a fantasy [an imaginary spreadsheet cell] that never interacts with our terminal suffering.
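To spell out the spreadsheet-cell metaphor, here is a toy ledger (all numbers invented): positivity moves the monist bottom line only through the suffering it causally prevents, while a standalone “terminal positivity” total is a cell that the suffering total never reads, however large it grows.

```python
# Toy ledger for "outweighing does not compute" (invented numbers). Positivity
# affects the monist bottom line only via the suffering it causally prevents;
# the "terminal positivity" cell is never read by the suffering total, no
# matter how large it grows.

events = [
    # (description, suffering_caused, suffering_prevented_downstream, felt_positivity)
    ("surgery without anaesthesia", 100.0, 0.0,  0.0),
    ("ecstatic retreat on Mars",      0.0, 2.0, 50.0),  # small calming spillover
    ("another ecstatic retreat",      0.0, 2.0, 50.0),
]

net_suffering = sum(caused - prevented for _, caused, prevented, _ in events)
terminal_positivity = sum(pos for *_, pos in events)

print(f"net suffering on the ledger: {net_suffering}")        # 96.0, barely moved
print(f"'terminal positivity' cell:  {terminal_positivity}")  # 100.0, never enters the line above
```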
Without monism, how do we agree on the pluralist numerical values from behind the veil, where we ourselves could end up undergoing the cluster headache-equivalent suffering (i.e., while simulating an impartial compassion)? Are we to trust that cluster headaches aren’t so bad, when they’re outweighed according to a formula that some (but not all) people agreed on?