Working on psychological questions related to minimalist axiologies and on reasons to be careful about the practical implications of abstract formalisms.
I have MA and BA degrees in psychology, with minors in mathematics, cognitive science, statistics, computer science, and analytic philosophy.
Given finite resources, multiple terminal values will always lead to irreconcilable conflicts.
(1) Do you hold the minimization of suffering to be a terminal value, pursued for its own sake?
(2) Do you also hold that there is some positive maximand that does not ultimately derive its value from its instrumental usefulness for minimizing suffering?
Anyone who answers yes to both (1) and (2) is not a unified entity playing one infinite game with one common currency (a single infinite optimand), but contains at least two infinite optimands. With limited resources, we will never fully satisfy even one terminal value (e.g., never guarantee that it is minimized or maximized throughout space & time), so the two optimands must compete for every marginal resource.
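To make the structure of that conflict explicit, here is a toy formalization (the symbols below are my own placeholders, not standard notation):

```latex
% Finite budget R split between the two terminal values:
%   r_1 + r_2 = R, with u_1 (suffering minimized) and u_2 (the
%   positive maximand) strictly increasing and unbounded.
% Pluralism must optimize some aggregate
\[
  \max_{r_1 + r_2 = R} \; f\bigl(u_1(r_1),\, u_2(r_2)\bigr),
\]
% but neither terminal value supplies the exchange rate f; it is an
% extra posit. Monism has a single objective, where "positive"
% spending h(r_2) counts only via its instrumental effect on u_1:
\[
  \max_{r_1 + r_2 = R} \; u_1\bigl(r_1 + h(r_2)\bigr).
\]
```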
I’ve been working on a compassion-centric motivation unification (as an improvement on existing formulations of negative utilitarianism), because I find it the most consistent & psychologically realistic theory that solves all of these theoretical problems without ultimately unacceptable implications.

To arrive at practical answers from thought experiments, we want to account for all possibly relevant externalities of our scenarios. For example, the practical situations of {killing children} vs. {not having children} do not have “roughly the same outcome” in any scenario I can think of, due to all kinds of inescapable interdependencies. Similarly, compassion for all sentient beings does not necessarily imply attempting to end Earth (do you see Buddhists researching that?): technocivilization might reach more & more exoplanets the longer it survives, or at least want to remain to ensure that suffering won’t re-evolve here.
To further explore intuitions about terminal value monism vs. terminal value pluralism, could you order the following motivations by your certainty of holding them as absolutes?
(A) You want to minimize suffering moments.
(B) You want to minimize the risk of extinction (i.e., prolong the survival of life/consciousness).
(C) You want to maximize happy moments.
I sometimes imagine I’m a Dyson sphere of near-infinite resources splitting my budget between these goals. I find that B is often instrumental for A on a cosmic scale, but that C derives its budget entirely from the degree to which it helps A: equanimity, resilience, growth, learning, awe, gratitude, and other phenomena of positive psychology are wonderful tools for compassionate actors to minimize suffering, but I would not copy/boost them beyond the degree to which they serve that minimization. In other words, I could not tell my Exoplanet Rescue Mission Department why I wanted to spend their resources on creating more ecstatic meditators on Mars, because A is only interested in instruments for minimizing suffering.

Besides, I wouldn’t undergo surgery without anaesthesia for any number of meditators on Mars, because they wouldn’t help my suffering. In a world where anaesthetics opportunity-cost a monastery on Mars, what would you do? Is “outweighing” between terminal values an actual physical computation taking place anywhere outside an ethicist’s head, or a fiction?
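To illustrate the single-currency point, here is a toy allocator for the Dyson-sphere budget; all project names, numbers, and the diminishing-returns curve are illustrative placeholders, a sketch rather than a real model:

```python
# Toy sketch of the Dyson-sphere budget split under terminal value
# monism. Every name and number here is an illustrative placeholder.

projects = {
    # name: (suffering reduced by the first budget unit, happiness per unit)
    "anaesthetics_supply":       (9.0, 0.1),
    "extinction_risk_reduction": (4.0, 0.0),  # B, instrumental for A
    "mars_monastery":            (0.2, 8.0),  # C, happiness-heavy
}

def marginal_suffering_reduced(base, spent):
    # Diminishing returns: each extra unit of budget reduces less suffering.
    return base / (1.0 + spent)

def allocate(budget_units):
    spend = {name: 0 for name in projects}
    for _ in range(budget_units):
        # One optimand: fund whichever project's *next* unit reduces
        # the most suffering. The happiness column never enters this
        # comparison, so no "outweighing" between two currencies is
        # computed anywhere.
        best = max(
            projects,
            key=lambda name: marginal_suffering_reduced(
                projects[name][0], spend[name]
            ),
        )
        spend[best] += 1
    return spend

print(allocate(20))
# The monastery gets funded only if, at the margin, it beats the
# alternatives at reducing suffering -- i.e., C's budget derives
# entirely from its usefulness for A.
```

A pluralist allocator would need a second column in its objective plus an exchange rate between the columns, and that exchange rate is exactly the “outweighing” computation the last question asks about.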