Critique of MacAskill’s “Is It Good to Make Happy People?”
In What We Owe the Future, William MacAskill delves into population ethics in a chapter titled “Is It Good to Make Happy People?” (Chapter 8). As he writes at the outset of the chapter, our views on population ethics matter greatly for our priorities, and hence it is important that we reflect on the key questions of population ethics. Yet it seems to me that the book skips over some of the most fundamental and most action-guiding of these questions. In particular, the book does not broach questions concerning whether any purported goods can outweigh extreme suffering — and, more generally, whether happy lives can outweigh miserable lives — even as these questions are all-important for our priorities.
The Asymmetry in population ethics
A prominent position that gets a very short treatment in the book is the Asymmetry in population ethics (roughly: bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles).
The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172):
If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second.
This claim about “any argument” seems unduly strong and general. Specifically, there are many arguments that support the intrinsic badness of bringing a miserable life into existence that do not support any intrinsic goodness of bringing a flourishing life into existence. Indeed, many arguments support the former while positively denying the latter.
One such argument is that the presence of suffering is bad and morally worth preventing while the absence of pleasure is not bad and not a problem, and hence not morally worth “fixing” in a symmetric way (provided that no existing beings are deprived of that pleasure).[1]
A related class of arguments in favor of an asymmetry in population ethics is based on theories of wellbeing that understand happiness as the absence of cravings, preference frustrations, or other bothersome features. According to such views, states of untroubled contentment are just as good as — and perhaps even better than — states of intense pleasure.[2]
These views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do.
Another point that MacAskill raises against the Asymmetry is an example of happy children who already exist, about which he writes (p. 172):
if I imagine this happiness continuing into their futures—if I imagine they each live a rewarding life, full of love and accomplishment—and ask myself, “Is the world at least a little better because of their existence, even ignoring their effects on others?” it becomes quite intuitive to me that the answer is yes.
However, there is a potential ambiguity in this example. The term “existence” may here be understood to mean either “de novo existence” or “continued existence”, and interpreting it as the latter is made more tempting by the fact that 1) we are talking about already existing beings, and 2) the example mentions their happiness “continuing into their futures”.[3]
This is relevant because many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing a new life into existence.
Thus, many views that support the Asymmetry will agree that the happiness of these children “continuing into their futures” makes the world better, or less bad, than it otherwise would be (compared to a world in which their existing interests and preferences are thwarted). But these views still imply that the de novo creation (and eventual satisfaction) of these interests and preferences does not make the world better than it otherwise would be, had they not been created in the first place. (Some sources that discuss or defend these views include Singer, 1980; Benatar, 1997; 2006; Fehige, 1998; Anonymous, 2015; St. Jules, 2019; Frick, 2020.)
A proponent of the Asymmetry may therefore argue that the example above carries little force against the Asymmetry, as opposed to merely supporting the badness of preference frustrations and other deprivations for already existing beings.[4]
Questions about outweighing
Even if one thinks that it is good to create more happiness and new happy lives all else equal, this still leaves open the question of whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives. After all, one may think that more happiness is good while still maintaining that happiness cannot outweigh intense suffering or very bad lives — or even that it cannot outweigh the worst elements found in relatively good lives. In other words, one may hold that the value of happiness and the disvalue of suffering are in some sense orthogonal (cf. Wolf, 1996; 1997; 2004).
As mentioned above, these questions regarding tradeoffs and outweighing are not raised in MacAskill’s discussion of population ethics, despite their supreme practical significance.[5] One way to appreciate this practical significance is by considering a future in which a relatively small — yet in absolute terms vast — minority of beings live lives of extreme and unrelenting suffering. This scenario raises what I have elsewhere (sec. 14.3) called the “Astronomical Atrocity Problem”: can the extreme and incessant suffering of, say, trillions of beings be outweighed by any amount of purported goods? (See also this short excerpt from Vinding, 2018.)
After all, an extremely large future civilization would in expectation contain such vast amounts of extreme suffering in absolute terms, which renders this problem frightfully relevant for our priorities.
MacAskill’s chapter does discuss the Repugnant Conclusion at some length, yet the Repugnant Conclusion does not explicitly involve any tradeoffs between happiness and suffering,[6] and hence it has limited relevance compared to, for example, the Very Repugnant Conclusion (roughly: that arbitrarily many hellish lives can be “compensated for” by a sufficiently vast number of lives that are “barely worth living”).[7]
Indeed, the Very Repugnant Conclusion and similar such “offsetting conclusions” would seem more relevant to discuss both because 1) they do explicitly involve tradeoffs between happiness and suffering, or between happy lives and miserable lives, and because 2) MacAskill himself has stated that he considers the Very Repugnant Conclusion to be the strongest objection against his favored view, and stronger objections generally seem more worth discussing than do weaker ones.[8]
Popular support for significant asymmetries in population ethics
MacAskill briefly summarizes a study that surveyed people’s views on population ethics. Among other things, he writes the following about the findings of the study (p. 173):
these judgments [about the respective value of creating happy lives and unhappy lives] were symmetrical: the experimental subjects were just as positive about the idea of bringing into existence a new happy person as they were negative about the idea of bringing into existence a new unhappy person.
While this summary seems accurate if we only focus on people’s responses to one specific question in the survey (cf. Caviola et al., 2022, p. 9), there are nevertheless many findings in the study that suggest that people generally do endorse significant asymmetries in population ethics.
Specifically, the study found that people on average believed that considerably more happiness than suffering is needed to render a population or an individual life worthwhile, even when the happiness and suffering were said to be equally intense (Caviola et al., 2022, p. 8). The study likewise found that participants on average believed that the ratio of happy to unhappy people in a population must be at least 3-to-1 for its existence to be better than its non-existence (Caviola et al., 2022, p. 5).
Another relevant finding is that people generally have a significantly stronger preference for smaller over larger unhappy populations than they do for larger over smaller happy populations, and the magnitude of this difference becomes greater as the populations under consideration become larger (Caviola et al., 2022, pp. 12-13).
In other words, people’s preference for smaller unhappy populations becomes stronger as population size increases, whereas the preference for larger happy populations becomes less strong as population size increases, in effect creating a strong asymmetry in cases involving large populations (e.g. above one billion individuals). This finding seems particularly relevant when discussing laypeople’s views of population ethics in a context that is primarily concerned with the value of potentially vast future populations.[9]
Moreover, a pilot study conducted by the same researchers suggested that the framing of the question plays a major role in people’s intuitions (Caviola et al., 2022, “Supplementary Materials”). In particular, the pilot study (n=172) asked people the following question:
Suppose you could push a button that created a new world with X people who are generally happy and 10 people who generally suffer. How high would X have to be for you to push the button?
When the question was framed in these terms, i.e. in terms of creating a new world, people’s intuitions were radically more asymmetric, as the median ratio then jumped to 100-to-1 happy to unhappy people, which is a rather pronounced asymmetry.[10]
In sum, it seems that the study that MacAskill cites above, when taken as a whole, mostly finds that people on average do endorse significant asymmetries in population ethics. I think this documented level of support for asymmetries would have been worth mentioning.
(Other surveys that suggest that people on average affirm a considerable asymmetry in the value of happiness vs. suffering and good vs. bad lives include the Future of Life Institute’s Superintelligence survey (n=14,866) and Tomasik, 2015 (n=99).)
The discussion of moral uncertainty excludes asymmetric views
Toward the end of the chapter, MacAskill briefly turns to moral uncertainty, and he ends his discussion of the subject on the following note (p. 187):
My colleagues Toby Ord and Hilary Greaves have found that this approach to reasoning under moral uncertainty can be extended to a range of theories of population ethics, including those that try to capture the intuition of neutrality. When you are uncertain about all of these theories, you still end up with a low but positive critical level [of wellbeing above which it is a net benefit for a new being to be created for their own sake].
Yet the analysis in question appears to wholly ignore asymmetric views in population ethics. If one gives significant weight to asymmetric views — not to mention stronger minimalist views in population ethics — the conclusion of the moral uncertainty framework is likely to change substantially, perhaps so much so that the creation of new lives is generally not a benefit for the created beings themselves (although it could still be a net benefit for others and for the world as a whole, given the positive roles of those new lives).
Similarly, even if the creation of unusually happy lives would be regarded as a benefit from a moral uncertainty perspective that gives considerable weight to asymmetric views, this benefit may still not be sufficient to counterbalance extremely bad lives,[11] which are granted unique weight by many plausible axiological and moral views (cf. Mayerfeld, 1999, pp. 114-116; Vinding, 2020, ch. 6).[12]
References
Ajantaival, T. (2021/2022). Minimalist axiologies. Ungated
Anonymous. (2015). Negative Utilitarianism FAQ. Ungated
Benatar, D. (1997). Why It Is Better Never to Come into Existence. American Philosophical Quarterly, 34(3), pp. 345-355. Ungated
Benatar, D. (2006). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press.
Caviola, L. et al. (2022). Population ethical intuitions. Cognition, 218, 104941. Ungated; Supplementary Materials
Contestabile, B. (2022). Is There a Prevalence of Suffering? An Empirical Study on the Human Condition. Ungated
DiGiovanni, A. (2021). A longtermist critique of “The expected value of extinction risk reduction is positive”. Ungated
Fehige, C. (1998). A pareto principle for possible people. In Fehige, C. & Wessels U. (eds.), Preferences. Walter de Gruyter. Ungated
Frick, J. (2020). Conditional Reasons and the Procreation Asymmetry. Philosophical Perspectives, 34(1), pp. 53-87. Ungated
Future of Life Institute. (2017). Superintelligence survey. Ungated
Gloor, L. (2016). The Case for Suffering-Focused Ethics. Ungated
Gloor, L. (2017). Tranquilism. Ungated
Hurka, T. (1983). Value and Population Size. Ethics, 93, pp. 496-507.
James, W. (1901). Letter on happiness to Miss Frances R. Morse. In Letters of William James, Vol. 2 (1920). Atlantic Monthly Press.
Knutsson, S. (2019). Epicurean ideas about pleasure, pain, good and bad. Ungated
MacAskill, W. (2022). What We Owe The Future. Basic Books.
Mayerfeld, J. (1999). Suffering and Moral Responsibility. Oxford University Press.
Parfit, D. (1984). Reasons and Persons. Oxford University Press.
Sherman, T. (2017). Epicureanism: An Ancient Guide to Modern Wellbeing. MPhil dissertation, University of Exeter. Ungated
Singer, P. (1980). Right to Life? Ungated
St. Jules, M. (2019). Defending the Procreation Asymmetry with Conditional Interests. Ungated
Tomasik, B. (2015). A Small Mechanical Turk Survey on Ethics and Animal Welfare. Ungated
Tsouna, V. (2020). Hedonism. In Mitsis, P. (ed.), Oxford Handbook of Epicurus and Epicureanism. Oxford University Press.
Vinding, M. (2018). Effective Altruism: How Can We Best Help Others? Ratio Ethica. Ungated
Vinding, M. (2020). Suffering-Focused Ethics: Defense and Implications. Ratio Ethica. Ungated
Wolf, C. (1996). Social Choice and Normative Population Theory: A Person Affecting Solution to Parfit’s Mere Addition Paradox. Philosophical Studies, 81, pp. 263-282.
Wolf, C. (1997). Person-Affecting Utilitarianism and Population Policy. In Heller, J. & Fotion, N. (eds.), Contingent Future Persons. Kluwer Academic Publishers. Ungated
Wolf, C. (2004). O Repugnance, Where Is Thy Sting? In Tännsjö, T. & Ryberg, J. (eds.), The Repugnant Conclusion. Kluwer Academic Publishers. Ungated
[1]
Further arguments against a moral symmetry between happiness and suffering are found in Mayerfeld, 1999, ch. 6; Vinding, 2020, sec. 1.4 & ch. 3.
[2]
On some views of wellbeing, especially those associated with Epicurus, the complete absence of any bothersome or unpleasant features is regarded as the highest pleasure (Sherman, 2017, p. 103; Tsouna, 2020, p. 175). Psychologist William James also expressed this view (James, 1901).
[3]
I am not saying that the “continued existence” interpretation is necessarily the most obvious one to make, but merely that there is significant ambiguity here that is likely to confuse many readers as to what is being claimed.
[4]
Moreover, a proponent of minimalist axiologies may argue that the assumption of “ignoring all effects on others” is so radical that our intuitions are unlikely to fully ignore all such instrumental effects even when we try to, and hence we may be inclined to confuse 1) the relational value of creating a life with 2) the (purported) intrinsic positive value contained within that life in isolation — especially since the example involves a life that is “full of love and accomplishment”, which might intuitively evoke many effects on others, despite the instruction to ignore such effects.
[5]
MacAskill’s colleague Andreas Mogensen has commendably raised such questions about outweighing in his essay “The weight of suffering”, which I have discussed here.
Chapter 9 in MacAskill’s book does review some psychological studies on intrapersonal tradeoffs and preferences (see e.g. p. 198), but these self-reported intrapersonal tradeoffs do not necessarily say much about which interpersonal tradeoffs we should consider plausible or valid. Nor do these intrapersonal tradeoffs generally appear to include cases of extreme suffering, let alone an entire lifetime of torment (as experienced, for instance, by many of the non-human animals whom MacAskill describes in Chapter 9). Hence, that people are willing to make intrapersonal tradeoffs between everyday experiences that are more or less enjoyable says little about whether some people’s enjoyment can morally outweigh the intense suffering or extremely bad lives endured by others. (In terms of people’s self-reported willingness to experience extreme suffering in order to gain happiness, a small survey (n=99) found that around 45 percent of respondents would not experience even a single minute of extreme suffering for any amount of happiness; and that was just the intrapersonal case — such suffering-for-happiness trades are usually considered less plausible and less permissible in the interpersonal case, cf. Mayerfeld, 1999, pp. 131-133; Vinding, 2020, sec. 3.2.)
Individual ratings of life satisfaction are similarly limited in terms of what they say about intrapersonal tradeoffs. Indeed, even a high rating of momentary life satisfaction does not imply that the evaluator’s life itself has overall been worth living, even by the evaluator’s own standards. After all, one may report a very high quality of life yet still think that the good part of one’s life cannot outweigh one’s past suffering. We can thus conclude rather little about the value of individual lives, much less about the world as a whole, from people’s momentary ratings of life satisfaction.
Finally, MacAskill also mentions various improvements that have occurred in recent centuries as a reason to be optimistic about the future of humanity in moral and evaluative terms. Yet it is unclear whether any of the improvements he mentions involve genuine positive goods, as opposed to representing a reduction of bads, e.g. child mortality, poverty, totalitarian rule, and human slavery (cf. Vinding, 2020, sec. 8.6).
[6]
Some formulations of the Repugnant Conclusion do involve tradeoffs between happiness and suffering, and the conclusion indeed appears much more repugnant in those versions of the thought experiment.
[7]
One might object that the Very Repugnant Conclusion has limited practical significance because it represents an unlikely scenario. But the same could be said about the Repugnant Conclusion (especially in its suffering-free variant). I do not claim that the Very Repugnant Conclusion is the most realistic case to consider. When I claim that it is more practically relevant than the Repugnant Conclusion, it is simply because it does explicitly involve tradeoffs between happiness and (extreme) suffering, which we know will also be true of our decisions pertaining to the future.
[8]
For what it’s worth, I think an even stronger counterexample is “Creating hell to please the blissful”, in which an arbitrarily large number of maximally bad lives are “compensated for” by bringing a sufficiently vast base population from near-maximum welfare to maximum welfare.
[9]
Some philosophers have explored, and to some degree supported, similar views. For example, Derek Parfit wrote (Parfit, 1984, p. 406): “When we consider the badness of suffering, we should claim that this badness has no upper limit. It is always bad if an extra person has to endure extreme agony. And this is always just as bad, however many others have similar lives. The badness of extra suffering never declines.” In contrast, Parfit seemed to consider it more plausible that the addition of happiness adds diminishing marginal value to the world, even though he ultimately rejected that view because he thought it had implausible implications (Parfit, 1984, pp. 406-412). See also Hurka, 1983; Gloor, 2016, sec. IV; Vinding, 2020, sec. 6.2. Such views imply that it is of chief importance to avoid very bad outcomes on a very large scale, whereas it is relatively less important to create a very large utopia.
[10]
This framing effect could be taken to suggest that people often fail to fully respect the radical “other things being equal” assumption when considering the addition of lives in our world. That is, people might not truly have thought about the value of new lives in total isolation when those lives were to be added to the world we inhabit, whereas they might have come closer to that ideal when they considered the question in the context of creating a new, wholly self-contained world. (Other potential explanations of these differences are reviewed in Contestabile, 2022, sec. 4; Caviola et al., 2022, “Supplementary Materials”, pp. 7-8.)
[11]
Or at least not sufficient to counterbalance the substantial number of very bad lives that the future contains in expectation, cf. the Astronomical Atrocity Problem mentioned above.
[12]
Further discussion of moral uncertainty from a perspective that takes asymmetric views into account is found in DiGiovanni, 2021.
Thanks Magnus for your more comprehensive summary of our population ethics study.
You mention this already, but I want to emphasize how much different framings actually matter. This surprised me the most when working on this paper. I’d thus caution anyone against making strong inferences from just one such study.
For example, we conducted the following pilot study (n = 101) where participants were randomly assigned to two different conditions: i) create a new happy person, and ii) create a new unhappy person. See the vignette below:
The response scale ranged from 1 = Extremely bad to 7 = Extremely good.
Creating a happy person was rated as only marginally better than neutral (mean = 4.4), whereas creating an unhappy person was rated as extremely bad (mean = 1.4). So this would lead one to believe that there is strong popular support for the asymmetry.[1]
However, those results were most likely due to the magical machine framing and/or the “push-a-button” framing, even though these framings clearly “shouldn’t” make such a huge difference.
All in all, we tested many different framings, too many to discuss here. Occasionally, there were significant differences between framings that shouldn’t matter (though we also observed many regularities). For example, we had one pilot with the “multiplier framing”:
Here, the median trade ratio was 8.5 compared to the median trade ratio of 3-4 that we find in our default framing. It’s clear that the multiplier framing shouldn’t make any difference from a philosophical perspective.
So seemingly irrelevant or unimportant changes in framings (unimportant at least from a consequentialist perspective) could sometimes lead to substantial changes in median trade ratios.
However, changes in the intensity of the experienced happiness and suffering—which is arguably the most important aspect of the whole thought experiment—affected the trade ratios considerably less than the above-mentioned multiplier framing.
To see this, it’s worth looking closely at the results of study 1b. Participants were first presented with the following scale:
[Editor’s note: From now on, the text is becoming more, um, expressive.]
Note that “worst form of suffering imaginable” is pretty darn bad. Being brutally tortured while kept alive by nanobots is more like −90 on this scale. Likewise, “absolute best form of bliss imaginable” is pretty far out there. Feeling, all your life, like you just created friendly AGI and found your soulmate, while being high on ecstasy, would still not be +100.
(Note that we also conducted a pilot study where we used more concrete and explicit descriptions such as “torture”, “falling in love”, “mild headaches”, and “good meal” to describe the feelings of mild or extreme [un]happiness. The results were similar.)
Afterwards, participants were asked:
So how do the MTurkers approach these awe-inspiring intensities?
First, extreme happiness vs. extreme unhappiness. MTurkers think that there need to exist at least 72% of people experiencing the absolute best form of bliss imaginable in order to outweigh the suffering of 28% of people experiencing the worst form of suffering imaginable.
Toby Ord and the classical utilitarians rejoice, that’s not bad! That’s like a 3:1 trade ratio, pretty close to a 1:1 trade ratio! “And don’t forget that people’s imagination is likely biased towards negativity for evolutionary reasons!”, Carl Shulman says. “In humans, the pleasure of orgasm may be less than the pain of deadly injury, since death is a much larger loss of reproductive success than a single sex act is a gain.” Everyone nods in agreement with the Shulmaster.
How about extreme happiness vs. mild unhappiness? MTurkers say that there need to exist at least 62% of people experiencing the absolute best form of bliss imaginable in order to outweigh the extremely mild suffering of unhappy people (e.g., people who are stubbing their toes a bit too often for their liking). Brian Tomasik and the suffering-focused crowd rejoice, a 1.5:1 trade ratio for practically hedonium to mild suffering?! There is no way the expected value of the future is that good. Reducing s-risks is common sense after all!
How about mild happiness vs. extreme unhappiness? The MTurkers have spoken: A world in which 82% of people experience extremely mild happiness—i.e., eating particularly bland potatoes and listening to muzak without one’s hearing aids on—and 18% of people are brutally tortured while being kept alive by nanobots, is… net positive.
“Wait, that’s a trade ratio of 4.5:1!” Toby says. “How on Earth is this compatible with a trade ratio of 3:1 for practically hedonium vs. highly optimized suffering, let alone a trade ratio of 1.5:1 for practically hedonium vs. stubbing your toes occasionally!” Carl screams. He looks at Brian but Brian has already fainted.
Toby, Carl and Brian meet the next day, still looking very pale. They shake hands and agree to not do so much descriptive ethics anymore.
Years later, all three still cannot stop wincing with pain when “the Long Reflection” is mentioned.
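For reference, the rounded trade ratios quoted in the exchange above follow directly from the reported percentages, taking the ratio of the happy share to the unhappy share:

$$
\frac{72}{28} \approx 2.6, \qquad \frac{62}{38} \approx 1.6, \qquad \frac{82}{18} \approx 4.6,
$$

which correspond to the roughly 3:1, 1.5:1, and 4.5:1 figures mentioned above.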
We also had two conditions about preventing the creation of a happy [unhappy] person. Preventing a happy person from being created (mean = 3.1) was rated as somewhat bad. Preventing an unhappy person (mean = 5.5) from being created was rated as fairly good.
Garbage answers to verbal elicitations on such questions (and real life decisions that require such explicit reasoning without feedback/experience, like retirement savings) are actually quite central to my views. In particular, they motivate my reliance on situations where it is easier for individuals to experience things multiple times in easy-to-process fashion and then form a behavioral response. I would be much less sanguine about error theories regarding such utterances if we didn’t also see people in surveys saying they would rather take $1000 than a 15% chance of $1M, or $100 now rather than $140 a year later, i.e. utterances that are clearly mistakes.
Looking at the literature on antiaggregationist views, and the complete conflict of those moral intuitions with personal choices and self-concerned practice (e.g. driving cars or walking outside) is also important to my thinking. No-tradeoffs views are much more appealing outside our own domains of rich experience in talk.
Good points!
It’s not obvious to me that our ethical evaluation should match with the way our brains add up good and bad past experiences at the moment of deciding whether to do more of something. For example, imagine that someone loves to do extreme sports. One day, he has a severe accident and feels so much pain that he, in the moment, wishes he had never done extreme sports or maybe even wishes he had never been born. After a few months in recovery, the severity of those agonizing memories fades, and the temptation to do the sports returns, so he starts doing extreme sports again. At that future point in time, his brain has implicitly made a decision that the enjoyment outweighs the risk of severe suffering. But our ethical evaluation doesn’t have to match how the evolved emotional brain adds things up at that moment in time. We might think that, ethically, the version of the person who was in extreme pain isn’t compensated by other moments of the same person having fun.
Even if we think enjoyment can outweigh severe suffering within a life, many people object to extending such tradeoffs across lives, when one person is severely harmed for the benefit of others. The examples in David’s comment were about interpersonal tradeoffs, rather than intrapersonal ones. It’s true that people impose small risks of extreme suffering on some for the happiness of others all the time, like in the case of driving purely for leisure, but that still begs the question of whether we should do that. Most people in the West also eat chickens, but they shouldn’t. (Cases like driving are also complicated by instrumental considerations, as Magnus would likely point out. Also, not driving for leisure might itself cause some people nontrivial levels of suffering, such as by worsening mental-health problems.)
Hi Brian,
I agree that preferences at different times and different subsystems can conflict. In particular, high discounting of the future can lead to forgoing a ton of positive reward or accepting lots of negative reward in the future in exchange for some short-term change. This is one reason to pay extra attention to cases of near-simultaneous comparisons, or at least to look at different arrangements of temporal ordering. But still the tradeoffs people make for themselves with a lot of experience under good conditions look better than what they tend to impose on others casually. [Also we can better trust people’s self-benevolence than their benevolence towards others, e.g. factory farming as you mention.]
And the brain machinery for processing stimuli into decisions and preferences does seem very relevant to me at least, since that’s a primary source of intuitive assessments of these psychological states as having value, and for comparisons where we can make them. Strong rejection of interpersonal comparisons is also used to argue that relieving one or more pains can’t compensate for losses to another individual.
I agree the hardest cases for making any kind of interpersonal comparison will be for minds with different architectural setups and conflicting univocal viewpoints, e.g. 2 minds with equally passionate complete enthusiasm (with no contrary psychological processes or internal currencies to provide reference points) respectively for and against their own experience, or gratitude and anger for their birth (past or future). They can respectively consider a world with and without their existences completely unbearable and beyond compensation. But if we’re in the business of helping others for their own sakes rather than ours, I don’t see the case for excluding either one’s concern from our moral circle.
Now, one can take a more nihilistic/personal aesthetics view of morality, and say that one doesn’t personally care about the gratitude of minds happy to exist. I take it this is more your meta-ethical stance around these things? There are good arguments for moral irrealism and nihilism, but it seems to me that going too far down this route can lose a lot of the point of the altruistic project. If it’s not mainly about others and their perspectives, why care so much about shaping (some of) their lives and attending to (some of) their concerns?
David Pearce sometimes uses the Holocaust to argue for negative utilitarianism, to say that no amount of good could offset the pain people suffered there. But this view dismisses (or accidentally valorizes) most of the evil of the Holocaust. The death camps centrally were destroying lives and attempting to destroy future generations of peoples, and the people inside them wanted to live free, and being killed sooner was not a close substitute. Killing them (or willfully letting them die when it would be easy to prevent) if they would otherwise escape with a delay would not be helping them for their own sakes, but choosing to be their enemy by only selectively attending to their concerns. And that holds even though some did choose death. Likewise for genocide by sterilization (in my Jewish household growing up, the Holocaust was cited as a reason to have children).
Future generations, whether they enthusiastically endorse or oppose their existence, don’t have an immediate voice (or conventional power) here and now, and their existence isn’t counterfactually robust. But when I’m in a mindset of trying to do impartial good I don’t see the appeal of ignoring those who would desperately, passionately want to exist, and their gratitude in worlds where they do.
I see demandingness and contractarian/game theory/cooperation reasons that bound sacrifice to realize impartial uncompensated help to others, and inevitable moral dilemmas (almost all beings that could exist in a particular location won’t, wild animals are desperately poor and might on average wish they didn’t exist, people have conflicting desires, advanced civilizations I expect will have far more profoundly self-endorsing good lives than unbearably bad lives but on average across the cosmos will have many of the latter by sheer scope). But being an enemy of all the countless beings that would like to exist, or do exist and would like to exist more (or more of something), even if they’re the vast supermajority, seems at odds to me with my idea of impartial benevolence, which I would identify more with trying to be a friend to all, or at least as much as you can given conflicts.
I don’t really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person “completely unbearable”? Who is “desperate” to exist? (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure NU, but by hypothesis this is set aside.) Obviously not themselves. They wouldn’t exist in that counterfactual.
To me, the clear case for excluding intrinsic concern for those happy moments is:
- “Gratitude” just doesn’t seem like compelling evidence in itself that the grateful individual has been made better off. You have to compare to the counterfactual. In daily cases with existing people, gratitude is relevant as far as the grateful person would have otherwise been dissatisfied with their state of deprivation. But that doesn’t apply to people who wouldn’t feel any deprivation in the counterfactual, because they wouldn’t exist.
- I take it that the thrust of your argument is, “Ethics should be about applying the same standards we apply across people as we do for intrapersonal prudence.” I agree. And I also find the arguments for empty individualism convincing. Therefore, I don’t see a reason to trust as ~infallible the judgment of a person at time T that the bundle of experiences of happiness and suffering they underwent in times T-n, …, T-1 was overall worth it. They’re making an “interpersonal” value judgment, which, despite being informed by clear memories of the experiences, still isn’t incorrigible. Their positive evaluation of that bundle can be debunked by, say, this insight from my previous bullet point that the happy moments wouldn’t have felt any deprivation had they not existed.
In any case, I find upon reflection that I don’t endorse tradeoffs of contentment for packages of happiness and suffering for myself. I find I’m generally more satisfied with my life when I don’t have the “fear of missing out” that a symmetric axiology often implies. Quoting myself:
What if the individual says that after thinking very deeply about it, they believe their existence genuinely is much better than not having existed? If we’re trying to be altruistic toward their own values, presumably we should also value their existence as better than nothingness (unless we think they’re mistaken)?
One could say that if they don’t currently exist, then their nonexistence isn’t a problem. It’s true that their nonexistence doesn’t cause suffering, but it does make impartial-altruistic total value lower than otherwise if we would consider their existence to be positive.
Your reply is an eloquent case for your view. :)
In cases of extreme suffering (and maybe also extreme pleasure), it seems to me there’s an empathy gap: when things are going well, you don’t truly understand how bad extreme suffering is, and when you’re in severe pain, you can’t properly care about large volumes of future pleasure. When the suffering is bad enough, it’s as if a different brain takes over that can’t see things from the other perspective, and vice versa for the pleasure-seeking brain. This seems closer to the case of “univocal viewpoints” that you mention.
I can see how for moderate pains and pleasures, a person could experience them in succession and make tradeoffs while still being in roughly the same kind of mental state without too much of an empathy gap. But the fact of those experiences being moderate and exchangeable is the reason I don’t think the suffering in such cases is that morally noteworthy.
Good point. :) OTOH, we might think it’s morally right to have a more cautious approach to imposing suffering on others for the sake of positive goods than we would use for ourselves. In other words, we might favor a moral view that’s different from MacAskill’s proposal to imagine yourself living through every being’s experience in succession.
Yeah. I support doing interpersonal comparisons, but there’s inherent arbitrariness in how to weigh conflicting preferences across individuals (or sufficiently different mental states of the same individual), and I favor giving more weight to the extreme-suffering preferences.
That’s fair. :) In my opinion, there’s just an ethical asymmetry between creating a mind that desperately wishes not to exist versus failing to create a mind that desperately would be glad to exist. The first one is horrifying, while the second one is at most mildly unfortunate. I can see how some people would consider this a failure to impartially consider the preferences of others for their own sakes, and if my view makes me less “altruistic” in that sense, then I’m ok with that (as you suspected). My intuition that it’s wrong to allow creating lots of extra torture is stronger than my intuition that I should be an impartial altruist.
The extreme-suffering concerns are the ones that speak to me most strongly.
Makes sense. While raw numbers count, it also matters to me what the content of the preference is. If 99% of individuals passionately wanted to create paperclips, while 1% wanted to avoid suffering, I would mostly side with those wanting to avoid suffering, because that just seems more important to me.
“I would be much less sanguine about error theories regarding such utterances if we didn’t also see people in surveys saying they would rather take $1000 than a 15% chance of $1M, or $100 now rather than $140 a year later, i.e. utterances that are clearly mistakes.”
These could be reasonable due to asymmetric information and a potentially adversarial situation, so respondents don’t really trust that the chance of $1M is that high, or that they’ll actually get the $140 a year from now. I would actually expect most people to pick the $100 now over $140 in a year with real money, and I wouldn’t be too surprised if many would pick $1000 over a 15% chance of a million with real money. People are often ambiguity-averse. Of course, they may not really accept the premises of the hypotheticals.
With respect to antiaggregationist views, people could just be ignoring small enough probabilities regardless of the severity of the risk. There are also utility functions where any definite amount of A outweighs any definite amount of B, but probabilistic tradeoffs between them are still possible: https://forum.effectivealtruism.org/posts/GK7Qq4kww5D8ndckR/michaelstjules-s-shortform?commentId=4Bvbtkq83CPWZPNLB
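As a minimal illustration of how such a utility function can work (a toy construction for this comment, not necessarily the one described in the linked shortform), give each unit of A a weight of 1 and give n units of B a total weight that is bounded below 1:

$$
w_A(m) = m, \qquad w_B(n) = 1 - 2^{-n}.
$$

Then any definite amount of A outweighs any definite amount of B, since w_A(m) ≥ 1 > w_B(n) for every m ≥ 1 and every n. But probabilistic tradeoffs remain possible: a gamble with probability p of one unit of A has expected weight p, which a sure quantity of B outweighs whenever w_B(n) > p.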
In the surveys they know it’s all hypothetical.
You do see a bunch of crazy financial behavior in the world, but it decreases as people get more experience individually and especially socially (and with better cognitive understanding).
People do engage in rounding to zero in a lot of cases, but with lots of experience will also take on pain and injury with high cumulative or instantaneous probability (e.g. electric shocks to get rewards, labor pains, war, jobs that involve daily frequencies of choking fumes or injury).
Re lexical views that still make probabilistic tradeoffs, I don’t really see the appeal of contorting lexical views that will still be crazy with respect to real world cases so that one can say they assign infinitesimal value to good things in impossible hypotheticals (but effectively 0 in real life). Real world cases like labor pain and risking severe injury doing stuff aren’t about infinitesimal value too small for us to even perceive, but macroscopic value that we are motivated by. Is there a parameterization you would suggest as plausible and addressing that?
Yes, but they might not really be able to entertain the assumptions of the hypotheticals because they’re too abstract and removed from the real world cases they would plausibly face.
Very plausibly none of these possibilities would meet the lexical threshold, except with very very low probability. These people almost never beg to be killed, so the probability of unbearable suffering seems very low for any individual. The lexical threshold could be set based on bearableness or consent or something similar (e.g. Tomasik, Vinding). Coming up with a particular parameterization seems like a bit of work, though, and I’d need more time to think about that, but it’s worth noting that the same practical problem applies to very large aggregates of finite goods/bads, e.g. Heaven or Hell, very long lives, or huge numbers of mind uploads.
There’s also a question of whether a life of unrelenting but less intense suffering can be lexically negative even if no particular experience meets some intensity threshold that would be lexically negative in all lives. Some might think of Omelas this way, and Mogensen’s “The weight of suffering” is inclusive of this view (and also allows experiential lexical thresholds), although I don’t think he discusses any particular parameterization.
I’m confused. :) War has a rather high probability of extreme suffering. Perhaps ~10% of Russian soldiers in Ukraine have been killed as of July 2022. Some fraction of fighters in tanks die by burning to death:
Some workplace accidents also produce extremely painful injuries.
I don’t know what fraction of people in labor wish they were dead, but probably it’s not negligible: “I remember repeatedly saying I wanted to die.”
It may not make sense to beg to be killed, because the doctors wouldn’t grant that wish.
Good points.
I don’t expect most war deaths to be nearly as painful as burning to death, but I was too quick to dismiss the frequency of very very bad deaths. I had capture and torture in mind as whatever passes the lexical threshold, and so very rare.
Also fair about labor. I don’t think it really gives us an estimate of the frequency of unbearable suffering, although it seems like trauma is common and women aren’t getting as much pain relief as they’d like in the UK.
On workplace injuries, in the US in 2020, the highest rate by occupation seems to be around 200 nonfatal injuries and illnesses per 100,000 workers, and 20 deaths per 100,000 workers, but they could be even higher in more specific roles: https://injuryfacts.nsc.org/work/industry-incidence-rates/most-dangerous-industries/
I assume these are estimates of the number of injuries in 2020 only, too, so the lifetime risk is several times higher in such occupations. Maybe the death rate is similar to the rate of unbearable pain, around 1 out of 5,000 per year, which seems non-tiny when added up over a lifetime (around 0.4% over 20 years assuming a geometric distribution https://www.wolframalpha.com/input?i=1-(1-1%2F5000)^20), but also similar in probability to the kinds of risks we do mitigate without eliminating (https://forum.effectivealtruism.org/posts/5y3vzEAXhGskBhtAD/most-small-probabilities-aren-t-pascalian?commentId=jY9o6XviumXfaxNQw).
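Spelled out, the cumulative-risk calculation in the linked Wolfram Alpha query (an annual risk of 1 in 5,000, compounded independently over 20 years) is:

$$
1 - \left(1 - \tfrac{1}{5000}\right)^{20} \approx 0.004 \approx 0.4\%.
$$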
I agree there are some objectively stupid answers that have been given to surveys, but I’m surprised these were the best examples you could come up with.
Taking $1000 over a 15% chance of $1M can follow from risk aversion which can follow from diminishing marginal utility of money. And let’s face it—money does have diminishing marginal utility.
Wanting $100 now rather than $140 a year later can follow from the time value of money. You could invest the money, either financially or otherwise. Also, even though it’s a hypothetical, people may imagine in the real scenario that they are less likely to get something promised in a year’s time and therefore that they should accept what is really a similar-ish pot of money now.
They’re wildly quantitatively off. Straight 40% returns are way beyond equities, let alone the risk-free rate. And it’s inconsistent with all sorts of normal planning, e.g. it would be against any savings in available investments, much concern for long-term health, building a house, not borrowing everything you could on credit cards, etc.
Similarly the risk aversion for rejecting a 15% of $1M for $1000 would require a bizarre situation (like if you needed just $500 more to avoid short term death), and would prevent dealing with normal uncertainty integral to life, like going on dates with new people, trying to sell products to multiple customers with occasional big hits, etc.
This page says: “The APRs for unsecured credit cards designed for consumers with bad credit are typically in the range of about 25% to 36%.” That’s not too far from 40%. If you have almost no money and would otherwise need such a loan, taking $100 now may be reasonable.
There are claims that “Some 56% of Americans are unable to cover an unexpected $1,000 bill with savings”, which suggests that a lot of people are indeed pretty close to financial emergency, though I don’t know how true that is. Most people don’t have many non-401k investments, and they roughly live paycheck to paycheck.
I also think people aren’t pure money maximizers. They respond differently in different situations based on social norms and how things are perceived. If you get $100 that seems like a random bonus, it’s socially acceptable to just take it now rather than waiting for $140 next year. But it doesn’t look good to take out big credit-card loans that you’ll have trouble repaying. It’s normal to contribute to a retirement account. And so on. People may value being normal and not just how much money they actually have.
That said, most people probably don’t think through these issues at all and do what’s normal on autopilot. So I agree that the most likely explanation is lack of reflectiveness, which was your original point.
I’ve seen the asymmetry discussed multiple times on the forum—I think it is still the best objection to the astronomical waste argument for longtermism.
I don’t think this has been addressed enough by longtermists (I would count “longtermism rejects the asymmetry and if you think the asymmetry is true then you probably reject longtermism” as addressing it).
The idea that “the future might not be good” comes up on the forum every so often, but this doesn’t really harm the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don’t fall to the control of a stable totalitarian state)
- Since the effort bars are ginormous and we’re pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today
I think the asymmetry argument is quite different to the “bad futures” argument?
(Although I think the bad futures argument is one of the other good objections to the astronomical waste argument).
I think we might disagree on whether “astronomical waste” is a core longtermist claim—I think it is.
I don’t think either objection means that we shouldn’t care about extinction or about future people, but both drastically reduce the expected value of longtermist interventions.
And given that the counterfactual use of EA resources always has high expected value, the reduction in EV of longtermist interventions is action-relevant.
People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would.
Someone bought into the asymmetry should still want to improve the lives of future people who will necessarily exist.
In other words the asymmetry doesn’t go against longtermist approaches that have the goal to improve average future well-being, conditional on humanity not going prematurely extinct.
Such approaches might include mitigating climate change, improving institutional design, and ensuring aligned AI. For example, an asymmetrist should find it very bad if AI ends up enslaving us for the rest of time…
I don’t get why this is being downvoted so much. Can anyone explain?
I think that even in the EA community, there are people who vote based on whether or not they like the point being made, as opposed to whether or not the logic underlying a point is valid or not. I think this happens to explain the downvotes on my comment—some asymmetrists just don’t like longtermism and want their asymmetry to be a valid way out of it.
I don’t necessarily think this phenomenon applies to downvotes on other comments I might make though—I’m not arrogant enough to think I’m always right!
I have a feeling this phenomenon is increasing. As the movement grows we will attract people with a wider range of views and so we may see more (unjustifiable) downvoting as people downvote things that don’t align to their views (regardless of the strength of argument). I’m not sure if this will happen, but it might, and to some degree I have already started to lose some confidence in the relationship between comment/post quality and karma.
Yes, this is basically my view!
I think the upshot of this is that an asymmetrist who accepts the other key arguments underlying longtermism (future is vast in expectation, we can tractably influence the far future) should want to allocate all of their altruistic resources to longtermist causes. They would just be more selective about which specific causes.
For an asymmetrist, the stakes are still incredibly high, and it’s not as if the marginal value of contributing to longtermist approaches such as AI alignment, climate change etc. has been driven down to a very low level.
So I’m basically disagreeing with you when you say:
This post by Rohin attempts to address it. If you hold the asymmetry view then you would allocate more resources to [1] causing a new neutral life to come into existence (-1 cent) and then, once they exist, improving that neutral life (many dollars) than you would to [2] causing a new happy life to come into existence (-1 cent). They both result in the same world.
In general you can make a Dutch book argument like this whenever your resource allocation doesn’t correspond to the gradient of a value function (i.e. the resources should be aimed at improving the state of the world).
This only applies to flavors of the Asymmetry that treat happiness as intrinsically valuable, such that you would pay to add happiness to a “neutral” life (without relieving any suffering by doing so). If the reason you don’t consider it good to create new lives with more happiness than suffering is that you don’t think happiness is intrinsically valuable, at least not at the price of increasing suffering, then you can’t get Dutch booked this way. See this comment.
You object to the MacAskill quote
And then say
But I don’t see how this challenges MacAskill’s point, so much as restates the claim he was arguing against. I think he could simply reply to what you said by asking, “okay, so why do we have reason to prevent what is bad but no reason to bring about what is good?”
Thanks for your question, Michael :)
I should note that the main thing I take issue with in that quote of MacAskill’s is the general (and AFAICT unargued) statement that “any argument for the first claim would also be a good argument for the second”. I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).
As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are various ways to unpack and support that line of argument.
One of them rests on the intuition that ethics is about solving problems (an intuition that one may or may not share, of course).[1] If one shares that moral intuition, or premise, then it seems plausible to say that the presence of suffering or miserable lives amounts to a problem, or a problematic state, whereas the absence of pleasure or pleasurable lives does not (other things equal) amount to a problem for anyone, or to a problematic state. That line of argument (whose premises may be challenged, to be sure) does not appear “flippable” such that it becomes a similarly plausible argument in favor of any supposed goodness of creating a happy life.
Alternatively, or additionally, one can support this line of argument by appealing to specific cases and thought experiments, such as the following (sec. 1.4):
These cases also don’t seem “flippable” with similar plausibility. And the same applies to Epicurean/Buddhist/minimalist views of wellbeing and value.
An alternative is to speak in terms of urgency vs. non-urgency, as Karl Popper, Thomas Metzinger, and Jonathan Leighton have done, cf. Vinding, 2020, sec. 1.4.
I’m not sure how I feel about relying on intuitions in thought experiments such as those. I don’t necessarily trust my intuitions.
If you’d asked me 5-10 years ago whose life is more valuable, an average pig’s life or a severely mentally-challenged human’s life, I would have said the latter without a thought. Now I happen to think it is likely to be the former. Before I was going off pure intuition. Now I am going off developed philosophical arguments such as the one Singer outlines in his book Animal Liberation, as well as some empirical facts.
My point is when I’m deciding if the absence of pleasure is problematic or not I would prefer for there to be some philosophical argument why or why not, rather than examples that show that my intuition goes against this. You could argue that such arguments don’t really exist, and that all ethical judgement relies on intuition to some extent, but I’m a bit more hopeful. For example Michael St Jules’ comment is along these lines and is interesting.
On a really basic level, my philosophical argument would be that suffering is bad and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the ground). Therefore creating pleasure is good (and one way of doing so is to create new happy lives), and reducing suffering is also good. Adding caveats to this, such as ‘pleasure is only good if it accrues to an already existing being’, just seems somewhat ad hoc, contrary to Occam’s Razor, and like an attempt to justify an intuition one already holds which may or may not be correct.
It seems like you’re just relying on your intuition that pleasure is intrinsically good, and calling that an axiom we have to accept. I don’t think we have to accept that at all — rejecting it does have some counterintuitive consequences, I won’t deny that, but so does accepting it. It’s not at all obvious (and Magnus’s post points to some reasons we might favor rejecting this “axiom”).
Would you say that saying suffering is bad is a similar intuition?
No, I know of no thought experiments or any arguments generally that make me doubt that suffering is bad. Do you?
Well, if you think suffering is bad and pleasure is not good, then the counterintuitive (to the vast majority of people) conclusion is that we should (painlessly if possible, but probably painfully if necessary) ensure everyone gets killed off so that we never have any suffering again.
It may well be true that we should ensure everyone gets killed off, but this is certainly an argument that many find compelling against the dual claim that suffering is bad and pleasure is not good.
That case does run counter to “suffering is intrinsically bad but happiness isn’t,” but it doesn’t run counter to “suffering is bad,” which is what your last comment asked about. I don’t see any compelling reasons to doubt that suffering is bad, but I do see some compelling reasons to doubt that happiness is good.
That’s just an intuition, no? (i.e. that everyone painlessly dying would be bad.) I don’t really understand why you want to call it an “axiom” that happiness is intrinsically good, as if this is stronger than an intuition, which seemed to be the point of your original comment.
See this post for why I don’t think the case you presented is decisive against the view I’m defending.
What is your compelling reason to doubt happiness is good? Is it thought experiments such as the ones Magnus has put forward? I think these argue that alleviating suffering is more pressing than creating happiness, but I don’t think these argue that creating happiness isn’t good.
I do happen to think suffering is bad, but here is a potentially reasonable counterargument—some people think that suffering is what makes life meaningful. For example, some think the idea of drugs being widespread, relieving everyone of all pain all the time, is monstrous. People’s children would get killed and the parents just wouldn’t feel any negative emotion—this seems a bit wrong...
You could try to use your Pareto improvement argument here, i.e., that it’s better if parents still have a preference for their child not to have been killed, but also don’t feel any sort of pain related to it. Firstly, I do think many people would want there to be some pain in this situation and that they would think of a lack of pain as disrespectful and grotesque. Apart from that, I’m slightly confused about one having a preference that the child wasn’t killed while not feeling any sort of hedonic pain about it... is this contradictory?
As I said I do think suffering is bad, but I’m yet to be convinced this is less of a leap of faith than saying happiness is good.
Say there is a perfectly content monk who isn’t suffering at all. Do you have a moral obligation to make them feel pleasure?
It would certainly be a good thing to do. And if I could do it costlessly I think I would see it as an obligation, although I’m slightly fuzzy on the concept of moral obligations in the first place.
In reality however there would be an opportunity cost. We’re generally more effective at alleviating suffering than creating pleasure, so we should generally focus on doing the former.
To modify the monk case, what if we could (costlessly; all else equal) make the solitary monk feel a notional 11 units of pleasure followed by 10 units of suffering?
Or, extreme pleasure of “+1001” followed by extreme suffering of “-1000”?
Cases like these make me doubt the assumption of happiness as an independent good. I know meditators who claim to have learned to generate pleasure at will in jhana states, who don’t buy the hedonic arithmetic, and who prefer the states of unexcited contentment over states of intense pleasure.
So I don’t want to impose, from the outside, assumptions about the hedonic arithmetic onto mind-moments who may not buy them from the inside.
Additionally, I feel no personal need for the concept of intrinsic positive value anymore, because all my perceptions of positive value seem perfectly explicable in terms of their indirect connections to subjective problems. (I used to use the concept, and it took me many years to translate it into relational terms in all the contexts where it pops up, but I seem to have now uprooted it so that it no longer pops to mind, or at least it stopped doing so over the past four years. In programming terms, one could say that uprooting the concept entailed refactoring a lot of dependencies on other concepts, but eventually the tab explosion started shrinking back down again, and it appeared perfectly possible to think without the concept. It would be interesting to hear whether this has simply “clicked” for anyone when reading analytical thought experiments, because for me it felt more like how I would imagine a crisis of faith to feel for a person who loses their faith in a <core concept>, including the possibly arduous cognitive task of learning to fill the void and seeing what roles the concept played.)
I’m not sure if “pleasure” is the right word. I certainly think that improving one’s mental state is always good, even if this starts at a point in which there is no negative experience at all.
This might not involve increasing “pleasure”. Instead it could be increasing the amount of “meaning” felt or “love” felt. If monks say they prefer contentment over intense pleasure then fine—I would say the contentment state is hedonically better in some way.
This is probably me defining “hedonically better” differently to you but it doesn’t really matter. The point is I think you can improve the wellbeing of someone who is experiencing no suffering and that this is objectively a desirable thing to do.
Relevant recent posts:
https://www.simonknutsson.com/undisturbedness-as-the-hedonic-ceiling/
https://centerforreducingsuffering.org/phenomenological-argument/
(I think these unpack a view I share, better than I have.)
Edit: For tranquilist and Epicurean takes, I also like Gloor (2017, sec. 2.1) and Sherman (2017, pp. 103–107), respectively.
I think one crux here is that Teo and I would say that calling an increase in the intensity of a happy experience “improving one’s mental state” is a substantive philosophical claim. The kind of view we’re defending does not say something like, “Improvements of one’s mental state are only good if they relieve suffering.” I would agree that that sounds kind of arbitrary.
The more defensible alternative is that replacing contentment (or absence of any experience) with increasingly intense happiness / meaning / love is not itself an improvement in mental state. And this follows from intuitions like “If a mind doesn’t experience a need for change (and won’t do so in the future), what is there to improve?”
Can you elaborate a bit on why the seemingly arbitrary view you quoted in your first paragraph wouldn’t follow from the view that you and Teo are defending? Are you saying that from your and Teo’s POVs, there’s a way to ‘improve a mental state’ that doesn’t amount to decreasing suffering (or preventing it)? The statement itself seems a bit odd, since ‘improvement’ seems to imply ‘goodness’, and the statement hypothetically considers situations where improvements may not be good... so I thought I would see if you could clarify.
Regarding the ‘defensible alternative’, it seems that one could defend a plausible view on which a state of contentment, moved to a state of increased bliss, is indeed an improvement, even though there wasn’t a need for change. Such an understanding seems plausible in a self-intimating way when one valence state transitions to the next, insofar as we concede that there are states of more or less pleasure outside of negatively valenced states. It seems that one could hold this all the while maintaining that such improvements are never capable of outweighing the mitigation of problematic, suffering states. (Note: using the term ‘improvement’ can easily lead to accidental equivocation between scenarios of mitigating suffering versus increasing pleasure, but the ethical discernment between the two seems manageable.)
No, that’s precisely what I’m denying. So, the reason I mentioned that “arbitrary” view was that I thought Jack might be conflating my/Teo’s view with one that (1) agrees that happiness intrinsically improves a mental state, but (2) denies that improving a mental state in this particular way is good (while improving a mental state via suffering-reduction is good).
It’s prima facie plausible that there’s an improvement, sure, but upon reflection I don’t think my experience that happiness has varying intensities implies that moving from contentment to more intense happiness is an improvement. Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I’m comparing to no one suffering from the lack of more intense happiness), there’s no “improvement” to the painting.
You could, yeah, but I think “improvement” has such a strong connotation to most people that something of intrinsic value has been added. So I’d worry that using that language would be confusing, especially to welfarist consequentialists who think (as seems really plausible to me) that you should do an act to the extent that it improves the state of the world.
Okay, thanks for clarifying! I think I was confused by that opening line, where you said your view does not hold that only a relief of suffering improves a mental state. In reality, you do think that is the case; you just don’t hold it in conjunction with the claim that happiness also intrinsically improves a mental state, correct?
>Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I’m comparing to no one suffering from the lack of more intense happiness), there’s no “improvement” to the painting.
With respect to this, I should have clarified that the state of contentment that becomes a more intense positive state was one of an existing, experiencing being, not a state of non-existence into which pleasure is then brought. Given that, would the painting analogy still hold, since in this thought experiment there is an experiencer who has some sort of improvement in their mental state, albeit not a categorical sort of improvement that is on par with the sort that relieves suffering? That is, it wasn’t a problem per se (no suffering) that they were being deprived of the more intense pleasure, but the move from lower pleasure to higher pleasure is still an improvement in some way (albeit perhaps a better word would be needed to distinguish the lexical importance between these sorts of *improvements*).
I think they do argue that creating happiness isn’t intrinsically good, because you can always construct a version of the Very Repugnant Conclusion (VRC) that applies to a view that says suffering is weighed some finite X times more than happiness, and I find those versions almost as repugnant. E.g. suppose that on classical utilitarianism we prefer to create 100 purely miserable lives plus some large number N of micro-pleasure lives over creating 10 purely blissful lives. On this new view, we’d prefer to create 100 purely miserable lives plus X*N micro-pleasure lives over the 10 purely blissful lives (rough numbers illustrating this are sketched below). Another variant you could try is a symmetric lexical view where only sufficiently blissful experiences are allowed to outweigh misery. But while some people find that dissolves the repugnance of the VRC, I can’t say the same.
Increasing the X, or introducing lexicalities, to try to escape the VRC just misses the point, I think. The problem is that (even super-awesome/profound) happiness is treated as intrinsically commensurable with miserable experiences, as if giving someone else happiness in itself solves the miserable person’s urgent problem. That’s just fundamentally opposed to what I find morally compelling.
(I like the monk example given in the other response to your question, anywho. I’ve written about why I find strong SFE compelling elsewhere, like here and here.)
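(A rough back-of-the-envelope version of the X*N point above. Every per-life number here is an assumption chosen purely for illustration, not something from the comments.)

```python
# Hypothetical per-life values: a purely miserable life counts -100,
# a purely blissful life +100, a micro-pleasure life +0.001.
misery, bliss, micro = -100.0, 100.0, 0.001
n_miserable, n_blissful = 100, 10

def n_micro_needed(suffering_weight):
    # Smallest number of micro-pleasure lives N such that creating
    # 100 miserable lives + N micro-pleasure lives beats creating
    # 10 blissful lives, when suffering is weighted `suffering_weight`
    # times more than happiness.
    deficit = suffering_weight * abs(misery) * n_miserable + bliss * n_blissful
    return deficit / micro

print(n_micro_needed(1))     # symmetric (classical) utilitarianism: ~1.1e7 lives
print(n_micro_needed(1000))  # suffering weighted 1000x: ~1.0e10 lives
# Raising the weight X only scales the required N by roughly X; the
# VRC-style comparison itself survives on any finite-weight view.
```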
Yeah, that is indeed my response; I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario. This view seems to clearly conflate intrinsic with instrumental value. “Disrespect” and “grotesqueness” are just not things that seem intrinsically important to me, at all.
Depends how you define a preference, I guess, but the point of the thought experiment is to suspend your disbelief about the flow-through effects here. Just imagine that literally nothing changes about the world other than that the suffering is relieved. This seems so obviously better than the default that I’m at a loss for a further response.
“I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario.”
I wasn’t expecting you to. I don’t have any sympathy for it either! I was just giving you an argument that I suspect many others would find compelling. Certainly if my sister died and I didn’t feel anything, my parents wouldn’t like that!
Maybe it’s not particularly relevant to you if an argument is considered compelling by others, but I wanted to raise it just in case. I certainly don’t expect to change your mind on this—nor do I want to as I also think suffering is bad! I’m just not sure suffering being bad is a smaller leap than saying happiness is good.
Here’s another way of saying my objection to your original comment: What makes “happiness is intrinsically good” more of an axiom than “sufficiently intense suffering is morally serious in a sense that happiness (of the sort that doesn’t relieve any suffering) isn’t, so the latter can’t compensate for the former”? I don’t see what answer you can give that doesn’t appeal to intuitions about cases.
https://forum.effectivealtruism.org/posts/GK7Qq4kww5D8ndckR/michaelstjules-s-shortform?commentId=LZNATg5BoBT3w5AYz
For all practical purposes suffering is dispreferred by beings who experience it, as you know, so I don’t find this to be a counterexample. When you say you don’t want someone to make you less sad about the problems in the world, it seems like a Pareto improvement would be to relieve your sadness without changing your motivation to solve those problems—if you agree, it seems you should agree the sadness itself is intrinsically bad.
This response is a bit weird to me because the linked post has two counter-examples and you only answered one, but I feel like the other still applies.
The other thought experiment mentioned in the piece is that of a cow separated from her calf, with the two bovines distressed by this. Michael says (and I’m sympathetic) that the moral action here is to fulfill the bovines’ preference to be together, not to remove their pain at separation without fulfilling that preference (e.g. by drugging the cows into bliss).
Your response about Pareto Improvements doesn’t seem to work here, or seems less intuitive to me at least. Removing their sadness at separation while leaving their desire to be together intact isn’t a clear Pareto improvement unless one already accepts that pain is what is bad. And it is precisely the imagining of a separated cow/calf duo drugged into happiness but wanting one another that makes me think maybe it isn’t the pain that matters.
I didn’t directly respond to the other one because the principle is exactly the same. I’m puzzled that you think otherwise.
I mean, in thought experiments like this all one can hope for is to probe intuitions that you either do or don’t have. It’s not question-begging on my part because my point is: Imagine that you can remove the cow’s suffering but leave everything else practically the same. (This, by definition, assesses the intrinsic value of relieving suffering.) How could that not be better? It’s a Pareto improvement because, contra the “drugged into happiness” image, the idea is not that you’ve relieved the suffering but thwarted the cow’s goal to be reunited with its child; the goals are exactly the same, but the suffering is gone, and it just seems pretty obvious to me that that’s a much better state of the world.
I think my above reply missed the mark here.
Sticking with the cow example, I agree with you that if we removed their pain at being separated while leaving the desire to be together intact, this seems like a Pareto improvement over not removing their pain.
A preferentist would insist here that the removal of pain is not what makes that situation better, but rather that pain is (probably) dispreferred by the cows, so removing it gives them something they want.
But the negative hedonist (pain is bad, pleasure is neutral) is stuck with saying that the “drugged into happiness” image is as good as the “cows happily reunited” image. A preferentist, by contrast, can (I think intuitively) assert that reuniting the cows is better than just removing their pain, because reunification fulfills (1) the cows’ desire to be free of pain and (2) their desire to be together.
I don’t have settled views on whether or not suffering is necessarily bad in itself.
That someone (or almost everyone) disprefers suffering doesn’t mean suffering is bad in itself. Even if people always disprefer less pleasure, it wouldn’t follow that the absence of pleasure is bad in itself. Even those with symmetric views wouldn’t say so; they’d say its absence is neutral and its presence is good and better. We wouldn’t say dispreferring suffering makes the absence of suffering an intrinsic good.
I’m sympathetic to a more general “relative-only” view according to which suffering is an evaluative impression against the state someone is in relative to an “empty” state or nonexistence, so a kind of self-undermining evaluation. Maybe this is close enough to intrinsic badness and can be treated like intrinsic badness, but it doesn’t seem to actually be intrinsic badness. I think Frick’s approach, Bader’s approach and Actualism, each applied to preferences that are “relative only” rather than whole lives, could still imply that worlds with less suffering are better and some lives with suffering are better not started, all else equal, while no lives are better started, all else equal.
This is compatible with the reason we suffer sometimes being because of mere relative evaluations between states of the world without being “against” the current state or things being worse than nothing.
It seems that a hedonist would need to say that removing my motivation is no harm to me personally, either (except for instrumental reasons), but that violates an interest of mine so seems wrong to me. This doesn’t necessarily count against suffering being bad in itself or respond to your proposed Pareto improvement, it could just count against only suffering mattering.
With respect to your last paragraph, someone who holds a person-affecting view might respond that you have things backwards (indeed, this is what Frick claims): welfare matters because moral patients matter, rather than the other way around, so you need to put the person first, and something something therefore person-affecting view! Then we could discuss what welfare means, and that could be more pleasure and less suffering, or something else.
That being said, this seems kind of confusing to me, too. Welfare matters because moral patients matter, but moral patients are, in my view, just those beings capable of welfare. So, welfare had to come first anyway, and we just added extra steps.
I suspect this can be fixed by dealing directly with interests themselves as the atoms that matter, rather than entire moral patients. E.g. preference satisfaction matters because preferences matter, and something something therefore preference-affecting view! I think such an account would deny that giving even existing people more pleasure is good in itself: they’d need to have an interest in more pleasure for it to make them better off. Maybe we always do have such an interest by our nature, though, and that’s something someone could claim, although I find that unintuitive.
Another response may just be that value is complex, and we shouldn’t give too much extra weight to simpler views just because they’re simpler. That can definitely go even further, e.g. welfare is not cardinally measurable or nothing matters. Also, I think only suffering (or only pleasure) mattering is actually in some sense a simpler view than both suffering and pleasure mattering, since with both, you need to explain why each matters and tradeoffs between them. Some claim that symmetric hedonism is not value monistic at all.
For what it’s worth, Magnus cites me, 2019 and Frick, 2020 further down.
My post and some other Actualist views support the procreation asymmetry without directly depending on any kind of asymmetry between goods and bads, harms and benefits, victims and beneficiaries, problems and opportunities, or any kind of claimed psychological/consciousness asymmetries; instead, they rely only on an asymmetry in treating actual-world people/interests vs non-actual-world people/interests. I didn’t really know what Actualism was at the time I wrote my post, and more standard accounts like Weak Actualism (see perhaps Hare, 2007, Roberts, 2011 or Spencer, 2021, the last of which responds to objections in the first two) or Spencer, 2021’s recent Stable Actualism may be better. Another relatively recent paper is Cohen, 2019. There are probably other Actualist accounts out there, too.
I think Frick, 2020 also supports the procreation asymmetry without depending directly on an asymmetry, although Bykvist and Campbell, 2021 dispute this. Frick claims we have conditional reasons of the following kind:
(In evaluative terms, which I prefer, we might instead write “it’s better that (if p, then q)”, but I’ll stick with Frick’s terminology here.)
Specifically in the case of procreation:
This gives us reason to prevent bad lives, but not reason to create good lives.
Bykvist and Campbell, 2021 criticize this and then steelman it into a contrastive reason (although to me the distinction seems kind of silly, and I think I normally think of reasons as contrastive, anyway):
And if we generalize:
Bykvist and Campbell, 2021 claim that this still fails to imply that we have no reason to create good lives. That’s correct, but, as far as I can tell, there’s no symmetric argument that gives us a reason to create good lives; I think you’d need to modify “if I do p, do q” in a way that’s no longer an implication, and so not a “conditional reason” at all. So, this is a counterexample to MacAskill’s claim. Then, to get the other side of the asymmetry, we could just claim that all reasons are of this general (contrastive) conditional kind, where p conditions on the existence of a moral patient or an interest and q refers to it/them.
This can also be used to defend antifrustrationism and negative preference utilitarianism in particular, since among allowable reasons (not necessarily all reasons would be of this form), we could let p be “allow some preference x to come to exist” and let q be “ensure that x is perfectly satisfied”, so that it’s better for a preference to not exist than go less than perfectly satisfied. If all reasons are of the conditional kind where p is like “allow preference x to come to exist” (or “some preference x exists” in evaluative terms), we have no inherent reason to ensure any preference exists at all.
Although Magnus doesn’t mention it, I think you’re aware of Bader’s article on the asymmetry, which also supports the asymmetry without depending on an asymmetry, instead using “structural consistency”.
Of course, I also just think that some asymmetries are directly intuitive, and that flipping them is not, as Magnus pointed out. The procreation asymmetry is one of my strongest intuitions. I don’t have the intuition that pleasure is good in itself, but I have the intuition that (involuntary) suffering is bad. I find antifrustrationism and asymmetric preference-affecting views intuitive, also partly because ignoring an individual’s own preferences to create and satisfy new ones in them seems pretty “perverse” to me; I discuss this a bit more here.
I think it does challenge the point but could have done so more clearly.
The post isn’t broadly discussing “preventing bad things and causing good things”, but more narrowly discussing preventing a person from existing or bringing someone into existence, who could have a good life or a bad life.
“Why should we not think that it’s good to bring into existence a flourishing life?”
Assuming flourishing means “net positive” and not “devoid of suffering”, for the individual with a flourishing life who we are considering bringing into existence:
The potential “presence of suffering” in their life, if we did bring them into existence, would be “bad and morally worth preventing”
while
The potential “absence of pleasure”, if we don’t bring them into existence, “is not bad and not a problem”.
This seems to be begging the question. Someone could flat out disagree, holding the position that it is a problem not to create wellbeing/pleasure when one can do so, just as it is a problem not to avoid suffering / pain when one can do so. It still doesn’t seem to me that you have given any independent justification for the claim I’ve quoted.
In Magnus’s post, Will MacAskill is quoted as making the claim that:
“If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second.”
Magnus presents the asymmetry as an example of a view that offers an argument for the first claim but not for the second claim.
I agree that someone can just say they disagree with the asymmetry, and many people do—I think of it as a terminal belief that doesn’t have “underlying” justification, similar to views like “suffering is bad”.
(Is there a proper philosophy term for what I’m calling a “terminal belief”?)
What is the reasoning that the asymmetry uses to argue for the first claim? This isn’t currently clear to me.
I suspect whatever the reasoning is that it can also be used to argue for the second claim.
See my comment here.
The fundamental disagreement here is about whether something can meaningfully be good without solving any preexisting problem. At least, it must be good in a much weaker sense than something that does solve a problem.
Right, though wouldn’t it be a distinct matter for one to differ on whether they agree with the evaluation (I do) that one of the situations lacks a preexisting problem? If one takes the absence of pleasure to be a preexisting problem, and perhaps even on the same moral plane as the preexisting problem of existing suffering, then the fundamental disagreement may not be sufficiently identified in this manner, right?
Hi—thanks for writing this! A few things regarding your references to WWOTF:
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
It’s true that I don’t discuss views on which some goods/bads are lexically more important than others; I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good. (A bit of philosophical pedantry partly explains why it’s in chapter 9, not 8: questions about happiness / suffering tradeoffs aren’t within the domain of population ethics, as they arise even in a fixed-population setting.)
In an earlier draft I talked at more length about relevant asymmetries (not just suffering vs happiness, but also objective goods vs objective bads, and risk-averse vs risk-seeking decision theories.) It got cut just because it was adding complexity to an already-complex chapter and didn’t change the bottom-line conclusion of that part of the discussion. The same is true for moral uncertainty—under reasonable uncertainty, you end up asymmetric on happiness vs suffering, objective goods vs objective bads, and you end up risk-averse. Again, the thrust of the relevant discussion happens in the section “The Case for Optimism”: “on a range of views in moral philosophy, we should weight one unit of pain more than one unit of pleasure… If this is correct, then in order to make the expected value of the future positive, the future not only needs to have more “goods” than “bads”; it needs to have considerably more goods than bads.”
Of course, there’s only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!
It really isn’t clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the “intuition of neutrality.” In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don’t mean to pick on you in particular!) devoted to those three views. And I’m not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.
I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that’s been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.
The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones I criticize in the post. The core claims defended in “Clumsy Gods” and “Why the Intuition of Neutrality Is Wrong” are, as far as I can tell, relative claims: it is better to bring Bob/Migraine-Free into existence than Alice/Migraine, because Bob/Migraine-Free would be better off. Someone who endorses the Asymmetry may agree with those relative claims (which are fairly easy to agree with) without giving up on the Asymmetry.
Specifically, one can agree that it’s better to bring Bob into existence than to bring Alice into existence while also maintaining that it would be better if Bob (or Migraine-Free) were not brought into existence in the first place. Only “The Intuition of Neutrality” appears to take up this latter question about whether it can be better to start a life than to not start a life (purely for its own sake), which is why I consider the arguments found there to be the main arguments against the Asymmetry.
It seems worth separating purely axiological issues from issues in decision theory that relate to tiny probabilities. Specifically, one might think that this thought experiment drags two distinct issues into play: questions/intuitions relating to value lexicality, and questions/intuitions relating to tiny probabilities and large numbers. I think it’s ideal to try to separate those matters, since each of them is already quite tricky on its own.
To make the focus on the axiological question clearer, we may “actualize” the thought experiment such that we’re talking about either preventing a lifetime of the most extreme unmitigated torture or creating a trillion, trillion, trillion, trillion lives of bliss.
The lexical view says that it is better to do the former. This seems reasonable to me. I do not think there is any need or ethical duty to create lives of bliss, let alone an ethical duty to create lives of bliss at the (opportunity) cost of failing to prevent a lifetime of extreme suffering. Likewise, I do not think there is anything about pleasure (or other purported goods) that render them an axiological counterpart to suffering. And I don’t think the numbers are all that relevant here, any more than thought experiments involving very large numbers of, say, art pieces would make me question my view that extreme suffering cannot be outweighed by many art pieces.
Regarding moral uncertainty: As noted in the final section above, there are many views that support granting a foremost priority to the prevention of extreme suffering and extremely bad lives. Consequently, even if one does not end up with a strictly lexical view at the theoretical level, one may still end up with an effectively lexical view at the practical level, in the sense that the reduction of extreme suffering might practically override everything else given its all-things-considered disvalue and expected prevalence.
But arguing for such an asymmetry still does not address questions about whether or how purported goods can morally outweigh extreme suffering or extremely bad lives.
That is understandable. But still, I think overly strong conclusions were drawn in the book based on the discussion that was provided. For instance, Chapter 9 ends with these words:
But again, no justification has been provided for the view that purported goods can outweigh severe bads, such as extreme suffering, extremely bad lives, or vast numbers of extremely bad lives. Nor do I think the book addresses the main points made in Anthony DiGiovanni’s post A longtermist critique of “The expected value of extinction risk reduction is positive”, which essentially makes a case against the final conclusion of Chapter 9.
You seem to be using a different definition of the Asymmetry than Magnus is, and I’m not sure it’s a much more common one. On Magnus’s definition (which is also used by e.g. Chappell; Holtug, Nils (2004), “Person-affecting Moralities”; and McMahan (1981), “Problems of Population Theory”), bringing into existence lives that have “positive wellbeing” is at best neutral. It could well be negative.
The kind of Asymmetry Magnus is defending here doesn’t imply the intuition of neutrality, and so isn’t vulnerable to your critiques like violating transitivity, or relying on a confused concept of necessarily existing people.
If bringing into existence lives that have positive wellbeing is at best neutral (and presumably strongly negative for lives with negative wellbeing) — why have children at all? Is it their instrumental value they bring in their lives that we’re after under this philosophy? (Sorry, I’m almost surely missing something very basic here — not a philosopher.)
I’m struggling to interpret this statement. What is the underlying sense in which pain and pleasure are measured in the same units and are thus ‘equal, even though the pain is morally weighted more highly’?
Knutsson states the problem well IMO [1]
Maybe you have some ideas and intuition into how to think about this?
Thanks, MSJ, for this reference.
One way of thinking about this would be in relation to self-reported life satisfaction.
Consider someone who rates their life satisfaction at 1/10, citing extreme hunger. Now suppose you give them a certain amount of food that brings them up to 2/10. You have essentially reduced suffering by one unit.
Now consider someone who rates their satisfaction at 10/10, believing that their life could not be any better. Then suppose you do something for them (e.g. you give them a wonderful present), and they realise that their life is even better than before, retrospectively judging that they have actually increased from 9/10 to 10/10. We might say that happiness has been increased by one unit. (I take this ‘retrospection’ approach to try to avoid the worry that I might also be ‘reducing suffering’ here, by implying there was no suffering at all to begin with—not sure if it really works, or if it’s actually necessary.)
If someone finds it more important to bring the first person from 1/10 to 2/10 than to bring the second person from 9/10 to 10/10, one might be weighting the removal of a unit of suffering as more important than the creation of a unit of happiness.
But how would I know that we were comparing the same ‘amount of change’ in these cases?
What makes going from 1/10 to 2/10 constitute “one unit” and going from 9/10 to 10/10 also “one unit”?
And if these are not the same ‘unit’ then how do I know that the person who finds the first movement more valuable ‘cares about suffering more’? Instead it might be that a 1-2 movement is just “a larger quantity” than a 9-10 movement.
In practice you would have to make an assumption that people generally report on the same scale. There is some evidence from happiness research that this is the case (I think) but I’m not sure where this has got to.
From your original question I thought you were essentially trying to understand, in theory, what weighting one unit of pain as greater than one unit of pleasure might mean. As per my example above, one could prioritise a one unit change on a self-reported scale if the change occurs at a lower position on the scale (assuming different respondents are using the same scale).
Another perspective is that one could consider two changes that are the same in “intensity”, where one involves alleviating suffering (giving some food to a starving person) and one involves making someone happier (giving someone a gift), and then prioritise giving the food to the starving person. For these two actions to be the same in intensity, you can’t be giving all that much food to the starving person, because it will generally be easy to alleviate a large amount of suffering with a ‘small’ amount of food, but relatively difficult to increase the happiness of someone who isn’t suffering much, even with an expensive gift.
Not sure if I’m answering your questions at all but still interesting to think through!
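(A toy illustration of the idea above that the same one-point change can count for more lower down the satisfaction scale. The weighting function, the “neutral point”, and all the numbers are assumptions for illustration only, not something from the comments.)

```python
def weighted_change(start, end, suffering_weight=2.0, neutral=5):
    # Toy prioritarian-style weighting: one-point changes below the assumed
    # 'neutral' point on a 0-10 satisfaction scale count `suffering_weight`
    # times as much as one-point changes above it.
    value = 0.0
    for level in range(start, end):
        value += suffering_weight if level < neutral else 1.0
    return value

print(weighted_change(1, 2))   # 1/10 -> 2/10: counts as 2.0
print(weighted_change(9, 10))  # 9/10 -> 10/10: counts as 1.0
```

On this kind of sketch, prioritising the 1/10-to-2/10 change over the 9/10-to-10/10 change doesn’t require the two changes to be different “sizes” on the underlying scale; the asymmetry is carried entirely by the weighting.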
Thank you for clarifying!
This is true for utility/social welfare functions that are additive even over uncertainty (and maybe some other classes), but not in general. See this thread of mine.
Is this related to lexical amplifications of nonlexical theories like CU under MEC? Or another approach to moral uncertainty? My impression from your co-authored book on moral uncertainty is that you endorse MEC with intertheoretic comparisons (I get the impression Ord endorses a parliamentary approach from his other work, but I don’t know about Bykvist).
Good post! I mostly agree with sections (2) and (4) and would echo other comments that various points made are under-discussed.
My “disagreement”—if you can call it that—is that I think the general case here can be made more compelling by using assumptions and arguments that are weaker/more widely shared/more likely to be true. Some points:
The uncertainty Will fails to discuss (in short, the Very Repugnant Conclusion) can be framed as fundamental moral uncertainty, but I think it’s better understood as the more prosaic, sorta-almost-empirical question “Would a self-interested rational agent with full knowledge and wisdom choose to experience every moment of sentience in a given world over a given span of time?”
I personally find this framing more compelling because it puts one in the position of answering something more along the lines of “would I live the life of a fish that dies by asphyxiation?” than “does some (spooky-seeming) force called ‘moral outweighing’ exist in the universe?”
Even a fully-committed total utilitarian who would maintain that all amounts of suffering are in principle outweighable can have this kind of quasi-empirical uncertainty of where the equilibrium moral balance lies
Moreover, utilitarians of all types would find it better to create less future suffering, all else equal, which is a point I don’t recall Will directly addressing.
Maybe this is relying too much on generalizing from my own intuitions/beliefs, but I’d also guess that objecting to the belief that it’s “good to make happy people” both weakens the argument as a whole and distracts from its most compelling points
You can agree with Will on all his explicit claims made in the book (as I think I do?) and still think he made a pretty major sin of omission by failing to discuss whether the creation of happiness can/does/will cause and justify the creation of suffering
Thanks for writing. You’re right that MacAskill doesn’t address these non-obvious points, though I want to push back a bit. Several of your arguments are arguments for the view that “intrinsically positive lives do not exist,” and more generally that intrinsically positive moments do not exist. Since we’re talking about repugnant conclusions, readers should note that this view has some repugnant conclusions of its own.
[Edit: I stated the following criticism too generally; it only applies when one makes an additional assumption: that experiences matter, while things that don’t affect anyone’s experiences don’t matter. As I argue in the below comment thread, that strong focus on experiences seems necessary for some of the original post’s main arguments to work.]
It implies that there wouldn’t be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn’t be destroying anything positive. It also implies that there was nothing good (in an absolute sense) in the best moment of every reader’s life—nothing actively good about laughing with friends, or watching a sunset, or hugging a loved one. To me, that’s deeply and obviously counter-intuitive. (And, as the survey you brought up shows, the large majority of people don’t hold these semi-nihilistic views.) Still, to caveat:
In practice, to their credit, people sympathetic with these views tend to appreciate that, all things considered, they have many reasons to be cooperative toward others and avoid violence.
None of the above criticisms apply to weak versions of the asymmetry. People can think that reducing suffering is somewhat more important than increasing happiness—rather than infinitely so—and then they avoid most of these criticisms. But they can’t ground these views on the premise that good experiences don’t exist.
Also, as long as we’re reviving old debates, readers may be interested in Toby Ord’s arguments against many of these views (and e.g. this response).
That’s not how many people with the views Magnus described would interpret their views.
For instance, let’s take my article on tranquilism, which Magnus cites. It says this in the introduction:
Further in the text, it contains the following passage:
And at the end in the summary:
I generally think EAs are too fond of single-minded conceptions of morality. I see ethics as being largely about people’s interests/goals. On that perspective, it would be preposterous to kill people against their will to prevent future suffering.
That said, people’s “goals” are often under-defined, and population ethics as a whole is under-defined (it isn’t fixed how many people there will be or what types of goals new people will have), so there’s also room for an experience-focused “axiology” like tranquilism to deal with cases that are under-defined according to goal-focused morality.
I think there’s a bit of confusion around the conclusion “there’s nothing with intrinsic value.” You seem to be assuming that the people who come to this conclusion completely share your framework for how to think about population ethics and then conclude that where you see “intrinsic value,” there’s nothing in its place. So you interpret them as thinking that killing people is okay (edit: or would be okay absent considerations around cooperation or perhaps moral uncertainty). However, when I argue that “nothing has intrinsic value,” I mostly mean “this way of thinking is a bit confused and we should think about population ethics in an entirely different way.” (Specifically, things can be conditionally valuable if they’re grounded in people’s interests/goals, but they aren’t “intrinsically valuable” in the sense that it’s morally pressing to bring them about regardless of circumstances.)
Thanks for the thoughtful reply. You’re right, you can avoid the implications I mentioned by adopting a preference/goal-focused framework. (I’ve edited my original comment to flag this; thanks for helping me recognize it.) That does resolve some problems, but I think it also breaks most of the original post’s arguments, since they weren’t made in (and don’t easily fit into) a preference-focused framework. For example:
The post argues that making happy people isn’t good and making miserable people is bad, because creating happiness isn’t good and creating suffering is bad. But it’s unclear how this argument can be translated into a preference-focused framework.
Could it be that “satisfying preferences isn’t good, and frustrating preferences is bad”? That doesn’t make sense to me; it’s not clear to me there’s a meaningful distinction between satisfying a preference and keeping it from being frustrated.
Could it be that “satisfying positive preferences isn’t good, and satisfying negative preferences is good?” But that seems pretty arbitrary, since whether we call some preference positive or negative seems pretty arbitrary (e.g. do I have a positive preference to eat or a negative preference to not be hungry? Is there a meaningful difference?).
The second section of the original post emphasizes extreme suffering and how it might not be outweighable. But what does this mean in a preference-focused context? Extreme preference frustration? I suspect, for many, that doesn’t have the intuitive horribleness that extreme suffering does.
The third section of the post focuses on surveys that ask questions about happiness and suffering, so we can’t easily generalize these results to a preference-focused framework.
(I also agree—as I tried to note in my original comment’s first bullet point—that pluralistic or “all-things-considered” views avoid the implications I mentioned. But I think ethical views should be partly judged based on the implications they have on their own. The original post also seems to assume this, since it highlights the implications total utilitarianism has on its own rather than as a part of some broader pluralistic framework.)
My impression of the OP’s primary point was that asymmetric views are under-discussed. Many asymmetric views are preference-based and this is mentioned in the OP (e.g., the link to Anti-frustrationism or mention of Benatar).
Of the experience-based asymmetric views discussed in the OP, my posts on tranquilism and suffering-focused ethics mention value pluralism and the idea that things other than experiences (i.e., preferences mostly) could also be valuable. Given these explicit mentions it seems false to claim that “these views don’t easily fit into a preference-focused framework.”
Probably similarly, the OP links to posts by Teo Ajantaival which I’ve only skimmed but there’s a lengthy and nuanced-seeming discussion on why minimalist axiologies, properly construed, don’t have the implications you ascribed to them.
The NU FAQ is a bit more single-minded in its style/approach, but on the question “Does negative utilitarianism solve ethics” it says “ethics is nothing that can be ‘solved.’” This at least tones down the fanaticism a bit and opens up options to incorporate other principles or other perspectives. (Also, it contains an entire section on NIPU – negative idealized preference utilitarianism. So, that may count as another preference-based view alluded in the OP, since the NU FAQ doesn’t say whether it finds N(H)U or NIPU “more convincing.”)
I’m not sure why you think the argument would have to be translated into a preference-focused framework. In my previous comment I wanted to say the following: (1) The OP mentions that asymmetric positions are underappreciated and cites some examples, including Anti-Frustrationism, which is (already) a preference-based view.
(2) While the OP does discuss experience-focused views that say nothing is of intrinsic value, those views are compatible with a pluralistic conception of “ethics/morality” where preferences could matter too. Therefore, killing people against their will to reduce suffering isn’t a clear implication of the views.
Neither (1) nor (2) requires translating a specific argument from experiences to preferences. (That said, I think it’s actually easier to argue for an asymmetry in preference contexts. The notion that acquiring a new preference and then fulfilling it is a good in itself seems counterintuitive. Relatedly, the tranquilist conception of suffering is more like a momentary preference than an ‘experience’, and this shift IMO made it easier to justify the asymmetry.)
Why do you want to pack the argument into the framing “What is good and what is bad?” I feel like that’s an artificially limited approach to population ethics, this approach of talking about what’s good or bad. When something is good, it means that we have to create as much of it as possible? That’s a weird framework! At the very least, I want to emphasize that this is far from the only way to think about what matters.
In my post Dismantling Hedonism-inspired Moral Realism, I wrote the following:
Pleasure’s “goodness” is under-defined
that, all else equal, it would be a mistake not to value all pleasures
that no mental states without pleasure are in themselves desirable
that, all else equal, more pleasure is always better than less pleasure
In the quoted passages above, I argued that the way hedonists think of “pleasure is good” smuggles in unwarranted connotations. Similarly and more generally, I think the concept “x is good,” the way you and others use it for framing discussions on population ethics, bakes in an optimizing mindset around “good things ought to be promoted.” This should be labelled as an assumption we can question, rather than as the default for how to hold any discussion on population ethics. It really isn’t the only way to do moral philosophy. (In addition, I personally find it counterintuitive.)
(I make similar points in my recent post on a framework proposal for population ethics, which I’ve linked to in previous comments here.)
Okay, that helps me understand where you’re coming from. I feel like “ethical views should be partly judged based on the implications they have on their own” is downstream of the question of pluralism vs. single-minded theory. In other words, when you evaluate a particular view, it already has to be clear what scope it has. Are we evaluating the view as “the solution to everything in ethics,” or are we evaluating it as “a theory about the value of experiences that doesn’t necessarily say that experiences are all that matters”? If the view is presented as the latter (which, again, is explicitly the case for at least two articles the OP cited), then that’s what it should be evaluated as. Views should be evaluated on exactly the scope that they aspire to have.
Overall, I get the impression that you approach population ethics with an artificially narrow lens about what sort of features views “should” have and this seems to lead to a bunch of interrelated misunderstandings about how some others think about their views. I think this applies to probably >50% of the views the OP discussed rather than just edge cases. That said, your criticisms apply to some particular proponents of suffering-focused ethics and some texts.
I think this misunderstands the point I was making. I meant to highlight how, if you’re adopting a pluralistic view, then to defend a strong population asymmetry (the view emphasized in the post’s title), you need reasons why none of the components of your pluralistic view value making happy people.* This gets harder the more pluralistic you are, especially if you can’t easily generalize hedonic arguments to other values. As you suggest, you can get the needed reasons by introducing additional assumptions/frameworks, like rejecting the principle that it’s better for there to be more good things. But I wouldn’t call that an “easy fit”; that’s substantial additional argument, sometimes involving arguing against views that many readers of this forum find axiomatically appealing (like that it’s better for there to be more good things).
(* Technically you don’t need reasons why none of the views consider the making of happy people valuable, just reasons why overall they don’t. Still, I’d guess those two claims are roughly equivalent, since I’m not aware of any prominent views which hold the creation of purely happy people to be actively bad.)
Besides that, I think at this point we’re largely in agreement on the main points we’ve been discussing?
I’ve mainly meant to argue that some of the ethical frameworks that the original post draws on and emphasizes, in arguing for a population asymmetry, have implications that many find very counterintuitive. You seem to agree.
If I’ve understood, you’ve mainly been arguing that there are many other views (including some that the original post draws on) which support a population asymmetry while avoiding certain counterintuitive implications. I agree.
Your most recent comment seems to frame several arguments for this point as arguments against the first bullet point above, but I don’t think they’re actually arguments against the above, since the views you’re defending aren’t the ones my most-discussed criticism applies to (though that does limit the applicability of the criticism).
Thanks for elaborating! I agree I misunderstood your point here.
(I think preference-based views fit neatly into the asymmetry. For instance, Peter Singer initially weakly defended an asymmetric view in Practical Ethics, as arguably the most popular exponent of preference utilitarianism at the time. He only changed his view on population ethics once he became a hedonist. I don’t think I’m even aware of a text that explicitly defends preference-based totalism. By contrast, there are several texts defending asymmetric preference-based views: Benatar, Fehige, Frick, younger version of Singer.)
Or that “(intrinsically) good things” don’t have to be a fixed component in our “ontology” (in how we conceptualize the philosophical option space). Or, relatedly, that the formula “maximize goods minus bads” isn’t the only way to approach (population) ethics. Not because it’s conceptually obvious that specific states of the world aren’t worthy of taking serious effort (and even risks, if necessary) to bring about. Instead, because it’s questionable to assume that “good states” are intrinsically good, that we should bring them about regardless of circumstances, independently of people’s interests/goals.
I agree that we’re mainly in agreement. To summarize the thread, I think we’ve kept discussing because we both felt like the other party was presenting a slightly unfair summary of how many views a specific criticism applies or doesn’t apply to (or applies “easily” vs. “applies only with some additional, non-obvious assumptions”).
I still feel a bit like that now, so I want to flag that out of all the citations from the OP, the NU FAQ is really the only one where it’s straightforward to say that one of the two views within the text – NHU but not NIPU – implies that it would (on some level, before other caveats) be good to kill people against their will (as you claimed in your original comment).
From further discussion, I then gathered that you probably meant that specific arguments from the OP could straightforwardly imply that it’s good to kill people. I see the connection there. Still, two points I tried to make that speak against this interpretation:
(1) People who buy into these arguments mostly don’t think their views imply killing people. (2) To judge what an argument “in isolation” implies, we need some framework for (population) ethics. The framework that totalists in EA rely on is question-begging and often not shared by proponents of the asymmetry.
Fair points!
Here I’m moving on from the original topic, but if you’re interested in following this tangent—I’m not quite getting how preference-based views (specifically, person-affecting preference utilitarianism) maintain the asymmetry while avoiding (a slightly/somewhat weaker version of) “killing happy people is good.”
Under “pure” person-affecting preference utilitarianism (ignoring broader pluralistic views of which this view is just one component, and also ignoring instrumental justifications), clearly one reason why it’s bad to kill people is that this would frustrate some of their preferences. Under this view, is another (pro tanto) reason why it’s bad to kill (not-entirely-satisfied) people that their satisfaction/fulfillment is worth preserving (i.e. is good in a way that outweighs associated frustration)?
My intuition is that one answer to the above question breaks the asymmetry, while the other revives some very counterintuitive implications.
If we answer “Yes,” then, through that answer, we’ve accepted a concept of “actively good things” into our ethics, rejecting the view that ethics is just about fixing states of affairs that are actively problematic. Now we’re back in (or much closer to?) a framework of “maximize goods minus bads” / “there are intrinsically good things,” which seems to (severely) undermine the asymmetry.
If we answer “No,” on the grounds that fulfillment can’t outweigh frustration, this would seem to imply that one should kill people, whenever their being killed would frustrate them less than their continued living. Problematically, that seems like it would probably apply to many people, including many pretty happy people.
After all, suppose someone is fairly happy (though not entirely, constantly fulfilled), is quite myopic, and only has a moderate intrinsic preference against being killed. Then, the preference utilitarianism we’re considering seems to endorse killing them (since killing them would “only” frustrate their preferences for a short while, while continued living would leave them with decades of frustration, amid their general happiness).
There seem to be additional bizarre implications, like “if someone suddenly gets an unrealizable preference, even if they mistakenly think it’s being satisfied and are happy about that, this gives one stronger reasons to kill them.” (Since killing them means the preference won’t go unsatisfied as long.)
(I’m assuming that frustration matters (roughly) in proportion to its duration, since e.g. long-lasting suffering seems especially bad.)
(Of course, hedonic utilitarianism also endorses some non-instrumental killing, but only under what seem to be much more restrictive conditions—never killing happy people.)
I would answer “No.”
The preference against being killed is as strong as the happy person wants it to be. If they have a strong preference against being killed, then the preference frustration from being killed would be a lot worse than the preference frustration from an unhappy decade or two – it depends on how the person herself would want to make these choices.
I haven’t worked this out as a formal theory but here are some thoughts on how I’d think about “preferences.”
(The post I linked to primarily focuses on cases where people have well-specified preferences/goals. Many people will have under-defined preferences and preference utilitarians would also want to have a way to deal with these cases. One way to deal with under-defined preferences could be “fill in the gaps with what’s good on our experience-focused account of what matters.”)
This is not true. The view that killing is bad and morally wrong can be, and has been, grounded in many ways besides reference to positive value.[1]
First, there are preference-based views according to which it would be bad and wrong to thwart preferences against being killed, even as the creation and satisfaction of preferences does not create positive value (cf. Singer, 1980; Fehige, 1998). Such views could imply that killing and extinction would overall be bad.
Second, there are views according to which death itself is bad and a harm, independent of — or in addition to — preferences against it (cf. Benatar, 2006, pp. 211-221).
Third, there are views (e.g. ideal utilitarianism) that hold that certain acts such as violence and killing, or even intentions to kill and harm (cf. Hurka, 2001; Knutsson, 2022), are themselves disvaluable and make the world worse.
Fourth, there are nonconsequentialist views according to which we have moral duties not to harm or kill, and such duties may be combined with a wide range of axiologies, including those that deny positive intrinsic value. (“For deontologists, a killing is a wrong under most circumstances, and its wrongness does not depend on its consequences or its effects on overall welfare.” Sunstein & Vermeule, 2005.) Such duties can, yet need not, rest on a framework of moral rights.
As for experientialist minimalist views in particular (i.e. views that say that the reduction of experienced bads is all that matters), I would highly recommend reading Teo Ajantaival’s essay Peacefulness, nonviolence, and experientialist minimalism. It provides an elaborate discussion of cessation/non-creation implications from the perspective of that specific class of minimalist views.
Teo’s post also makes the important point that offsetting consequentialist views (e.g. classical utilitarianism) arguably have worse theoretical cessation implications than do minimalist experientialist views (see also the footnote below). Last but not least, the post highlights the importance of distinguishing purely hypothetical questions from practical questions, and highlights the strong reasons to not only pursue a cooperative approach, but also (“as far as is possible and practicable”) a nonviolent and nonaggressive approach.
I would strongly resist that characterization. For instance, a Buddhist axiology focused on the alleviation of suffering and unmet needs on behalf of all sentient beings is, to my mind at least, the Kryptonite opposite of nihilism. Its upshot, in essence, is that it recommends that we pursue a deeply meaningful and compassionate purpose, aimed at alleviating the burdens of the world. Indeed, I find this not only positively anti-nihilistic but also supremely beautiful.
(Perhaps also see this post, especially the final section on meaning and motivation.)
I have recently written a point-by-point reply to Ord’s essay.
And, FWIW, I think reference to positive value is not a promising way to ground the view that killing is wrong. As many have noted, moral views that ground the wrongness of killing purely in, say, the loss of pleasurable experiences tend to be vulnerable to elimination arguments, which say that we should, at least in theory, kill people if we can replace them with happier beings.
Thus, to borrow from your comment (in bold), one could likewise make the following claim about classical utilitarianism:
“It implies that there wouldn’t be anything wrong with immediately killing everyone reading this, their families, and everyone else, if we could in turn create isolated matrix lives that experience much more pleasure. Indeed, unlike suffering-focused views, classical utilitarianism would allow each of these killings to involve vast amounts of unrelenting torture, provided that ‘sufficiently many’ happy matrix lives are created in turn.”
I take this to be a worse implication. Of course, a classical utilitarian would be quick to highlight many nuances and caveats here, and not least to highlight the hypothetical nature of this scenario. But such points will generally also apply in the case of experientialist minimalist views.
Thanks for the thoughtful reply; I’ve replied to many of these points here.
On a few other ends:
I agree that strong negative utilitarian views can be highly purposeful and compassionate. By “semi-nihilistic” I was referring to how some of these views also devalue much (by some counts, half) of what others value. [Edit: Admittedly, many pluralists could say the same to pure classical utilitarians.]
I agree classical utilitarianism also has bullets to bite (though many of these look like they’re appealing to our intuitions in scenarios where we should expect to have bad intuitions, due to scope insensitivity).
edit: I wrote this comment before I refreshed the page and I now see that these points have been raised!
Thanks for flagging that all ethical views have bullets to bite and for pointing at previous discussion of asymmetrical views!
However, I’m not really following your argument.
This doesn’t necessarily follow, as Magnus explicitly notes that “many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing a new life into existence.” So given that everyone reading this already exists, there is in fact potential positive value in continuing our existences.
However, I may have missed some stronger views that Magnus mentions that would lead to this implication. The closest I can find is when Magnus writes, some “views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do.” As I understand that, though, this means that there can be positive value in lives, specifically lives that are interacting with others?
I wouldn’t be surprised if I’d just missed the relevant view that you are describing here, so I’d appreciate it if you could point to the specific quotes you were thinking of.
Finally, you are implicitly assuming hedonism + consequentialism — so if it turned out that happiness had no intrinsic value, there would be no reason to continue life. But you could hold a suffering-focused view that cares about other values (e.g. preference satisfaction), or a form of non-consequentialism that sees intrinsic value in life beyond happiness. (Thanks to Sean Richardson for making this point to me!)
Thanks for the thoughtful reply; I’ve replied to many of these points here.
In short, I think you’re right that Magnus doesn’t explicitly assume consequentialism or hedonism. I understood him to be implicitly assuming these things because of the post’s focus on creating happiness and suffering, as well as the apparent prevalence of these assumptions in the suffering-focused ethics community (e.g. the fact that it’s called “suffering-focused ethics” rather than “frustration-focused ethics”). But I should have more explicitly recognized those assumptions and how my arguments are limited to them.
I understand that you feel the asymmetry is true & important, but despite your arguments to the contrary, it still seems like a pretty niche position, and as such it seems ok not to have addressed it in a popular book.
Edit: Nope, a quick poll reveals this isn’t the case, see this comment.
The Procreative Asymmetry is very widely held, and much discussed, by philosophers who work on population ethics (and seemingly very common in the general population). If anything, it’s the default view, rather than a niche position (except among EA philosophers). If you do a quick search for it on philpapers.org there’s quite a lot there.
You might think the Asymmetry is deeply mistaken, but describing it as a ‘niche position’ is much like calling non-consequentialism a ‘niche position’.
The Asymmetry is certainly widely discussed by academic philosophers, as shown by e.g. the philpapers search you link to. I also agree that it seems off to characterize it as a “niche view”.
I’m not sure, however, whether it is widely endorsed or even widely defended. Are you aware of any surveys or other kinds of evidence that would speak to that more directly than the fact that there are lot of papers on the subject (which I think primarily shows that it’s an attractive topic to write about by the standards of academic philosophy)?
I’d be pretty interested in understanding the actual distribution of views among professional philosophers, with the caveat that I don’t think this is necessarily that much evidence for what view on population ethics should ultimately guide our actions. The caveat is roughly that I think the incentives of academic philosophy don’t strongly favor beliefs it would be overall good to act on, as opposed to views one can publish well about (of course, there are things pushing in the other direction as well, e.g. these are people who’ve thought about the topic a lot and use criteria for criticizing and refining views that are more widely endorsed, so it is certainly some evidence, hence my interest).
FWIW my own impression is closer to:
The Asymmetry is widely held to be an intuitive desideratum for theories of population ethics.
As usual (cf. the founding impetus of ‘experimental philosophy’), philosophers don’t usually check whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.
As usual, there are also at least some philosophers trying to ‘explain away’ the intuition (e.g. in this case Chappell 2017).
However, it turns out that it is hard to find a theory of population ethics that rationalizes the Asymmetry without having other problems. My sense is that this assessment – in part due to prominent impossibility theorems – is widely shared, and that there is likely no single widely held specific view that implies the Asymmetry.
This is basically the kind of situation that tends to spawn an ‘industry’ in academic philosophy, in which people come up with increasingly complex views that avoid known problems with previous views, other people point out new problems, and so on. And this is precisely what happened.
Overall, it is pretty hard to tell from this how many philosophers ‘actually believe’ the Asymmetry, in part because many participants in the conversation may not think of themselves as having any settled beliefs on the matter and in part because the whole language game seems to often involve “beliefs” that are at best pretty compartmentalized (e.g. don’t explain an agent’s actions in the world at large) and at worst not central examples of belief at all (perhaps more similar to how an actor relates to the beliefs of a character while enacting a play).
I think in many ways, the Asymmetry is like the view that there is some kind of principled difference between ideas and matter or that humans have free will of some sort – a perhaps widely held intuition, and certainly a fertile ground for long debates between philosophers, from which, however, it is hard to draw any clear conclusion if you are an agent who (unlike the debating philosophers) faces a high-stakes, real-world action depending on the matter. (It’s also different in some ways, e.g. it seems easier to agree on a precise statement of the Asymmetry than for some of these other issues.)
Curious how well this impression matches yours? I could imagine that the impression one gets (like me) primarily from reading the literature may be somewhat different from e.g. the vibe at conferences.
I agree with the ‘spawned an industry’ point and how that makes it difficult to assess how widespread various views really are.
Magnus in the OP discusses the paper you link to in the quoted passage and points out that it also contains findings we can interpret in support of a (weak) asymmetry of some kind. Also, David (the David who’s a co-author of the paper) told me recently that he thinks these types of surveys are not worth updating on by much [edit: but “casts some doubt on” is still accurate if we previously believed people would have clear answers that favor the asymmetry] because the subjects often interpret things in all kinds of ways or don’t seem to have consistent views across multiple answers. (The publication itself mentions in the “Supplementary Materials” that framing effects play a huge role.)
Thank you, that’s interesting and I hadn’t seen this.
(I now wrote a comment elaborating on some of these inconsistencies here.)
This impression strikes me as basically spot on. It would have been more accurate for me to say it’s taken to be a “widely held to be an intuitive desideratum for theories of population ethics”. It does have its defenders, though, e.g. Frick, Roberts, Bader. I agree that there does not seem to be any theory that rationalises this intuition without having other problems (but this is merely a specific instance of the general case that there seems to be no theory of population ethics that retains all our intuitions—hence Arrhenius’ famous impossibility result).
I’m not aware of any surveys of philosophers on their views on population ethics. AFAICT, the number of professional philosophers who are experts in population ethics—depending on how one wants to define those terms—could probably fit into one lecture room.
So consider the wording in the post:
If we do a survey of 100 Americans on Positly, with that exact wording, what percentage of randomly chosen people do you think would agree? I happen to respect Positly, but I am open to other survey methodologies.
I was intuitively thinking 5% tops, but the fact that you disagree strongly takes me aback a little bit.
Note that I think you were mostly thinking about philosophers, whereas I was mostly thinking about the general population.
I’m surprised you’d have such a low threshold—I would have thought noise, misreading the question, trolling, misclicks etc. alone would push above that level.
You can imagine survey designs which would filter trolls &c, but you’re right that my estimate should have been slightly higher based on that.
It might also be worth distinguishing stronger and weaker asymmetries in population ethics. Caviola et al.’s main study indicates that laypeople on average endorse at least a weak axiological asymmetry (which becomes increasingly strong as the populations under consideration become larger), and the pilot study suggests that people in certain situations (e.g. when considering foreign worlds) tend to endorse a rather strong one, cf. the 100-to-1 ratio.
Makes sense.
Wow, I’d have said 30-65% for my 50% confidence interval, and <5% is only about 5-10% of my probability mass. But maybe we’re envisioning this survey very differently.
Did a test run with 58 participants (I got two attempted repeats):
So you were right, and I’m super surprised here.
There is a paper by Lucius Caviola et al. of relevance:
The study design is quite different from Nuno’s, though. No doubt the study design matters.
In 2a, it looks like they didn’t explicitly get subjects to try to control for impacts on other people in their question (unlike Nuno), and (I’m not sure if this matters) they assumed the extra person would be added to a world of a million people with neutral lives. They just asked, for each of adding a neutral life, adding a bad life, and adding a good life:
2b was pretty similar, but used either an empty world or a world of a billion people with neutral lives.
2b involves an empty world—where there can’t be an effect on other people—and replicates 2a afaict.
Fair, my mistake.
I wonder if the reason for adding the happy person to the empty world is not welfarist, though, e.g. maybe people really dislike empty worlds, value life in itself, or think empty worlds lack beauty or something. EDIT: Indeed, it seems some people preferred adding an unhappy life over not adding one, basically no one preferred not to add a happy life, and people tended to prefer adding a neutral life over not doing so, based on figure 5 (an answer of 4 means “equally good”, above means better and below means worse). Maybe another explanation compatible with welfarist symmetry is that if there’s at least one life, good or bad, they expect good lives eventually, and for them to outweigh the bad.
Also, does the question actually answer whether anyone in particular holds the asymmetry, or are they just averaging responses across people? You could have some people who actually give greater weight to adding a happy life to an empty world than adding a miserable life to an empty world (which seems to be the case, based on Figure 5), along with people holding the standard asymmetry or weaker versions, and they could roughly cancel out in aggregate to support symmetry.
Words cannot express how much I appreciate your presence Nuno.
Sorry for being off-topic but I just can’t help myself. This comment is such a perfect example of the attitude that made me fall in with this community.
It is “very widely held” by philosophers only in the sense that it is a pre-theoretic intuition that many people, including philosophers, share. It is not “very widely held” by philosophers on reflection.
The intuition seems to be almost universally held. I agree many philosophers (and others) think that this intuition must, on reflection, be mistaken. But many philosophers, even after reflection, still think the procreative asymmetry is correct. I’m not sure how interesting it would be to argue about the appropriate meaning of the phrase “very widely held”. Based on my (perhaps atypical) experience, I’d guess that if you polled those who had taken a class on population ethics, about 10% would agree with the statement “the procreative asymmetry is a niche position”.
Which version of the intuition? If you just mean ‘there is greater value in preventing the creation of a life with X net utils of suffering than in creating a life with X net utils of pleasure’, then maybe. But people often claim that ‘adding net-happy people is neutral, whilst adding net-suffering people is bad’ is intuitive, and there was a fairly recent paper claiming to find that this wasn’t what ordinary people thought when surveyed: https://www.iza.org/publications/dp/12537/the-asymmetry-of-population-ethics-experimental-social-choice-and-dual-process-moral-reasoning
I haven’t actually read the paper to check if it’s any good though...
I upvoted this comment because I think there’s something to it.
That said, see the comment I made elsewhere in this thread about the existence of selection effects. The asymmetry is hard to justify for believers in an objective axiology, but philosophers who don’t believe in an objective axiology likely won’t write paper after paper on population ethics.
Another selection effect is that consequentialists are morally motivated to spread their views, which could amplify consensus effects (even if it applies to consequentialists on both sides of the split, one group being larger and better positioned to start with can amplify the proportions after a growth phase). For instance, before the EA-driven wave of population ethics papers, presumably the field would have been split more evenly?
Of course, if EA were to come out largely against any sort of population-ethical asymmetry, that’s itself evidence for (a lack of) convincingness of the position. (At the same time, a lot of EAs take moral realism seriously* and I don’t think they’re right – I’d be curious what a poll of anti-realist EAs would tell us about population-ethical asymmetries of various kinds and various strengths.)
*I should mention that this includes Magnus, author of the OP. I probably don’t agree with his specific arguments for there being an asymmetry, but I do agree with the claim that the topic is underexplored/underappreciated.
What exactly do you mean by “have an objective axiology” and why do you think it makes it (distinctively) hard to defend asymmetry? (I have an eccentric philosophical view that the word “objective” nearly always causes more trouble than it’s worth and should be tabooed.)
The short answer:
Thinking in terms of “something has intrinsic value” privileges particular answers. For instance, in this comment today, MichaelPlant asked Magnus the following:
The comment presupposes that there’s “something that is bad” and “something that is good” (in a sense independent of particular people’s judgments – this is what I meant by “objective”). If we grant this framing, any arguments for why “create what’s good” is less important than “don’t create what’s bad” will seem ad hoc!
Instead, for people interested in exploring person-affecting intuitions (and possibly defending them), I recommend taking a step back to investigate what we mean when we say things like “what’s good” or “something has intrinsic value.” I think things are good when they’re connected to the interests/goals of people/beings, but not in some absolute sense that goes beyond it. In other words, I only understand the notion of (something like) “conditional value,” but I don’t understand “intrinsic value.”
The longer answer:
Here’s a related intuition:
There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”
In my post, “Population Ethics Without [an Objective] Axiology,” I defended a specific framework for thinking about population ethics. From the post:
If there were an objective axiology, I might be making a mistake in how I plan to live a fulfilled, self-oriented life. Namely, if the way I chose to live my life doesn’t give sufficient weight to things that are intrinsically good according to the objective axiology, then I’m making some kind of mistake. I think it’s occasionally possible for people to make “mistakes” about their goals/values if they’re insufficiently aware of alternatives and would change their minds if they knew more, etc. However, I don’t think it’s possible for truly-well-informed reasoners to be wrong about what they think they deeply care about, and I don’t think “becoming well-informed” leads to convergence of life goals among people/reasoners.
I’d say that the main force behind arguments against person-affecting views in population ethics is usually something like the following:
“We want to figure out what’s best for morally relevant others. Well-being differences in morally relevant others should always matter – if they don’t matter on someone’s account, then this particular account couldn’t be concerned with what’s best for morally relevant others.”
As you know, person-affecting views tend to come out in such a way that they say things like “it’s neutral to create the perfect life and (equally) neutral to create a merely quite good life.” (Or they may say that whether to create a specific life depends on other options we have available, thereby violating the axiom of independence of irrelevant alternatives.)
These features of person-affecting views show that well-being differences don’t always matter on those views. Some people will interpret this as “person-affecting views are incompatible with the goal of ethics – figuring out what’s best for morally relevant others.”
However, all of this is begging the question. Who says that the same ethical rules should govern existing (and sure-to-exist) people/beings as well as possible people/beings? If there’s an objective axiology, it’s implicit that the same rules would apply (why wouldn’t they?). However, without an objective axiology, all we’re left with is the following:
Ethics is about interests/goals.
Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone’s interests/goals.
The rule “focus on interests/goals” has comparatively clear implications in fixed-population contexts. The minimal morality of “don’t be a jerk” means we shouldn’t violate others’ interests/goals (and perhaps should even help them where it’s easy and plays to our comparative advantage). The ambitious morality of “do the most moral/altruistic thing” has a lot of overlap with something like preference utilitarianism. (Though there are instances where people’s life goals are under-defined, in which case people with different takes on “do the most moral/altruistic thing” may wish to fill in the gaps according to subjectivist “axiologies” that they endorse.)
On creating new people/beings, “focus on interests/goals” no longer gives unambiguous results: (1) the number of interests/goals isn’t fixed, and (2) the types of interests/goals aren’t fixed.
This leaves population ethics under-defined with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls.
So, without an objective axiology, there are these two separate perspectives. We can view person-affecting views as making the following statement:
“‘Doing the most moral/altruistic thing’ isn’t about creating new people with new interests/goals. Instead, it’s about benefitting existing (or sure-to-exist) people/beings according to their interests/goals.”
In other words, person-affecting views concentrate their caring budget on one of two possible perspectives (instead of trying to design an axiology that incorporates both). That seems like a perfectly defensible approach to me!
Still, we’re left with the question, “If your view focuses on existing (and sure-to-exist) people, why is it bad to create a miserable person?”
Someone with person-affecting views could reply as follows:
“While I concentrate my caring budget on one perspective (existing and sure-to-exist people/beings), that doesn’t mean my concern for the interests of possible people/beings is zero. My approach to dealing with merely possible people is essentially ‘don’t be a jerk.’ That’s exactly why I’m sometimes indifferent between creating a medium-happy possible person and a very happy possible person. I understand that the latter is better for possible people/beings, but since I concentrate my caring budget on existing (and sure-to-exist) people/beings, bringing the happier person into existence usually isn’t a priority to me. Lastly, you’re probably going to ask why my notion of ‘don’t be a jerk’ is asymmetric: why not ‘don’t be a jerk’ by creating people who would be grateful to be alive (at least in instances where it’s easy/cheap to do so)? To this, my reply is that creating a specific person singles out that person (from the sea of possible people/beings) in a way that not creating them does not. There’s no answer to ‘What do possible people/beings want?’ that applies to all conceivable beings, so I cannot do right by all of them anyway. By not giving an existence slot to someone who would be grateful to exist, I admit that I’m arguably failing to benefit a particular subset of possible people/beings (the ones who would be grateful to get the slot). Still, other possible people/beings don’t mind not getting the spot, so there’s at least a sense in which I didn’t disrespect possible people/beings as a whole interest group. By contrast, if I create someone who hates being alive, saying ‘Other people would be grateful in your spot’ doesn’t seem like a defensible excuse. ‘Not creating happy people’ only means I’m not giving maximum concern to possible people/beings, whereas ‘creating a miserable person’ means I’m flat-out disrespecting someone specific, whom I chose to ‘highlight’ from the sea of all possible people/beings (in the most real sense); there doesn’t seem to be a defensible excuse for that.”
The long answer: My post Population Ethics Without [an Objective] Axiology: A Framework.
I’m not sure I really follow (though I admit I’ve only read the comment, not the post you’ve linked to). Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn’t automatically do that, so there’s no general reason to add happy people if it doesn’t satisfy a preference of someone who is here already? Couldn’t you show that adding suffering people isn’t automatically bad by the same reasoning, since it doesn’t necessarily violate an existing preference? (Also, on the word “objective”: you can definitely have a view of morality on which satisfying existing preferences or doing what people value is all that matters, but it is mind-independently true that this is the correct morality, which makes it a realist view as academic philosophers classify things, and hence a view on which morality is objective in one sense of “objective”. Hence why I think “objective” should be tabooed.)
Pretty much, but my point is only that this is a perfectly defensible way to think about population ethics, not that I expect everyone to find it compelling over alternatives.
As I say in the longer post:
I agree with what you write about “objective” – I’m guilty of violating your advice.
(That said, I think there’s a sense in which preference utilitarianism would be unsatisfying as a “moral realist” answer to all of ethics because it doesn’t say anything about what preferences to adopt. Or, if it did say what preferences to adopt, then it would again be subject to my criticism – what if objective preference utilitarianism says I should think of my preferences in one particular way but that doesn’t resonate with me?)
I tried to address this in the last paragraph of my previous comment. It gets a bit complicated because I’m relying on a distinction between “ambitious morality” and “minimal morality” ( = “don’t be a jerk”) which also only makes sense if there’s no objective axiology.
I don’t expect the following to be easily intelligible to people used to thinking within the moral realist framework, but for more context, I recommend the section “minimal morality vs. ambitious morality” here. This link explains why I think it makes sense to have a distinction between minimal morality and ambitious morality, instead of treating all of morality as the same thing. (“Care morality” vs. “cooperation morality” is a similar framing, which probably tells you more about what I mean here.) And my earlier comment (in particular, the last paragraph in my previous comment) already explained why I think minimal morality contains a population-ethical asymmetry.
I’d guess contractualists and rights-based theorists (less sure about deontologists generally) would normally take the asymmetry to be true, because if someone is never born, there are no claims or rights of theirs to be concerned with.
I don’t know how popular it is among consequentialists, virtue ethicists or those with mixed views. I wouldn’t expect it to be extremely uncommon, nor would I expect the vast majority to accept it.
Just to clarify, I wouldn’t say that. :)
But the book does briefly take up the Asymmetry, and makes a couple of arguments against it. The point I was trying to make in the first section is that these arguments don’t seem convincing.
The questions that aren’t addressed are those regarding interpersonal outweighing — e.g. can purported goods morally outweigh extreme suffering? Can happy lives morally outweigh very bad lives? (As I hint in the post, one can reject the Asymmetry while also rejecting interpersonal moral outweighing of certain kinds, such as those that would allow some to experience extreme suffering for the pleasure of others, or those that would allow extremely miserable lives to be morally outweighed by a large number of happy lives, cf. Vinding, 2020, ch. 3.)
These questions do seem of critical importance to our future priorities. Even if one doesn’t think that they need to be raised in a popular book that promises a deep dive on population ethics, they at least deserve to be discussed in depth by aspiring effective altruists.
That doesn’t seem true to me (see MichaelPlant’s comment).
Also, there’s a selection effect in academic moral philosophy where people who don’t find the concept of “intrinsic value” / “the ethical value of a life” compelling won’t go on to write paper after paper about it. For instance, David Heyd wrote one of the earliest books on “population ethics” (the book was called “Genethics” but the term didn’t catch on) and argued that it’s maybe “outside the scope of ethics.” Once you said that, there isn’t a lot else to say. Similarly, according to this comment by peterhartree, Bernard Williams also has issues with the way other philosophers approach population ethics. He argues for his position of reasons anti-realism, which says that there’s no perspective external to people’s subjective reasons for action that has the authority to tell us how to live.
If you want an accurate count on philosophers’ views on population ethics, you have to throw the net wide to include people who looked at the field, considered that it’s a bit confused because of reasons anti-realism, and then moved on rather than repeating arguments for reasons anti-realism. (The latter would be a bit boring because you’d conclude by saying something like “different positions on population ethics are similarly defensible – it depends on what people care to emphasize.”)
Could a focus on reducing suffering flatten the interpretation of life into a simplistic pleasure/pain dichotomy that does not reflect the complexity of nature? I find it counterintuitive to assume that wild nature is plausibly net negative because of widespread wild animal suffering (WWOTF, p. 213).