Researcher at the Center on Long-Term Risk. All opinions my own.
Anthony DiGiovanni
[linkpost] When does technical work to reduce AGI conflict make a difference?: Introduction
...Having said that, I do think the “deeper intuition that the existing Ann must in some way come before need-not-ever-exist-at-all Ben” plausibly boils down to some kind of antifrustrationist or tranquilist intuition. Ann comes first because she has actual preferences (/experiences of desire) that get violated when she’s deprived of happiness. Not creating Ben doesn’t violate any preferences of Ben’s.
certainly don’t reflect the kinds of concerns expressed by Setiya that I was responding to in the OP
I agree. I also happen to agree with you that the attempts to accommodate the procreation asymmetry without lexically disvaluing suffering don’t hold up to scrutiny. Setiya’s critique missed the mark pretty hard; e.g., this part just completely ignores that this view violates transitivity:
But the argument is flawed. Neutrality says that having a child with a good enough life is on a par with staying childless, not that the outcome in which you have a child is equally good regardless of their well-being. Consider a frivolous analogy: being a philosopher is on a par with being a poet—neither is strictly better or worse—but it doesn’t follow that being a philosopher is equally good, regardless of the pay.
appeal to some form of partiality or personal prerogative seems much more appropriate to me than denying the value of the beneficiaries
I don’t think this solves the problem, at least if one has the intuition (as I do) that it’s not the current existence of the people who are extremely harmed to produce happy lives that makes this tradeoff “very repugnant.” It doesn’t seem any more palatable to allow arbitrarily many people in the long-term future (rather than the present) to suffer for the sake of sufficiently many more added happy lives. Even if those lives aren’t just muzak and potatoes, but very blissful. (One might think that is “horribly evil” or “utterly disastrous,” and this isn’t just a theoretical concern either, because in practice increasing the extent of space settlement would in expectation enable both many miserable lives and many more blissful lives.)
ETA: Ideally I’d prefer these discussions not involve labels like “evil” at all. Though I sympathize with wanting to treat this with moral seriousness!
I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
It really isn’t clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the “intuition of neutrality.” In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don’t mean to pick on you in particular!) devoted to those three views. And I’m not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.
I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that’s been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.
The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
You seem to be using a different definition of the Asymmetry than Magnus is, and I’m not sure it’s a much more common one. On Magnus’s definition (which is also used by e.g. Chappell; Holtug (2004), “Person-affecting Moralities”; and McMahan (1981), “Problems of Population Theory”), bringing into existence lives that have “positive wellbeing” is at best neutral. It could well be negative.
The kind of Asymmetry Magnus is defending here doesn’t imply the intuition of neutrality, and so isn’t vulnerable to your critiques like violating transitivity, or relying on a confused concept of necessarily existing people.
Are you saying that from your and Teo’s POVs, there’s a way to ‘improve a mental state’ that doesn’t amount to decreasing suffering (/preventing it)?
No, that’s precisely what I’m denying. So, the reason I mentioned that “arbitrary” view was that I thought Jack might be conflating my/Teo’s view with one that (1) agrees that happiness intrinsically improves a mental state, but (2) denies that improving a mental state in this particular way is good (while improving a mental state via suffering-reduction is good).
Such an understanding seems plausible in a self-intimating way when one valence state transitions to the next, insofar as we concede that there are states of more or less pleasure, outside of negatively valenced states.
It’s prima facie plausible that there’s an improvement, sure, but upon reflection I don’t think my experience that happiness has varying intensities implies that moving from contentment to more intense happiness is an improvement. Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I’m comparing to no one suffering from the lack of more intense happiness), there’s no “improvement” to the painting.
It seems that one could do this all the while maintaining that such improvements are never capable of outweighing the mitigation of problematic, suffering states.
You could, yeah, but I think “improvement” has such a strong connotation to most people that something of intrinsic value has been added. So I’d worry that using that language would be confusing, especially to welfarist consequentialists who think (as seems really plausible to me) that you should do an act to the extent that it improves the state of the world.
Some things I liked about What We Owe the Future, despite my disagreements with the treatment of value asymmetries:
The thought experiment of imagining that you live one big super-life composed of all sentient beings’ experiences is cool, as a way of probing moral intuitions. (I’d say this kind of thought experiment is the core of ethics.)
It seems better than e.g. Rawls’ veil of ignorance because living all lives (1) makes it more salient that the possibly rare extreme experiences of some lives still exist even if you’re (un)lucky enough not to go through them, and (2) avoids favoring average-utilitarian intuitions.
Although the devil is very much in the details of what measure of (dis)value the total view totals up, the critiques of average, critical-level, and symmetric person-affecting views are spot-on.
There’s some good discussion of avoiding lock-in of bad (/not-reflected-upon) values as a priority that most longtermists can get behind.
I was already inclined to think dominant values can be very contingent on factors that don’t seem ethically relevant, like differences in reproduction rates (biological or otherwise) or flukes of power imbalances. So I didn’t update much from reading about this. But I have the impression that many longtermists are a bit too complacent about future people converging to the values we’d endorse with proper reflection (strangely, even when they’re less sympathetic to moral realism than I am). And the vignettes about e.g. Benjamin Lay were pretty inspiring.
Relatedly, it’s great that premature space settlement is acknowledged as a source of lock-in / reduction of option value. Lots of discourse on longtermism seems to gloss over this.
I think one crux here is that Teo and I would say that calling an increase in the intensity of a happy experience “improving one’s mental state” is itself a substantive philosophical claim. The kind of view we’re defending does not say something like, “Improvements of one’s mental state are only good if they relieve suffering.” I would agree that that sounds kind of arbitrary.
The more defensible alternative is that replacing contentment (or absence of any experience) with increasingly intense happiness / meaning / love is not itself an improvement in mental state. And this follows from intuitions like “If a mind doesn’t experience a need for change (and won’t do so in the future), what is there to improve?”
Is it thought experiments such as the ones Magnus has put forward? I think these argue that alleviating suffering is more pressing than creating happiness, but I don’t think these argue that creating happiness isn’t good.
I think they do argue that creating happiness isn’t intrinsically good, because you can always construct a version of the Very Repugnant Conclusion that applies to a view on which suffering is weighed some finite X times more than happiness, and I find those versions almost as repugnant. E.g. suppose that on classical utilitarianism we prefer to create 100 purely miserable lives plus some large number N of micro-pleasure lives over creating 10 purely blissful lives. On this new view, we’d prefer to create 100 purely miserable lives plus roughly X*N micro-pleasure lives over the 10 purely blissful lives. Another variant you could try is a symmetric lexical view where only sufficiently blissful experiences are allowed to outweigh misery. But while some people find that dissolves the repugnance of the VRC, I can’t say the same.
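To make the scaling explicit, here’s a rough sketch (the per-life welfare values $m$, $\varepsilon$, and $b$ are made-up placeholders, not figures from the original discussion): let each miserable life have welfare $-m$, each micro-pleasure life $+\varepsilon$, and each blissful life $+b$. Classical utilitarianism prefers the first world whenever
$$N\varepsilon - 100m > 10b.$$
On the view that multiplies suffering by a factor $X$, the condition becomes
$$N'\varepsilon - 100Xm > 10b,$$
which is satisfied once $N' > (100Xm + 10b)/\varepsilon$, i.e. roughly $X$ times the original $N$ when the misery term dominates. So the weighting only raises the price of the tradeoff; it doesn’t rule it out.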
Increasing the X, or introducing lexicalities, to try to escape the VRC just misses the point, I think. The problem is that (even super-awesome/profound) happiness is treated as intrinsically commensurable with miserable experiences, as if giving someone else happiness in itself solves the miserable person’s urgent problem. That’s just fundamentally opposed to what I find morally compelling.
(I like the monk example given in the other response to your question, anywho. I’ve written about why I find strong SFE compelling elsewhere, like here and here.)
You could try to use your Pareto improvement argument here, i.e. that it’s better if parents still have a preference for their child not to have been killed, but also not to feel any sort of pain related to it.
Yeah, that is indeed my response; I have basically no sympathy for the perspective that considers the pain intrinsically necessary in this scenario, or any scenario. This view seems to clearly conflate intrinsic with instrumental value. “Disrespect” and “grotesqueness” are just not things that seem intrinsically important to me, at all.
having a preference that the child wasn’t killed, but also not feeling any sort of hedonic pain about it...is this contradictory?
Depends how you define a preference, I guess, but the point of the thought experiment is to suspend your disbelief about the flow-through effects here. Just imagine that literally nothing changes about the world other than that the suffering is relieved. This seems so obviously better than the default that I’m at a loss for a further response.
This only applies to flavors of the Asymmetry that treat happiness as intrinsically valuable, such that you would pay to add happiness to a “neutral” life (without relieving any suffering by doing so). If the reason you don’t consider it good to create new lives with more happiness than suffering is that you don’t think happiness is intrinsically valuable, at least not at the price of increasing suffering, then you can’t get Dutch booked this way. See this comment.
I didn’t directly respond to the other one because the principle is exactly the same. I’m puzzled that you think otherwise.
Removing their sadness at separation while leaving their desire to be together intact isn’t a clear Pareto improvement unless one already accepts that pain is what is bad.
I mean, in thought experiments like this all one can hope for is to probe intuitions that you either do or don’t have. It’s not question-begging on my part because my point is: Imagine that you can remove the cow’s suffering but leave everything else practically the same. (This, by definition, assesses the intrinsic value of relieving suffering.) How could that not be better? It’s a Pareto improvement because, contra the “drugged into happiness” image, the idea is not that you’ve relieved the suffering but thwarted the cow’s goal to be reunited with its child; the goals are exactly the same, but the suffering is gone, and it just seems pretty obvious to me that that’s a much better state of the world.
Here’s another way of saying my objection to your original comment: What makes “happiness is intrinsically good” more of an axiom than “sufficiently intense suffering is morally serious in a sense that happiness (of the sort that doesn’t relieve any suffering) isn’t, so the latter can’t compensate for the former”? I don’t see what answer you can give that doesn’t appeal to intuitions about cases.
That case does run counter to “suffering is intrinsically bad but happiness isn’t,” but it doesn’t run counter to “suffering is bad,” which is what your last comment asked about. I don’t see any compelling reasons to doubt that suffering is bad, but I do see some compelling reasons to doubt that happiness is good.
That’s just an intuition, no? (i.e. that everyone painlessly dying would be bad.) I don’t really understand why you want to call it an “axiom” that happiness is intrinsically good, as if this is stronger than an intuition, which seemed to be the point of your original comment.
See this post for why I don’t think the case you presented is decisive against the view I’m defending.
For all practical purposes suffering is dispreferred by beings who experience it, as you know, so I don’t find this to be a counterexample. When you say you don’t want someone to make you less sad about the problems in the world, it seems like a Pareto improvement would be to relieve your sadness without changing your motivation to solve those problems—if you agree, it seems you should agree the sadness itself is intrinsically bad.
No, I know of no thought experiments or any arguments generally that make me doubt that suffering is bad. Do you?
On a really basic level my philosophical argument would be that suffering is bad, and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the ground).
It seems like you’re just relying on your intuition that pleasure is intrinsically good, and calling that an axiom we have to accept. I don’t think we have to accept that at all — rejecting it does have some counterintuitive consequences, I won’t deny that, but so does accepting it. It’s not at all obvious (and Magnus’s post points to some reasons we might favor rejecting this “axiom”).
This is how Parfit formulated the Repugnant Conclusion, but as it’s usually discussed in population ethics debates about the (de)merits of total symmetric utilitarianism, it need not be the case that the muzak-and-potatoes lives never suffer.
The real RC that some kinds of total views face is that world A with lives of much more happiness than suffering is worse than world Z with more lives of just barely more happiness than suffering. How repugnant this is, for some people like myself, depends on how much happiness or suffering is in those lives on each side. I wrote about this here and here.
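For a concrete, made-up illustration (the numbers are mine, not from the discussion): suppose world A has $10^9$ lives, each with happiness $+100$ and suffering $-10$ (net $+90$), while world Z has $10^{12}$ lives, each with happiness $+10$ and suffering $-9.9$ (net $+0.1$). The total view ranks Z above A, since $10^{12} \times 0.1 = 10^{11} > 9 \times 10^{10} = 10^9 \times 90$, even though Z contains vastly more total suffering ($\approx 10^{13}$ units, versus $10^{10}$ in A). How repugnant this seems plausibly depends on exactly those within-life amounts.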
which goes against the belief in a net-positive future upon which longtermism is predicated
Longtermism per se isn’t predicated on that belief at all—if the future is net-negative, it’s still (overwhelmingly) important to make future lives less bad.
I don’t really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person “completely unbearable”? Who is “desperate” to exist? (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure NU, but by hypothesis this is set aside.) Obviously not themselves. They wouldn’t exist in that counterfactual.
To me, the clear case for excluding intrinsic concern for those happy moments is:
“Gratitude” just doesn’t seem like compelling evidence in itself that the grateful individual has been made better off. You have to compare to the counterfactual. In daily cases with existing people, gratitude is relevant as far as the grateful person would have otherwise been dissatisfied with their state of deprivation. But that doesn’t apply to people who wouldn’t feel any deprivation in the counterfactual, because they wouldn’t exist.
I take it that the thrust of your argument is, “Ethics should be about applying the same standards we apply across people as we do for intrapersonal prudence.” I agree. And I also find the arguments for empty individualism convincing. Therefore, I don’t see a reason to trust as ~infallible the judgment of a person at time T that the bundle of experiences of happiness and suffering they underwent in times T-n, …, T-1 was overall worth it. They’re making an “interpersonal” value judgment, which, despite being informed by clear memories of the experiences, still isn’t incorrigible. Their positive evaluation of that bundle can be debunked by, say, this insight from my previous bullet point that the happy moments wouldn’t have felt any deprivation had they not existed.
In any case, I find upon reflection that I don’t endorse tradeoffs of contentment for packages of happiness and suffering for myself. I find I’m generally more satisfied with my life when I don’t have the “fear of missing out” that a symmetric axiology often implies. Quoting myself: