I’m Anthony DiGiovanni, a suffering-focused AI safety researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.
A longtermist critique of “The expected value of extinction risk reduction is positive”
[Apologies for length, but I think these points are worth sharing in full.]
As someone who is highly sympathetic to the procreation asymmetry, I have to say, I still found this post quite moving. I’ve had, and continue to have, joys profound enough to know the sense of awe you’re gesturing at. If there were no costs, I’d want those joys to be shared by new beings too.
Unfortunately, assuming that we’re talking about practically relevant cases where creating a “happy” life also entails suffering of the created person and other beings, there are costs in expectation. (I assume no one has moral objections to creating utterly flawless lives, so the former is the sense in which I read “neutrality.” See also this comment. Please let me know if I’ve misunderstood.) And I find those costs qualitatively more serious than the benefits. Let me see if I can convey where I’m coming from.
I found it surprising that you wrote:
I have refrained, overall, from framing the preceding discussion in specifically moral terms — implying, for example, that I am obligated to create Michael, instead of going on my walk. I think I have reasons to create Michael that have to do with the significance of living for Michael; but that’s not yet to say, for example, that I owe it to Michael to create him, or that I am wronging Michael if I don’t.
Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life. (If one is a welfarist consequentialist, a fortiori this calls into question the idea that the uncreated happy person is “wronged” in any prudential sense.)
To flesh that out a bit: You acknowledged, in sketching out Michael’s hypothetical life, these pains:
I see a fight with that same woman, a sense of betrayal, months of regret. … I see him on his deathbed … cancer blooming in his stomach
When I imagine the prospect of creating Michael, these moments weigh pretty gravely. I feel the pang of knowing just how utterly crushing a conflict with the most important person in one’s life can be; the pit in the gut, the fear, shock, and desperation. I haven’t had cancer, but I at least know the fear of death, and can only imagine it gets more haunting when one actually expects to die soon. By all reports, cancer is clearly a fate I couldn’t possibly wish on anyone, and suffering it slowly in a hospital sounds nothing short of harrowing.
I simply can’t comprehend creating those moments in good conscience, short of preventing greater pain broadly construed. It seems cruel to do so. By contrast, although Michael-while-happy would feel grateful to exist, it doesn’t seem cruel to me at all to not invite his nonexistent self to the “party,” in your words. As you acknowledge, the objection is that “if [he] hadn’t been created, [he] wouldn’t exist, and there would be no one that [my] choice was ‘worse for.’” I don’t see a strong enough reason to think the Michael-while-happy experiences override the Michael-while-miserable experiences, given the difference in moral gravity. It seems cold comfort to tell the moments of Michael that beg for relief, “I’m sorry for the pain I gave you, but it’s worth it for the party to come.”
I feel inclined, not to “disagree” with them, but rather to inform them that they are wrong
Likewise I feel inclined to inform the Michael-creators that they are wrong, in implicitly claiming that the majority vote of Michael-while-happy can override the pleas of Michael-while-miserable. Make no mistake, I abhor scope neglect. But this is no more a question of ignoring numbers than it is for someone who would not torture a person in exchange for any number of beautiful lifeless planets, created in a corner of the universe where no one could ever observe them. It's about prioritizing needs over wants, the tragic over the precious.
Lastly, you mention the golden rule as part of your case. I personally would not want to be forced by anyone—including my past self, who often acts in a state of myopia and doesn’t remember how awful the worst moments are—to suffer terribly because they judged it was worth it for the goods in life.
I do of course have some moral uncertainty on this. There are some counterintuitive implications to the view I sketched here. But I wouldn’t say this is an unnecessary addition to the hardness of population ethics.
[linkpost] When does technical work to reduce AGI conflict make a difference?: Introduction
Why you should consider trying SSRIs
I was initially hesitant to post this, out of some vague fear of stigma and stating the obvious, and not wanting people to pathologize my ethical views based on the fact that I take antidepressants. This is pretty silly for two reasons. First, I think that if my past self had read something like this, he could have been spared years of suffering, and there are probably several readers in his position. EAs are pretty open about mental illness anyway. Second, if anything the fact that I am SFE “despite” currently not being depressed at all (indeed quite consistently happy), thanks to SSRIs, should make readers less likely to attribute my views to a mental illness.[1]
I claim that even if you don’t feel so bad as to qualify as capital-D depressed, you might feel noticeably less bad on a daily basis if you try SSRIs.[2] That has been my experience, and I can honestly say this has likely been the cheapest sustainable boost in my well-being I’ve ever found. Being happier has also probably made me more effective/productive, though this is harder to assess.
(Obviously, my experience is not universal, I’m probably just lucky to an extent, this is not expert medical advice, and you might either find that SSRIs are ineffective for you or that the side effects are less tolerable than I have found them. You should definitely do your own research!)
In the months (at least) prior to SSRIs, my level of depression was “mild” according to the Burns checklist. I felt rather lonely during downtime, and like a bit of a failure for not having an exciting social life. I didn’t endorse the latter judgment, and felt pretty fulfilled by my altruistic work, but that dissatisfaction persisted even when I tried to reason myself out of it (or tried taking up new hobbies). This wasn’t debilitating by any means—so much so that I didn’t really feel like I “deserved” to receive treatment intended for depression, and yes I realize how dumb that sounds in hindsight—but it was something of a pall hanging over my life all the same.
SSRIs just dispelled those feelings.
Waiting so long to give these things a try was a mistake. I made that mistake out of a combination of the aforementioned suspicion that I wasn’t depressed enough to need them, and overestimation of how difficult it would be to get a prescription.[3] Just because my suffering wasn’t as deep as others’, that didn’t mean it needed to exist.
This medication isn’t magic; my life isn’t perfect, and I still have some other dissatisfactions I’m working on. But, for the amount of difference this has made for me, it seemed negligent not to share my experience and hopefully encourage others in an analogous position to show themselves a bit of compassion.
[1] Yes, I have seen people do this before—not to me personally, but to other SFEs.
[2] This probably holds for other antidepressants too. I’m just focusing on SSRIs here because I have experience with them, and they incidentally have a worse reputation than, e.g., Wellbutrin.
[3] At least in the U.S., there are online services where you can share your symptoms with a doctor and just get a prescription at a pretty low price. For some reason, I expected a lot more awkward bureaucracy and mandatory therapy than this. I won’t get specific here because I don’t want to be a shill, but if you’re curious, feel free to PM me.
e.g. 2 minds with equally passionate complete enthusiasm (with no contrary psychological processes or internal currencies to provide reference points) respectively for and against their own experience, or gratitude and anger for their birth (past or future). They can respectively consider a world with and without their existences completely unbearable and beyond compensation. But if we’re in the business of helping others for their own sakes rather than ours, I don’t see the case for excluding either one’s concern from our moral circle.
…
But when I’m in a mindset of trying to do impartial good I don’t see the appeal of ignoring those who would desperately, passionately want to exist, and their gratitude in worlds where they do.
I don’t really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person “completely unbearable”? Who is “desperate” to exist? (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure NU, but by hypothesis this is set aside.) Obviously not themselves. They wouldn’t exist in that counterfactual.
To me, the clear case for excluding intrinsic concern for those happy moments is:
“Gratitude” just doesn’t seem like compelling evidence in itself that the grateful individual has been made better off. You have to compare to the counterfactual. In daily cases with existing people, gratitude is relevant as far as the grateful person would have otherwise been dissatisfied with their state of deprivation. But that doesn’t apply to people who wouldn’t feel any deprivation in the counterfactual, because they wouldn’t exist.
I take it that the thrust of your argument is, “Ethics should be about applying the same standards we apply across people as we do for intrapersonal prudence.” I agree. And I also find the arguments for empty individualism convincing. Therefore, I don’t see a reason to trust as ~infallible the judgment of a person at time T that the bundle of experiences of happiness and suffering they underwent in times T-n, …, T-1 was overall worth it. They’re making an “interpersonal” value judgment, which, despite being informed by clear memories of the experiences, still isn’t incorrigible. Their positive evaluation of that bundle can be debunked by, say, this insight from my previous bullet point that the happy moments wouldn’t have felt any deprivation had they not existed.
In any case, I find upon reflection that I don’t endorse tradeoffs of contentment for packages of happiness and suffering for myself. I find I’m generally more satisfied with my life when I don’t have the “fear of missing out” that a symmetric axiology often implies. Quoting myself:
Another takeaway is that the fear of missing out seems kind of silly. I don’t know how common this is, but I’ve sometimes felt a weird sense that I have to make the most of some opportunity to have a lot of fun (or something similar), otherwise I’m failing in some way. This is probably largely attributable to the effect of wanting to justify the “price of admission” (I highly recommend the talk in this link) after the fact. No one wants to feel like a sucker who makes bad decisions, so we try to make something we’ve already invested in worth it, or at least feel worth it. But even for opportunities I don’t pay for, monetarily or otherwise, the pressure to squeeze as much happiness from them as possible can be exhausting. When you no longer consider it rational to do so, this pressure lightens up a bit. You don’t have a duty to be really happy. It’s not as if there’s a great video game scoreboard in the sky that punishes you for squandering a sacred gift.
I suspect there are examples of things EAs do out of consideration for other humans that are just as costly, and they justify them on the grounds that this comes out of their “fuzzies” budget. e.g. Investing in serious romantic or familial relationships. I’m personally rather skeptical that I would spend any time and money saved by being non-vegan on altruistically important things, even if I wanted to. (Plus there is Nikola’s point that if you already do care a lot about animals, the emotional cost of acting in a way that financially supports factory farming could be nontrivial.)
I found several of these arguments uncompelling. While you acknowledge that your approach is one of “many weak arguments,” the overall case doesn’t seem persuasive.
Specifically:
#1: This seems to be a non sequitur. If relatively short-term problems are also neglected, why exactly does this suggest that interventions that improve the long-term future would converge with those that improve the relatively short term (yet not the extremely short term)? All you've shown here is that we shouldn't be surprised if interventions that improve the relatively short term are quite different from those that people typically prioritize.
#2: Prima facie this is fair enough. But I’d expect the tractability of effectively preventing global catastrophic and/or x-risks to not be high enough for this to be competitive with malaria nets, if one is only counting present lives.
#4: Conversely, though, we’ve seen that the longtermist community has identified AI as one of the most plausible levers for positive or negative impact on the long term future, and increasing economic growth will on average increase AI capabilities more than safety. Re: EA meta, if your point is that getting more people into EA increases efforts on more short- and long-term interventions, sure, but this is entirely consistent with the view that the most effective interventions to improve the long vs short term will diverge. Maybe the most effective EA meta work from a longtermist perspective is to spread longtermism specifically, not EA in broad strokes. Tensions between short-term animal advocacy and long-term (wild) animal welfare have already been identified, e.g., here and here.
#5: For those who don’t have the skills for longtermist direct work and instead purely earn to give, this is fair enough, but my impression is that longtermists don’t focus much on effective donations anyway. So this doesn’t persuade me that longtermists with the ability to skill up in direct work that aims at the long term would just as well aim at the short term.
#6: If you grant longtermist ethics, by the complex cluelessness argument, aiming at improving the short term doesn’t help you avoid this feedback loop problem. Your interventions at the short term will still have long term effects that probably dominate the short term effects, and I don’t see why the feedback on short term effectiveness would help you predict the sign and magnitude of the long term effects. (Having said this, I don’t think longtermists have really taken complex cluelessness for long term-aimed interventions as seriously as they should, e.g., see Michael’s comment here. My critiques of your post here should not be taken as a wholesale endorsement of the most popular longtermist interventions.)
#8: I agree with the spirit of this point, that long term plans will be extremely brittle. But even if the following is true:
making the world in 10 years time or 25 years time as strong as possible to deal with the challenges beyond 10 or 25 years from now is likely the best way to plan for the long-term
I would expect “as strong as possible” to differ significantly from a near- vs longtermist perspective. Longtermists will probably want to build the sorts of capacities that are effective at achieving longtermist goals (conditional on your other arguments not providing a compelling case to the contrary), which would be different from those that non-longtermists are incentivized to build.
#9: I don’t agree that this is “common sense.” The exact opposite seems common sense to me—if you want to optimize X, it’s common sense that you should do things aimed at improving X, not aimed at Y. This is analogous to how “charity begins at home” doesn’t seem commonsensical, i.e., if the worst off people on this planet are in poor non-industrialized nations, it would be counterintuitive if the best way to help them were to help people in industrialized nations. Or if the best way to help farmed and wild animals were to help humans. (Of course there can be defeaters to common sense, but I’m addressing your argument on its own terms.)
+1, the dismissive tone of the following passage especially left a bad taste in my mouth:
After all, when thinking about what makes some possible universe good, the most obvious answer is that it contains a predominance of awesome, flourishing lives. How could that not be better than a barren rock? Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.
It should be pretty clear to someone who has studied alternatives to total symmetric utilitarianism (not all of which are averagist or person-affecting views!) that some of these alternatives are thoroughly motivated by “humane,” rather than “nihilistic,” intuitions.
I am (clearly) not Tobias, but I’d expect many people familiar with EA and LW would get something new out of Ch 2, 4, 5, and 7-11. Of these, seems like the latter half of 5, 9, and 11 would be especially novel if you’re already familiar with the basics of s-risks along the lines of the intro resources that CRS and CLR have published. I think the content of 7 and 10 is sufficiently crucial that it’s probably worth reading even if you’ve checked out those older resources, despite some overlap.
which goes against the belief in a net-positive future upon which longtermism is predicated
Longtermism per se isn’t predicated on that belief at all—if the future is net-negative, it’s still (overwhelmingly) important to make future lives less bad.
For what it’s worth, my experience hasn’t matched this. I started becoming concerned about the prevalence of net-negative lives during a particularly happy period of my own life, and have noticed very little correlation between the strength of this concern and the quality of my life over time. There are definitely some acute periods where, if I’m especially happy or especially struggling, I have more or less of a system-1 endorsement of this view. But it’s pretty hard to say how much of that is a biased extrapolation, versus just a change in the size of my empathy gap from others’ suffering.
This is how Parfit formulated the Repugnant Conclusion, but as it's usually invoked in population ethics discussions about the (de)merits of total symmetric utilitarianism, it need not be the case that the muzak and potatoes lives never suffer.
The real RC that some kinds of total views face is that world A with lives of much more happiness than suffering is worse than world Z with more lives of just barely more happiness than suffering. How repugnant this is, for some people like myself, depends on how much happiness or suffering is in those lives on each side. I wrote about this here and here.
But I want to be clear that this normative disagreement isn’t evidence of any philosophical defect on our part.
Oh I absolutely agree with this. My objections to that quote have no bearing on how legitimate your view is, and I never claimed as much. What I find objectionable is that by using such dismissive language about the view you disagree with, not merely critical language, you’re causing harm to population ethics discourse. Ideally readers will form their views on this topic based on their merits and intuitions, not based on claims that views are “too divorced from humane values to be worth taking seriously.”
complaining that we didn’t preface every normative claim with the tedious disclaimer “in our opinion”
Personally I don’t think you need to do this.
This sociological claim isn’t philosophically relevant. There’s nothing inherently objectionable about concluding that some people have been mistaken in their belief that a certain view is worth taking seriously. There’s also nothing inherently objectionable about making claims that are controversial.
Again, I didn’t claim that your dismissiveness bears on the merit of your view. The objectionable thing is that you’re confounding readers’ perceptions of the views with labels like “[not] worth taking seriously.” The fact that many people do take this view seriously suggests that that kind of label is uncharitable. (I suppose I’m not opposed in principle to being dismissive to views that are decently popular—I would have that response to the view that animals don’t matter morally, for example. But what bothers me about this case is partly that your argument for why it’s not worth taking seriously is pretty unsatisfactory.)
I’m certainly not calling for you to pass no judgments whatsoever on philosophical views, and “merely report on others’ arguments,” and I don’t think a reasonable reading of my comment would lead you to believe that.
And certainly if we’re making philosophical errors, or overlooking important counterarguments, I’m happy to have any of that drawn to my attention.
Indeed, I gave substantive feedback on the Population Ethics page a few months back, and hope you and your coauthors take it into account. :)
Longtermism, as a worldview, does not want present day people to suffer; instead, it wants to work towards a future with as little suffering as possible, for everyone.
This is a bit misleading. Some longtermists, myself included, prioritize minimizing suffering in the future. But this is definitely not a consensus among longtermists, and many popular longtermist interventions will probably increase future suffering (by increasing future sentient life, including mostly-happy lives, in general).
I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
It really isn’t clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the “intuition of neutrality.” In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don’t mean to pick on you in particular!) devoted to those three views. And I’m not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.
I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that’s been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.
I think I learned a lot while I was there, and I think the other summer research fellows whose views I have a sense of felt the same
+1. I’d say that applying for and participating in their fellowship was probably the best career decision I’ve made so far. Maybe 60-70% of this was due to the benefits of entering a network of people whose altruistic efforts I greatly respect; the rest was the direct value of the fellowship itself. (I haven’t thought a lot about this point, but on a gut level that seems like the right breakdown.)
appeal to some form of partiality or personal prerogative seems much more appropriate to me than denying the value of the beneficiaries
I don’t think this solves the problem, at least if one has the intuition (as I do) that what makes this tradeoff “very repugnant” isn’t the current existence of the people who are extremely harmed to produce happy lives. It doesn’t seem any more palatable to allow arbitrarily many people in the long-term future (rather than the present) to suffer for the sake of sufficiently many more added happy lives. Even if those lives aren’t just muzak and potatoes, but very blissful. (One might think that is “horribly evil” or “utterly disastrous.” And this isn’t just a theoretical concern, because in practice increasing the extent of space settlement would in expectation enable both many miserable lives and many more blissful lives.)
ETA: Ideally I’d prefer these discussions not involve labels like “evil” at all. Though I sympathize with wanting to treat this with moral seriousness!
Longtermism is probably not really worth it if the far future contains much more suffering than happiness
Longtermism isn’t synonymous with making sure more sentient beings exist in the far future. That’s one subset, which is popular in EA, but an important alternative is that you could work to reduce the suffering of beings in the far future.
unless I think that I’m at least as well informed than the average respondent about where this money should go
This applies if your ethics are very aligned with the average respondent, but if not, it is a decent incentive. I’d be surprised if almost all of EAs’ disagreement on cause prioritization were strictly empirical.
No, longtermism is not redundant
I’m not keen on the recent trend of arguments that persuading people of longtermism is unnecessary, or even counterproductive, for encouraging them to work on certain cause areas (e.g., here, here). This is for a few reasons:
It’s not enough to believe that extinction risks within our lifetimes are high, and that extinction would constitute a significant moral problem purely on the grounds of harms to existing beings. Arguments that reducing those risks is tractable enough to outweigh the near-term good done by focusing on global human health or animal welfare seem lacking in the cases I’ve seen for prioritizing extinction risk reduction on non-longtermist grounds.
Take the AI alignment problem as one example (among the possible extinction risks, I’m most familiar with this one). I think it’s plausible that the collective efforts of alignment researchers and people working on governance will prevent extinction, though I’m not prepared to put a number on this. But as far as I’ve seen, there haven’t been compelling cost-effectiveness estimates suggesting that the marginal dollar or work-hour invested in alignment is competitive with GiveWell charities or interventions against factory farming, from a purely neartermist perspective. (Shulman discusses this in this interview, but without specifics about tractability that I would find persuasive.)
More importantly, not all longtermist cause areas are risks that would befall currently existing beings. MacAskill discusses this a bit here, including the importance of shaping the values of the future rather than (I would say “complacently”) supposing things will converge towards a utopia by default. Near-term extinction risks do seem likely to be the most time-sensitive thing that non-downside-focused longtermists would want to prioritize. But again, tractability makes a difference, and for those who are downside-focused, there simply isn’t this convenient convergence between near- and long-term interventions. As far as I can tell, s-risks affecting beings in the near future fortunately seem highly unlikely.