The harm of preventing extinction
I am disturbed by the absolutely horrific things that some humans go through. The very worst things I can think of include child sex trafficking and the fact that young children are sometimes raped and abused by family members, including their parents. I have read stories about the torture of children by psychopaths. The suffering these children go through must be unimaginable to those who have not experienced it.
I considered sharing specific details of the most disturbing acts I have read about, but decided that would be inappropriate, even though I think such details might strengthen my argument. If anyone’s interested, read the Wikipedia page on serial killer Albert Fish (not for the faint of heart).
My point is that preventing human extinction inevitably subjects many, many more children to these atrocities. This doesn’t sit at all well with me and I don’t think it should sit well with any reasonable person.
I suspect the main comeback to this is that as humanity improves, we will eventually see a day when these atrocities no longer occur. I think this is just far too optimistic. Even if it is achievable, it could be millennia before we completely eradicate all abuse, and I doubt that millions more abused children is a price worth paying.
I’m not saying we should encourage extinction, I’m saying we should cease efforts to prevent it. We should redirect these resources to making the world a better place, not prolonging its existence.
On the other hand, there are also arguments for why one should work to prevent extinction even if one did have the kind of suffering-focused view that you’re arguing for; see e.g. this article. To briefly summarize some of its points:
If humanity doesn’t go extinct, then it will eventually colonize space; if we don’t colonize space, it may eventually be colonized by an alien species with even more cruelty than us.
A specific extinction risk is the creation of unaligned AI, which might first destroy humanity and then go on to colonize space; if it lacked empathy, it might create a civilization where none of the agents cared about the suffering of others, causing vastly more suffering to exist.
Trying to prevent extinction also helps avoid global catastrophic risks (GCRs); GCRs could set social progress back, causing much more violence and other kinds of suffering than we have today.
Efforts to reduce extinction risk often promote coordination, peace and stability, which can be useful for reducing the kinds of atrocities that you’re talking about.
My rough answer to this is: If someone wants to die (after thinking about it for a long time and having time to reflect on it), let them die. If they want to live, help them do that. The vast majority of people want to continue living. I don’t see how the atrocities that are experienced by humans outweigh the benefits, given that the vast majority of humans seem to have a pretty decent will to live.
(This does not hold for animals, and I think the strongest arguments for antinatalism and promoting extinction come from considering non-human suffering, but that seems different from the case you are making)
Some people don’t have the choice to die because they’re prevented from it, such as victims of abuse or torture, or of certain freak accidents.
I think this is a problem with the idea of “outweigh”. Utilitarian interpersonal tradeoffs can be extremely cruel and unfair. If you think the happiness can aggregate to outweigh the worst instances of suffering:
1. How many additional happy people would need to be born to justify subjecting a child to a lifetime of abuse and torture?
2. How many extra years of happy life for yourself would you need to justify subjecting a child to a lifetime of abuse and torture?
The framings might evoke very different immediate reactions (2 seems much more accusatory, because the person benefiting from another’s abuse and torture is the one making the decision to subject them to it), but for someone just aggregating by summation, like a classical utilitarian, they’re basically the same.
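The claimed equivalence of the two framings can be made explicit with a small formalization (a sketch only; the symbols n, h, s, y are mine, not from the discussion):

```latex
% Total (classical) utilitarianism ranks outcomes by summed welfare:
W = \sum_i u_i

% Framing 1: n new happy lives, each of welfare h, weighed against one
% life of suffering with welfare -s (s > 0). The trade is approved iff
n h - s > 0 \quad\Longleftrightarrow\quad n > s / h

% Framing 2: the positive term is instead the decision-maker's own
% extra happy years, say y years at welfare h per year:
y h - s > 0 \quad\Longleftrightarrow\quad y > s / h
```

Because the sum W carries no information about whose welfare each term belongs to, the two decisions have exactly the same form; only the label on the positive term differs, which is why the summation view treats them as interchangeable even though they feel very different.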
I think it’s put pretty well here, too:
Counterpoint (for purposes of getting it into the discussion; I’m undecided about antinatalism myself): that argument only applies to people who are already alive, and thus not to most of the people who would be affected by the decision whether to extend the human species or not (i.e. those who don’t yet exist). David Benatar argues (podcast, book) that while, as you point out, many human lives may well be worth continuing, those very same lives (he thinks all lives, but that’s more than I need to make this argument) may nevertheless not have been worth starting. If this is the case, then some or all of the lives that would come into existence by preventing extinction may also not be worth starting.
Do you have a short summary of why he thinks that someone answering the question “would you have preferred to die right after childbirth?” with “no” is not strong evidence that they should have been born? It seems like the same thing to me. I certainly prefer to exist and would be pretty sad about a world in which I wasn’t born (in that I would be willing to endure significant additional suffering in order to bring about a world in which I was born).
I don’t know what Benatar’s response to this is, but—consider this comment by Eliezer in a discussion of the Repugnant Conclusion:
As a more extreme version, suppose that we could create arbitrary minds, and chose to create one which, for its entire existence, experienced immense suffering which it wanted to stop. Say that it experienced the equivalent of being burned with a hot iron, for every second of its existence, and never got used to it. Yet, when asked whether it wanted to die, or would have preferred to die right after it was born, we’d design it in such a way that it would consider death even worse and respond “no”. Yet it seems obvious to me that it outputting this response is not a compelling reason to create such a mind.
If people already exist, then there are lots of strong reasons about respecting people’s autonomy etc. for why we should respect their desire to continue existing. But if we’re making the decision about what kinds of minds should come to existence, those reasons don’t seem to be particularly compelling. Especially not since we can construct situations in which we could create a mind that preferred to exist, but where it nonetheless seems immoral to create it.
You can of course reasonably argue that whether a mind should exist depends on whether it would want to exist, plus some additional criteria about e.g. how happy it would be. Then if we really could create arbitrary minds, we might as well (and should) create ones that were happy and preferred to exist, as opposed to ones that were unhappy and preferred to exist. But in that case we’ve already abandoned the simplicity of just basing our judgment on asking whether they’re happy with having survived to their current age.
This doesn’t seem coherent to me; once you exist, you can certainly prefer to continue existing, but I don’t think it makes sense to say “if I didn’t exist, I would prefer to exist”. If we’ve assumed that you don’t exist, then how can you have preferences about existing?
If I ask myself the question, “do I prefer a world where I hadn’t been born versus a world where I had been born”, and imagine that my existence would actually hinge on my answer, then that means that I will in effect die if I answer “I prefer not having been born”. So then the question that I’m actually answering is “would I prefer to instantly commit a painless suicide which also reverses the effects of me having come into existence”. So that’s smuggling in a fair amount of “do I prefer to continue existing, given that I already exist”. And that seems to me unavoidable—the only way we can get a mind to tell us whether or not it prefers to exist, is by instantiating it, and then it will answer from a point of view where it actually exists.
I feel like this makes the answer to the question “if a person doesn’t exist, would they prefer to exist” either “undefined” or “no” (“no” as in “they lack an active desire to exist”, though of course they also lack an active desire to not-exist). Which is probably for the better, given that there exist all kinds of possible minds that would probably be immoral to instantiate, even though once instantiated they’d prefer to exist.