2. Negative Utilitarianism
This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain rather than on cultivating joy and pleasure; making someone happy is all well and good, but if you also cause them to suffer, the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is significant.
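To make the weighing asymmetry concrete, here is a minimal sketch in Python; the populations, weights, and function names are invented purely for illustration, not drawn from any real estimates or from the arguments below:

```python
# Illustrative sketch only: the populations and weights below are invented
# to show how the same future can score positive or negative depending on
# how heavily suffering is weighted against happiness.

# A toy future: (number of lives, happiness units per life, suffering units per life)
future = [
    (9_000, 10.0, 2.0),   # mostly pleasant lives that still contain some suffering
    (1_000, 1.0, 15.0),   # a minority of lives dominated by suffering
]

def classical_value(population):
    """Classical utilitarian tally: happiness and suffering count symmetrically."""
    return sum(n * (h - s) for n, h, s in population)

def negative_utilitarian_value(population, suffering_weight=5.0):
    """Suffering-weighted tally: each unit of suffering outweighs a unit of happiness.
    (Strict negative utilitarianism would ignore the happiness term entirely.)"""
    return sum(n * (h - suffering_weight * s) for n, h, s in population)

print(classical_value(future))             # 58000.0  -> the future looks worth having
print(negative_utilitarian_value(future))  # -74000.0 -> the future looks worse than extinction
```

On the symmetric tally this toy future clearly seems worth bringing about; on the suffering-weighted tally it does not, and that flip is what drives the extinction arguments in this section.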
Taking that further
It might be that the suffering endured along the way to achieving a pain-free, joyous existence will outweigh the benefits we gain. Moreover, the struggle for such an existence, and all the suffering it involves, might turn out to have been a waste, because nonexistence is actually not that bad.
Moral presumption
It seems that an argument from moral presumption can be made against preventing extinction. We already know there is great suffering in the world. We do not yet know whether we can end suffering and create a joyous existence. Therefore, rather than gamble on an uncertain future, it might be more prudent to go extinct.
3. Argument from S-Risks
S-Risks are a familiar concept in the EA community, defined as scenarios in which an astronomical amount of suffering is created, potentially outweighing any benefit of existence. According to this argument, the human race threatens to create such scenarios, especially as AI and brain-mapping technology advance, and for the sake of the beings who would suffer in them we ought to go extinct now and avoid the risk.
4. Argument from “D-Risks”
I am coining the term “D-Risk” (short for “destruction risk”) to express a concept analogous to S-Risks. If an S-Risk is a scenario in which astronomical suffering is caused, then a D-Risk is a scenario in which astronomical destruction is caused. For example, if future humans were to develop a relativistic kill vehicle (a near-light-speed missile), we could use it to destroy entire planets that potentially harbor life (including Earth). According to this argument, we must again go extinct, this time for the sake of the lifeforms that might otherwise be destroyed.
Counterargument that is relevant to all three
We already know that there are many species on Earth, and new ones are evolving all the time. If we let ourselves go extinct, species will continue to evolve in our absence. It is possible that these species, whether non-human or newly evolved forms of humans, will live lives with even more suffering and destruction than we currently experience. We already know that we can create net positive lives for individuals, so we could probably create a species that has virtually zero suffering in the future. Therefore, it falls to us to bring this about.
What’s more, the fact that we are self-aware enough to consider the possible utility of our own species going extinct might indicate that we are the species empowered to ensure that existing human and nonhuman species, as well as future species, are ones that do not suffer.
Maybe we could destroy all species and their capacity to evolve, thus avoiding the dilemma described in the first paragraph above. But then we’d need to be certain that all other species are better off extinct.
“We already know that we can create net positive lives for individuals”
Do we know this? Thomas Ligotti would argue that even the most well-off humans live in suffering, and it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong).
That is a good point. I was actually considering that when I was making my statement. I suspect self-delusion might be at the core of the belief, held by many individuals, that their lives are net positive. In order to adapt to, or avoid, great emotional pain, humans might self-delude when faced with the question of whether their life is overall positive.
Even if it is not possible for human lives to be net positive, my first counterargument would still hold, for two different reasons.
First, we’d still be able to improve the lives of other species.
Second, it would still be valuable to prevent the much more negative lives that might arise if other kinds of humans were allowed to evolve in our absence. It might also be difficult to ensure that our extinction was permanent. Even if we took care to make ourselves extinct in a way we thought we couldn’t come back from, it’s possible that within, say, a billion years the universe would change in such a way that the spark of life leading to humans happened again. Cosmological processes operating over extremely long timescales might undo any precautions we took.
Alternatively, maybe different kinds of humans that would evolve in our absence would be more capable of having positive lives than we are.
I don’t think I am familiar with anything by Thomas Ligotti. I’ll look into his work.
Note, however, that (a) Ligotti isn’t a philosopher himself; he compiled some pessimistic outlooks and presented them the way he understood them, and (b) his book is very dark and can be too depressing even for another pessimist. In other words, proceed with caution and take care of your mental well-being while getting acquainted with his writings; he’s a reasonably competent pessimist, but above all a renowned master of, for lack of a better word, horror-like texts :)
Thank you for that reminder. As with many things in philosophy, this discussion can wander into some pretty dark territory, and it’s important to take care of our mental health.
I read this post about Thomas Ligotti on LessWrong. So far, it hasn’t been that disconcerting for me. I think that because I read a lot of Stephen King novels and other horror stories as a teenager, I would be able to read more of his thoughts without being too disturbed.
If I ever find it worthwhile to look more into pessimistic views on existence, I will remember his name.
One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought; I just think it’s interesting.
Side note: I love that “paperclipping” is a verb now.
That’s an interesting way of looking at it. That view seems nihilistic, and it seems like it could lead to hedonism: if our only purpose is to make sure we completely destroy ourselves and the universe, then nothing really matters.
I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important for securing the best possible fate of the universe (that being a quick and painless destruction, or whatever), so just doing what feels best at the moment probably would not be a great strategy for a True Believer in this hypothetical.
Great points. If you assume a negative utilitarian worldview, you can make strong arguments both for and against human extinction.