I agree there should be more reflection (moral or factual) on the assumption that we should prioritize preventing human extinction. :)
That being said, we should emphasize that some of the risk factors for extinction also seem to be risk factors for more suffering and s-risks. This suggests that negative utilitarians and s-risk reducers wouldn't support shifting focus away from addressing those risk factors, unless there are better opportunities for impact. Examples of such risk factors include increased conflict, polarization, and the unsafe development of AI, especially development that neglects the cooperative measures needed to prevent potential conflict between different AIs or their operators.
Of course, this might not apply to all risk factors for extinction. Still, s-risk reducers and suffering reducers might think that it's bad to act (intentionally or otherwise) in a way that results in people trying to bring extinction about (see https://www.utilitarianism.com/nu/nufaq.html#3.2 ), which raises the question of precisely how much emphasis to put on this as a community.
Further considerations include whether other civilizations (e.g., aliens) exist, and if so, how many. This also makes it unclear what antinatalism recommends. If the focus is on fewer births, then we need to find out whether human civilization would increase or decrease the total number of future births compared to alternative scenarios where, e.g., aliens own the resources humans would have owned.
Also remember that an existential risk (x-risk) is a “risk of an existential catastrophe, i.e. one that threatens the destruction of humanity’s longterm potential”. This means existential risks aren’t the same as extinction risks. S-risks that destroy humanity’s longterm potential are also x-risks.
An argument against advocating human extinction is that cosmic rescue missions might eventually be possible. If the future of posthuman civilization converges toward utilitarianism, and posthumanity becomes capable of expanding throughout and beyond the entire universe, it might be possible to intervene in far-flung regions of the multiverse and put an end to suffering there.
Excellent point. Playing devil's advocate, one might be skeptical that humanity would ever carry out these "cosmic rescue missions", whether out of cruelty or indifference, or simply because we will never be advanced enough. Still, it's a good concept to keep in mind.