What is the argument against Thanos-ing all of humanity to save the lives of other sentient beings?

I’ve been thinking recently about what expanding my moral circle would mean concretely, in terms of what actions it would logically force me to take, and I’ve really run up against a wall in trying to be anything but a speciesist from a utilitarian perspective.

I assume a long-termist perspective based largely on the arguments touched upon in this video by Hilary Greaves and the papers she cites. Broadly, it is really, really hard to measure the consequences of our actions over the short-term and especially over the long-term. So if we care about the long-term future, then we must focus our efforts on interventions whose effects on the further future are more predictable—like those reducing existential risks.

I assume:

  1. this basic long-termist argument for mitigating existential risks

  2. the expansion of my moral circle to include at least certain animals (e.g. monkeys or domesticated animals)

  3. the highest risks to the extinction of humanity are anthropogenic

Thinking about the highest risks to the extinction of humanity, based on Toby Ord’s estimates in The Precipice (his rough odds of each catastrophe over the next century):

  • AI − 1/10

  • engineered pandemics − 1/30

  • climate change/nuclear war − 1/1000

Many outcomes of AI, nuclear war, and climate change seem very likely to also pose extinction risks for most animals, especially those that tend to be given priority when expanding one’s moral circle (e.g. monkeys, domesticated animals).

I believe there is more uncertainty about engineered pandemics. Only 61% of all human diseases are zoonotic in origin, though 75% of new diseases discovered in the last decade are zoonotic. It seems unlikely that even an engineered pandemic (unless it was specifically designed to destroy all life on Earth) would affect all animals. So maybe the risk to animals from engineered pandemics is more like 1/100, 1/1000, or even less.

Even after discounting the pandemic risk to animals in this way, anthropogenic animal extinction risks still likely dwarf non-anthropogenic animal extinction risks.
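
To sanity-check that claim, here is a rough back-of-envelope sketch in Python. The human-extinction figures are Ord’s estimates quoted above, plus his combined figure for natural risks of roughly 1 in 10,000 per century; the discounts for how likely each catastrophe is to also wipe out the relevant animals are entirely my own guesses, so the output is illustrative only.

```python
# Rough comparison of anthropogenic vs. natural extinction risk *to animals*,
# per century. Human-extinction odds follow Ord's estimates quoted above;
# the "also kills the animals" shares are my own guesses.

anthropogenic = {
    # name: (P(catastrophe this century), P(relevant animals also wiped out | catastrophe))
    "unaligned AI":        (1 / 10,   0.9),   # guess: most AI catastrophes are bad for animals too
    "engineered pandemic": (1 / 30,   0.1),   # guess: most pathogens would spare most animals
    "nuclear war":         (1 / 1000, 0.9),
    "climate change":      (1 / 1000, 0.9),
}

# Ord's combined estimate for all natural risks (asteroids, supervolcanoes, etc.)
# is on the order of 1 in 10,000 per century; assume these hit animals as hard as us.
natural_animal_risk = 1 / 10_000

# Naive sum, ignoring overlap between scenarios; fine for an order-of-magnitude check.
anthro_animal_risk = sum(p * share for p, share in anthropogenic.values())

print(f"anthropogenic risk to animals per century: ~{anthro_animal_risk:.4f}")
print(f"natural risk to animals per century:       ~{natural_animal_risk:.4f}")
print(f"ratio:                                     ~{anthro_animal_risk / natural_animal_risk:.0f}x")
```

Even with a heavy discount on the pandemic scenario, the anthropogenic total comes out two to three orders of magnitude above the natural background with these placeholder numbers.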

So, assuming the expansion of my moral circle to include at least certain animals, it seems that getting rid of humans (the clear source of those risks) would be the best thing to do over the long term.

But I, like the Avengers in their fight against Thanos, do not believe that getting rid of humans, those most likely to cause extinction (or suffering, in the Avengers’ case), is the answer. Thus, I am clearly overvaluing humans to a very, very large degree over the long term. Why am I wrong?

Some arguments I thought of to counter this conclusion, none of which seemed very strong to me:

A. The relative value of a human compared to an animal is so high that keeping humanity around is worth it, even at the cost of 4 billion more years of animal life without us. The quantities of life here are difficult to conceptualize, but it seems unlikely that, from a utilitarian perspective that puts human lives and animal lives in the same moral circle, humanity’s continued existence is worth more than the billions of years of animal life that would continue without us.
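
To make the comparison in A concrete, here is a toy calculation. The 1,000x human-welfare multiplier, the 500-year horizon before an anthropogenic catastrophe, and the normalization of animal welfare to 1 unit per year are all placeholders I invented, so this only shows the shape of the argument:

```python
# Toy comparison for argument A: total welfare if humanity disappears now
# (animals persist for ~4 billion more years) vs. if humanity persists but an
# anthropogenic catastrophe ends most animal life within a few centuries.
# Every number here is a placeholder, not an estimate.

YEARS_REMAINING = 4_000_000_000       # roughly until the Sun renders Earth uninhabitable

animal_welfare_per_year = 1.0         # normalize total animal welfare to 1 unit/year
human_welfare_per_year = 1_000.0      # suppose humanity is worth 1,000x all animals, per year
years_until_catastrophe = 500         # suppose anthropogenic extinction within ~5 centuries

world_without_humans = animal_welfare_per_year * YEARS_REMAINING
world_with_humans = (animal_welfare_per_year + human_welfare_per_year) * years_until_catastrophe

print(f"no humans:   {world_without_humans:.3e} welfare-units")
print(f"with humans: {world_with_humans:.3e} welfare-units")
# Even a 1,000x multiplier on human welfare loses to 4 billion years of animal
# life; argument A only works if the multiplier is made astronomically large.
```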

B. The opportunity cost is worth it to have humanity try to protect life on Earth from natural existential risks and extend animal life past whatever natural catastrophes Earth may encounter. This doesn’t seem reasonable given the current order-of-magnitude gap between anthropogenic and non-anthropogenic extinction risks.

Additionally, all of Ord’s largest existential risks emerged within the past couple of hundred years, and nuclear weapons, engineered pandemics, and AI all within the last hundred. From this historical evidence, it seems likely that continued human existence will produce greater animal existential risk rather than less.

C. The opportunity cost is worth it to have humanity try to extend animal life past the 4-billion-year mark. This argument seems stronger because it creates the potential for effectively unbounded future animal life. But there is a pretty big ‘if’ in whether we (humans) will make it that far and solve the uninhabitable-Earth problem. Is that risk worth billions of years of animal life? I don’t think so.

Additionally, if we include potential non-Earth animal life in the mix, then our solving the uninhabitable-Earth problem would likely expose all of that non-Earth animal life to the same outsized anthropogenic risks.
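
The same kind of toy expected-value sketch applies to C, with an invented probability of success and an invented size for the extended future; the point is only how sensitive the verdict is to those two numbers:

```python
# Toy expected value for argument C: a (near-)certain 4 billion years of animal
# life vs. a gamble that humanity survives its own risks and extends life
# beyond Earth's natural lifetime. Both the probability and the size of the
# extended future are invented for illustration.

EARTH_YEARS_LEFT = 4e9

p_we_extend_life = 0.01          # guess: chance we survive ourselves AND solve the uninhabitable-Earth problem
extended_future_years = 1e11     # guess: extra life-years gained if we do

ev_without_humans = EARTH_YEARS_LEFT
# Treat the failure branch as ~0 animal life-years, per the risk argument above.
ev_with_humans = p_we_extend_life * (EARTH_YEARS_LEFT + extended_future_years)

print(f"EV without humans: {ev_without_humans:.2e} life-years")
print(f"EV with humans:    {ev_with_humans:.2e} life-years")
# With these placeholders the gamble loses, but it flips once extended_future_years
# exceeds roughly EARTH_YEARS_LEFT * (1 - p) / p (about 4e11 here), which is the
# "infinite potential" pull of argument C.
```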

D. We expand our moral circle to include animals that are more likely to survive anthropogenic existential risks, and weight their lives over the next 4 billion years highly enough to counteract all of the animal life unlikely to survive those risks. This argument can be combined with the prior three to strengthen them. But (as with argument C) I still think that most animal life continuing for up to 4 billion years beats very limited animal life for 4 billion years plus the slim potential of indefinite human and animal life.

E. This argument isn’t practical because humanity cannot actually be wiped out. I agree, but I don’t think the impracticality of the thought experiment invalidates the merits of the arguments.