The next existential catastrophe is likelier than not to wipe out all animal sentience on the planet
Edit: OK, almost done being nerd-sniped by this. I think it basically comes down to:
What’s the probability that the existential catastrophe comes from a powerful optimizer, or something that turns into a powerful optimizer, one that is arbitrarily close to paperclipping?
Maybe something survives a paperclipper. It wants to turn all energy into data centers, but it’s at least conceivable that something survives this. The optimizer might, say, disassemble Mercury and Venus to turn them into a Matryoshka brain but not need further such materials from Earth. Earth still might get some heat emanating from the sun despite all of the solar panels nested around it, and be the right temperature to turn the whole thing into data centers. But not all materials can be turned into data centers, so maybe some of the ocean is left in place. Maybe the Earth’s atmosphere is intentionally cooled for faster data centers, but there’s still geothermal heat for some bizarre animals.
But probably not. As @Davidmanheim (who changed my mind on this) points out, you’ll probably still want to disassemble the Earth to mine out all of the key resources for computing, whether for the Matryoshka brain or the Jupiter brain, and the most efficient way to do that probably isn’t cautious precision mining.
Absent a powerful optimizer, you’d expect some animals to survive. There are a lot of fish, some of them very deep in the ocean, and ocean life seems wildly adaptable, particularly down at the bottom, where they do crazy stuff like feeding off volcanic heat vents, incorporating iron into their bodies, and withstanding pressures that crush submarines.
So by far the biggest parameter is how likely you think it is that the world ends via a powerful optimizer. That is the biggest threat in the near term, though if we don’t build ASI, or we build it safely, other existential threats loom larger.
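To make the dependence explicit, here’s a rough decomposition of the headline claim. This is only a sketch; the terms are placeholders for whatever numbers you’d plug in, not estimates:

$$
P(\text{all sentience gone}) \approx P(\text{optimizer catastrophe}) \cdot P(\text{Earth disassembled} \mid \text{optimizer}) + P(\text{other catastrophe}) \cdot P(\text{nothing survives} \mid \text{other})
$$

The argument above treats the second product as close to zero, since deep-ocean life plausibly survives most non-optimizer catastrophes, so the headline probability is driven almost entirely by the first term.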
Objective morality is epistemically inaccessible, explanatorily redundant, unnecessary for any pragmatic aim, just a relic of the way our language and cooperative schemes work. I’m not sure the idea can even really be made clear. Empirically, convergence through cooperative, argumentative means looks incredibly unlikely in any normal future. I voted for the strongest position because, relative to my peers, mine is the most relativistic view I know of, and because of my high (>0.99) credence in antirealism. But obviously morality is sort of kind of objective in certain contexts and among certain groups of interlocutors, given socially ingrained shared values.