If humanity wipes itself out, those wild animals are going to continue suffering forever.
If we only partially destroy civilization, we’re going to set back the solution to problems like wild animal suffering until we rebuild civilization, if we ever do. (And in the meantime, we will suffer as our ancestors suffered.)
If we nuke the entire planet down to bedrock or turn the universe into paperclips, that might be a better scenario than the first one in terms of suffering. But then all of the anthropic measure is confined to the past, where it suffers, and we’re forgoing the creation of an immeasurably larger measure of extremely positive experiences to balance things out.
On the other hand, if we just manage to pass through the imminent bottleneck of potential destruction and emerge victorious on the other side—where we have solved coordination and AI—we will have the capacity to solve problems like wild animal suffering, global poverty, or climate change with a snap of our fingers, so to speak.
That is to say, problems like wild animal suffering will either be solved with trivial effort a few decades from now, or we will have much, much bigger problems. Either way—this is my personal view, not necessarily that of other “long-termists”—current work on these issues will be mostly in vain.
If humanity wipes itself out, those wild animals are going to continue suffering forever.
Not forever. Only until the planet becomes too hot to support complex life (<1 billion years from now). Given that the universe can support life for 1–100 trillion years, this is a relatively short amount of suffering compared to what could be.
And it’s also only on our planet! That’s a far more restricted scope than the suffering that could spread if humanity remains alive. (Although, as I write in my own answer, I don’t think humanity would spread wild animals beyond the solar system.)
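To put rough numbers on that first point (a back-of-the-envelope comparison, taking the figures above at face value):

$$\frac{\text{remaining habitable time on Earth}}{\text{time the universe can support life}} < \frac{10^9 \ \text{years}}{10^{12} \ \text{years}} = 0.1\%$$

So even against the low end of the 1–100 trillion year range, Earth-bound wild animal suffering occupies at most about a thousandth of the timeline over which things could go well or badly.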
Thanks for this. I do wonder about the prospect of ‘solving’ extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become small enough that moving focus onto something like animal suffering would ever be justified? I’m not convinced they do, as extinction in their eyes is so catastrophically bad that even small reductions in its probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?
I’m going to speak for myself again:
I view our current situation as a fork in the road. Either very bad outcomes or very good ones. There is no slowing down. There is no scenario where we linger before the fork for decades or centuries.
As far as very bad outcomes go, I’m not worried about extinction that much; dead people cannot suffer, at least. What I’m most concerned about is locking ourselves into a state of perpetual hell (e.g. undefeatable totalitarianism, or something like Christiano’s first tale of doom), and then spreading that hell across the universe.
The very good outcomes would mean that we’re recognizably beyond the point where bad things could happen; we’ve built a superintelligence, it’s well-aligned, and it’s clear to everyone that there are no risks anymore. The superintelligence will prevent wars, pandemics, asteroids, supervolcanoes, disease, death, poverty, suffering, you name it. There will be no such thing as “existential risk”.
Of course, I’m keeping an eye on developments and I’m ready to reconsider this position at any time; but right now this is the way I see the world.