Thanks for this. I do wonder about the prospect of ‘solving’ extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified? I’m not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?
I’m going to speak for myself again:
I view our current situation as a fork in the road. Either very bad outcomes or very good ones. There is no slowing down. There is no scenario where we linger before the fork for decades or centuries.
As for the very bad outcomes, I’m not that worried about extinction; dead people cannot suffer, at least. What I’m most concerned about is locking ourselves into a state of perpetual hell (e.g. undefeatable totalitarianism, or something like Christiano’s first tale of doom), and then spreading that hell across the universe.
The very good outcomes would mean that we’re recognizably beyond the point where bad things could happen: we’ve built a superintelligence, it’s well-aligned, and it’s clear to everyone that there are no risks anymore. The superintelligence will prevent wars, pandemics, asteroids, supervolcanoes, disease, death, poverty, suffering, you name it. There will be no such thing as “existential risk”.
Of course, I’m keeping an eye on developments and I’m ready to reconsider this position at any time; but right now this is how I see the world.