Thanks for this. I’d like to ask you the same question I’m asking others in this thread.
I do wonder about the prospect of ‘solving’ extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified? I’m not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?
I think EAs believe that this is definitely possible, most likely through the creation of an aligned superintelligence. That could reduce x-risk to infinitesimal levels, provided there are no other intelligent actors we could encounter. I think the general strategy could be summarized as ‘reduce extinction risk as much as possible until we can safely build and deploy an aligned superintelligence, then let the superintelligence (dis)solve all other problems’.
After the creation of an aligned superintelligence, society’s resources could shift to other problems. However, some people also seem to think that once there is an aligned superintelligence there are no other problems left: with superintelligence, other problems like animal suffering become trivial to solve.
But most people—including myself—seem not to have given much thought to what other problems might still exist in an era of superintelligence.
If you believe a strong version of superintelligence is impossible, this complicates the whole picture, but you’d at least have to include the consideration that in the future we are likely to have substantially higher (individual and/or collective) intelligence.