I think EAs believe this is definitely possible, most likely through the creation of an aligned superintelligence. That could reduce x-risk to infinitesimal levels, provided there are no other intelligent actors we might encounter. I think the general strategy could be summarized as ‘reduce extinction risk as much as possible until we can safely build and deploy an aligned superintelligence, then let the superintelligence (dis)solve all other problems’.
After the creation of an aligned superintelligence, society’s resources could be focused on other problems. However, I think some people also believe there would be no other problems left at that point: with superintelligence, all the other problems, like animal suffering, become trivial to solve.
But most people—including myself—seem not to have given much thought to what other problems might still exist in an era of superintelligence.
If you believe a strong version of superintelligence is impossible, this complicates the whole picture, but you would at least have to include the consideration that in the future we are likely to have substantially higher (individual and/or collective) intelligence.