If one doesn’t have strong time discounting in favor of the present, the vast majority of the value that can be theoretically realized exists in the far future.
As a toy model, suppose the world is habitable for a billion years, but there is an extinction risk over the next 100 years which requires substantial effort to avert.
If resources are dedicated entirely to mitigating extinction risks, there is net −1 utility each year for 100 years but a 90% chance that the world can be at +5 utility every year afterwards once these resources are freed up for direct work. (In the extinction case, there is no more utility to be had by anyone.)
If resources are split between extinction risk and improving current subjective experience, there is net +2 utility each year for 100 years, and a 50% chance that the world survives to the positive long-term future state above. It’s not hard to see that the former case has massively higher total utility, and remains so under almost any numbers in the model so long as we can expect billions of years of potential future good.
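To make the arithmetic explicit, here is a minimal sketch of the toy model in Python. All the figures (the billion-year horizon, the per-year utilities, and the survival probabilities) are just the illustrative numbers above, not estimates of anything real:

```python
# Expected total utility for the two strategies in the toy model above.
# All figures are the illustrative ones from the model, not real estimates.

HORIZON = 1_000_000_000   # years the world remains habitable
RISK_PERIOD = 100         # years during which extinction risk must be fought

def expected_utility(per_year_now, survival_prob, per_year_later=5):
    """Utility accrued during the risk period, plus the expected value of
    the long post-risk era (which contributes nothing if extinction occurs)."""
    near_term = per_year_now * RISK_PERIOD
    long_term = survival_prob * per_year_later * (HORIZON - RISK_PERIOD)
    return near_term + long_term

all_in = expected_utility(per_year_now=-1, survival_prob=0.9)  # ~4.5e9
split  = expected_utility(per_year_now=+2, survival_prob=0.5)  # ~2.5e9
print(f"all-in on x-risk: {all_in:.3g}   split effort: {split:.3g}")
```

Under these made-up numbers, the all-in strategy yields roughly 4.5 billion units of expected utility versus roughly 2.5 billion for the split strategy, and the gap only widens the longer the habitable future is assumed to be.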
A model like this relies crucially on the idea that at some point we can stop diverting resources to global catastrophic risk, or at least do so less intensively, but I think this is a reasonable assumption. We currently live in an unusually risk-prone world; it seems very plausible that pandemic risk, nuclear warfare, catastrophic climate change, unfriendly AGI, etc. could all be safely dealt with within a few centuries if modern civilization endures long enough to keep working on them.
One’s priorities can change over time as their marginal value shifts; ignoring other considerations for the moment doesn’t preclude focusing on them once we’ve passed various x-risk hurdles.
Thanks for this. I’d like to ask you the same question I’m asking others in this thread.
I do wonder about the prospect of ‘solving’ extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified? I’m not convinced they do, as extinction in their eyes is so catastrophically bad that even small reductions in its probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?
I think EAs believe this is definitely possible, most likely through the creation of an aligned superintelligence. That could reduce x-risk to infinitesimal levels, provided there are no other intelligent actors we might encounter. I think the general strategy could be summarized as ‘reduce extinction risk as much as possible until we can safely build and deploy an aligned superintelligence, then let the superintelligence (dis)solve all other problems’.
After the creation of an aligned superintelligence, society’s resources could shift to other problems. However, some people also think there would be no other problems left at that point: with superintelligence, the remaining problems like animal suffering become trivial to solve.
But most people, myself included, seem not to have given much thought to what other problems might still exist in an era of superintelligence.
If you believe a strong version of superintelligence is impossible, this complicates the whole picture, but you’d at least have to include the consideration that in the future we will likely have substantially higher (individual and/or collective) intelligence.