One key issue is that we very likely do not know enough about what utopia means or how to achieve it. We also don't know enough about the current expected value of the long-run future (even conditional on survival). And we likely won't make much progress on these difficult questions before transformative AI or other x-risks arrive. Reducing P(extinction) therefore seems to be a necessary condition for being in a position to use safe AI to make progress on the important fields we need to figure out in order to understand what utopia means and how to increase P(utopia) (while avoiding downside risks such as S-risks in the process).
Examples of fields that could be particularly important to point our safe and aligned AI towards:
- Moral philosophy (in particular, checking whether total utilitarianism is correct or whether we should update to better alternatives)
- Governance mechanisms and economics, to implement our extrapolated ideal moral system in the world
It might be preferable to focus on reducing P(doom) AND on reducing the risk of a premature, irreversible race to expand into the universe, to give us ample time to use our safe and aligned AI to solve other important problems and make substantial progress in the natural sciences, social sciences, and philosophy (a "long reflection" with AI that need not be long on astronomical timescales).