I think that in the build-up to ASI, nuclear and pandemic risks would increase, but afterwards they would likely be solved. So let’s assume someone is trying to minimize existential risk overall. If one eventually wants ASI (or thinks it is inevitable), the question is when the optimal time to build it is. Suppose one thinks the background existential risk not caused by AI is 0.1% per year, and the existential risk from AI is 10% if it is developed now. Then the question becomes, “How much does existential risk from AI decrease by delaying it?” A decade of delay accrues roughly 1% of cumulative background risk, so delaying only pays off if AI risk falls by more than that: if one thinks we can get the existential risk from AI below roughly 9% in a decade, it would make sense to delay; otherwise it would not.
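
Here is a minimal sketch of that break-even arithmetic, using only the illustrative numbers from the paragraph above (0.1% background risk per year, 10% AI risk now, a 10-year delay) and a simple two-stage model I'm assuming for illustration: background risk accrues during the delay, and the (hopefully reduced) AI risk is faced afterwards.

```python
# Illustrative break-even calculation. All figures are the hypothetical
# inputs from the text, not actual risk estimates.

BACKGROUND_RISK_PER_YEAR = 0.001   # 0.1% non-AI existential risk per year
AI_RISK_NOW = 0.10                 # 10% existential risk if ASI is built now
DELAY_YEARS = 10

def total_risk_if_delayed(ai_risk_after_delay: float) -> float:
    """Total existential risk if we wait DELAY_YEARS, then build ASI.

    Two stages: survive the background risk during the delay, then face
    whatever AI risk remains afterwards.
    """
    survive_delay = (1 - BACKGROUND_RISK_PER_YEAR) ** DELAY_YEARS
    return (1 - survive_delay) + survive_delay * ai_risk_after_delay

# Build-now baseline is just the AI risk today.
risk_now = AI_RISK_NOW

# Compare a few possible values for AI risk after a decade of work.
for ai_risk_later in (0.10, 0.09, 0.08):
    delayed = total_risk_if_delayed(ai_risk_later)
    verdict = "delay" if delayed < risk_now else "build now"
    print(f"AI risk after delay {ai_risk_later:.0%}: total {delayed:.2%} -> {verdict}")
```

With these inputs, the cumulative background risk over the decade is about 1%, so the break-even point lands just above 9%: delaying reduces total existential risk only if AI risk can be pushed below roughly that level, which is the threshold used above.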