I think it is very unclear whether building AI would decrease or increase non-AI risks.
My guess is that a decentralized / tool AI would increase non-AI x-risk by e.g. making it easier to build biological weapons, and a world government / totalizing ASI would, conditional on not killing everyone, decrease x-risk.
I think that in the build-up to ASI, nuclear and pandemic risks would increase, but afterwards they would likely be solved. So let’s assume someone is trying to minimize existential risk overall. If one eventually wants ASI (or thinks it is inevitable), the question is when building it is optimal. Suppose the background existential risk not caused by AI is 0.1% per year, and the existential risk from AI is 10% if it is developed now. Then the question becomes: “How much does existential risk from AI decrease by delaying it?” Delaying a decade accrues roughly 1% of additional background risk (0.1% per year for ten years), so if one thinks we can get the existential risk from AI below 9% within that decade, it would make sense to delay. Otherwise it would not.
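A minimal sketch of that break-even arithmetic, assuming the illustrative numbers above (0.1%/year background risk, 10% AI risk if built now) and treating the annual background risk as roughly independent across years:

```python
# Break-even sketch for delaying ASI, using the illustrative numbers above.
# Assumptions (not claims): non-AI existential risk is ~0.1% per year and
# roughly independent year to year; AI x-risk is 10% if ASI is built today.

background_risk_per_year = 0.001   # 0.1% per year, non-AI existential risk
ai_risk_now = 0.10                 # 10% existential risk if ASI is built now
delay_years = 10

# Extra background risk accumulated while waiting (nearly additive at these rates).
extra_background_risk = 1 - (1 - background_risk_per_year) ** delay_years

# Delaying pays off only if AI risk falls by more than the risk incurred waiting.
break_even_ai_risk = ai_risk_now - extra_background_risk

print(f"Extra background risk over {delay_years} years: {extra_background_risk:.2%}")
print(f"Delay pays off if AI x-risk can be brought below {break_even_ai_risk:.1%}")
# -> roughly 1% extra background risk, so the break-even point is just under 9%.
```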
Increase relative to what counterfactual? I think it might be true both that the annual risk of a bad event goes up with AI, and that all-time risk decreases (on the assumption that we’re basically reaching a hurdle we have to pass anyway; and I’m highly sceptical that we gain much in practice by implementing forceful procedures that slow us down from getting there).