I think we should weigh reducing AI risk by slowing it down against other continuing sources of x-risk. I'm also concerned about a pause becoming permanent, increasing risk when unpaused, or only getting one chance to pause. However, if AI progress is much faster than now, I think a pause could increase the expected value of the long-run future.
I think it is very unclear whether building AI would decrease or increase non-AI risks.
My guess is that a decentralized / tool AI would increase non-AI x-risk by e.g. making it easier to build biological weapons, and a world government / totalizing ASI would, conditional on not killing everyone, decrease x-risk.
I think that in the build-up to ASI, nuclear and pandemic risks would increase, but afterwards they would likely be solved. So let's assume someone is trying to minimize existential risk overall. If one eventually wants ASI (or thinks it is inevitable), the question is when building it is optimal. If one thinks that the background existential risk not caused by AI is 0.1% per year, and that the existential risk from AI is 10% if it is developed now, then the question becomes, "How much does existential risk from AI decrease by delaying it?" A decade of delay adds roughly 1% of cumulative background risk, so if one thinks we can get the existential risk from AI below 9% in that decade, it would make sense to delay; otherwise it would not.
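To make the breakeven arithmetic explicit, here is a minimal sketch. It assumes the 0.1% background rate is constant over the delay and that the background and AI risks are independent and combine multiplicatively; neither assumption is stated in the comment above, and the numbers are purely illustrative.

```python
def total_xrisk(ai_risk: float, background_annual: float, delay_years: int) -> float:
    """Probability of an existential catastrophe from either source, given the
    AI risk at the time ASI is developed and a constant annual background rate."""
    p_background = 1 - (1 - background_annual) ** delay_years
    return 1 - (1 - p_background) * (1 - ai_risk)

# Developing ASI now: 10% AI risk, no accumulated background risk.
risk_now = total_xrisk(ai_risk=0.10, background_annual=0.001, delay_years=0)

# Delaying a decade: ~1% background risk accrues; ~9% AI risk is roughly the breakeven.
risk_delayed = total_xrisk(ai_risk=0.09, background_annual=0.001, delay_years=10)

print(f"build now:      {risk_now:.4f}")      # ~0.1000
print(f"delay a decade: {risk_delayed:.4f}")  # ~0.0991, slightly better than building now
```

On these toy numbers, delaying ten years is worthwhile only if the decade of work pushes AI risk below about 9%, since the delay itself costs roughly one percentage point of background risk.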
Increase relative to what counterfactual? I think it might be true both that the annual risk of a bad event goes up with AI and that the all-time risk decreases, on the assumption that we're basically reaching a hurdle we have to pass anyway; and I'm highly sceptical that we gain much in practice by implementing forceful procedures that slow us down from getting there.