6. Other risks are non-negligible but don't guarantee our extinction before aligned AI is developed
Doesn't the argument go through even if other (non-AI) risks are negligible, as long as AI risk is not negligible? I think you just want "Other risks don't guarantee our extinction before aligned AI is developed".
Yep, I agree you can generate the time of perils conclusion even if AI risk is the only x-risk we face. I was attempting here to empirically describe a view that seems popular in the x-risk space, namely that other x-risks besides AI are also cause for concern, but you're right that we don't strictly need this full premise.