6. Other risks are non-negligible but don’t guarantee our extinction before aligned AI is developed
Doesn’t the argument go through even if other (non-AI) risks are negligible, as long as AI risk is not negligible? I think you just want “Other risks don’t guarantee our extinction before aligned AI is developed”.
Yep, I agree you can generate the time of perils conclusion even if AI risk is the only x-risk we face. I was attempting to empirically describe a view that seems to be popular in the x-risk space here, namely that x-risks besides AI are also cause for concern, but you’re right that we don’t strictly need this full premise.