I agree with most of what you say here.
[ETA: I now realize that I think the following is basically just restating what Pablo already suggested in another comment.]
I think the following is a plausible and stronger concern, which could be read as a stronger version of your crisp concern #3.
“Humanity has not had meaningful control over its future, but AI will now take control one way or the other. Shaping the transition to a future controlled by AI is therefore our first and last opportunity to take control. If we mess up on AI, not only have we failed to seize this opportunity, there also won’t be any other.”
Of course, AI being our first and only opportunity to take control of the future is a strictly stronger claim than AI being one such opportunity, and so it must be less likely. But my impression is that the stronger claim is sufficiently more important that it could be justified to effectively 'wager' most AI risk work on its being true.