Very competent strategizing, of the “treacherous turn” variety
Self-improvement
Alien values are guaranteed unless we explicitly impart non-alien ethics to AI, which we currently don't know how to do; nor do we know (or agree on) what that ethics should look like. The next two points are synonyms of each other, and basically synonyms for "alien values" as well. The treacherous turn is indeed unlikely (link).
Self-improvement is a given; the only question is where the "ceiling" of this improvement lies. It might not be that "far", by some measure, from human intelligence, or the difference may still not allow AI to plan very far ahead, given the intrinsic unpredictability of the world. So the world may start to move extremely fast (see below), but the planning horizon and the predictability of that movement may be no longer than they are now (they could even be shorter).
For a given operationalization of AGI (e.g., one concrete enough to be forecast on), I think there is some possibility that we reach that level of capability and yet find it not very impressive or world-changing, even if it would have looked like magic to previous generations. More specifically, it seems plausible that AI will continue to improve without soon reaching shock levels that exceed humanity's ability to adapt.
I think you implicitly underestimate the cost of coordination among humans. Huge corporations are powerful but also very slow to act. AI corporations will be very powerful, very fast, and potentially very coherent in their strategy. This will be a massive change.