The ‘unambitious’ thing you ask the AI to do would create worldwide political change. It is absurd to think that it wouldn’t. Even ordinary technological change creates worldwide political change at that scale!
And an AGI having that little impact is also not plausible. If that’s all you do, the second mover—and possibly the third, fourth, and fifth, if everyone moves slow—spits out an AGI and flips the table, because you can’t be that unambitious and still block other AGIs from performing pivotal acts. Even if you want to think small, the other actors won’t. Even if they are approximately as unambitious, they will have different goals, and the interaction will immediately amp up the chaos.
There is just no way for an actual AGI scenario to meet these guidelines. Any attempt to draw a world which meets them has written the bottom line first and is torturing its logic trying to construct a vaguely plausible story that might lead to it.
I believe you are too quick to label this story as absurd. Ordinary technology has no capacity to correct itself toward explicitly smaller changes that still satisfy the objective; an AGI does. If the AGI wants to prevent wars while minimally disturbing worldwide politics, I find it plausible that it would succeed.
Similarly, the fact that an AGI has very little visible impact does not mean it isn’t effectively in control. For a true AGI, it should be trivial to interrupt the second mover without any great upheaval, and to suppress other AGIs from coming into existence without causing too much of a stir.
I do somewhat agree with your reservations, but I find your way of addressing them uncharitable (e.g. “at best completely immoral”).