I agree that we should be considering “grey area” cases where both motivation and ability may not ensure domination. Indeed, I argue we need to further muddy the motivational landscape to include the “rogue” human motivations we find in abundance today.
The first AGI systems won’t need to ‘go rogue’ for their behavior to be ‘rogue’ as viewed by humanity at large. It is overwhelmingly likely that these systems will be born into, and harnessed in service of, the corporate and military struggles that already exist today.
These systems will be ‘rogue’ and in conflict by design. And just like their masters, they will be well acquainted with many ‘rogue’ methods of achieving their goals.
It seems likely that, over time, humans will voluntarily relinquish control and understanding to their AIs in service of the goals they share with those AIs.
Of course, over time the AIs’ goals may not stay aligned, but even knowing this we are financially and politically incapable of resisting this path, wherever it may lead. It is an obvious prisoner’s dilemma in which the humans who defect stand to gain enormously.
Ironically, I think this nearly unavoidable path LOWERS our chances of a unilateral fast takeoff. Long before one is possible, many human+machine corporations and militaries will already be intently focused on detecting and stopping any OTHER rogue AI attempting it.
Still, I am not sanguine. This militarized community will be fast-moving and hard for humans to understand. Our chances of managing to shape such a society seem even lower than our already-proven inability to shape human society today. And our competing interests will make unified human action nearly impossible.