Who will be in charge once alignment is achieved?
EAs and longtermists rightly place a lot of emphasis on aligning powerful AI systems with human values. But I have always wondered: if superhuman AI starts doing the bidding of some subset of humans, what is the governance equilibrium? When considering this question, two sub-questions stand out to me.
1) Does transformative AI most likely result in the end of democracy? After all, if all useful work is done by AI, average people lose most of their leverage: they can no longer strike, and protests or revolts may be entirely futile in the face of drone-based weapons and crowd control systems.
2) Is a unipolar or multipolar world more likely? The most powerful AI systems might be developed by American tech companies, the Chinese government, or some other actor. If a multipolar world emerges, how high is the risk of war? In general, when a state feels it faces an existential threat, its willingness to use WMDs or take other drastic measures increases. If, hypothetically, a victory by a Chinese state AI system would imply complete and indefinite subjugation of the American state AI system, and vice versa, wouldn't the risk of conflict be extraordinarily high?
I’m curious to hear what the EA community has been thinking about these topics, and whether anyone has tried to estimate the likelihood of different governance outcomes in a world with aligned AI.