Governments pose larger risks than corporations: a brief response to Grace

In her article Counterarguments to the basic AI risk case, Katja Grace argues that the track record of corporations is a reason to doubt what she presents as the basic argument for AI risk.
Corporations, however, are not the largest or most powerful human organisations. The governments of the USA and China are much larger and more powerful than any corporation on Earth. Just as we should expect the largest risks to come from the most powerful AI systems (or organisations of cooperating AIs), we should expect the most powerful human organisations to pose the largest risks.
Grace suggests that the argument for AI risk implies that we should consider human organisations to pose a substantial risk in one or both of the following ways:
1. A corporation would destroy humanity rapidly. This may be via ultra-powerful capabilities at, e.g., technology design and strategic scheming, or through gaining such powers in an ‘intelligence explosion’ (self-improvement cycle). Either could happen through exceptional heights of intelligence being reached, or through highly destructive ideas being available to minds only mildly beyond our own.
2. A corporation would gradually come to control the future via accruing power and resources. Power and resources would be more available to the corporation than to humans on average, because the corporation has far greater intelligence.
Powerful governments have constructed large stockpiles of nuclear weapons that many believe pose a large risk to human flourishing (though the precise magnitude of that risk is controversial). Furthermore, there are many instances in which these weapons were apparently close to being used. Thus governments pose a substantial risk of bringing about a disaster similar to scenario 1 above, albeit not as severe.
There have been governments in history which seized a great deal of power and whose actions brought about great disasters for many people. No single government has ever held power over everybody, and governments do not seem to have an infinite lifespan. In my view it’s plausible (but not probable) that technological and economic changes could mean that neither of these trends holds in the future. Thus governments also seem to pose some risk of bringing about a disaster similar to scenario 2.
I also think it’s plausible that, if corporations rather than governments were the most powerful human organisations, we might have seen similar actions from corporations. For example, governments would obviously not allow large corporations to maintain their own nuclear arsenals, and it is plausible that some corporations would maintain an arsenal if they were allowed to. We could also speculate that the most powerful governments might limit the power of any corporation that threatened to become a rival.
The track record of corporations on its own may seem to undermine the standard AI risk argument, but I think we should consider governments as well, and it is not clear whether their record supports or undermines the argument.