Paul Christiano has a notion of competitiveness, which seems relevant. _Directions and desiderata for AI control_ seems to be the place it's stated most clearly.
The following quote (emphasis in the original) is one of the reasons he gives for desiring competitiveness, and seems to be in the same ballpark as the reason you gave:
You can’t unilaterally use uncompetitive alignment techniques; we would need global coordination to avoid trouble. If we _don’t know how to build competitive benign AI, then users/designers of AI systems have to compromise_ efficiency in order to maintain reliable control over those systems. The most efficient systems will by default be built by whoever is willing to accept the largest risk of catastrophe (or perhaps by actors who consider unaligned AI a desirable outcome).
It may be possible to avert this kind of race to the bottom by effective coordination by e.g. enforcing regulations which mandate adequate investments in alignment or restrict what kinds of AI are deployed. Enforcing such controls domestically is already a huge headache. But internationally things are even worse: a country that handicapped its AI industry in order to proceed cautiously would face the risk of being overtaken by a less prudent competitor, and avoiding that race would require effective international coordination.
Ultimately society will be able and willing to pay some efficiency cost to reliably align AI with human interests. But the higher that cost, the harder the coordination problem that we will need to solve. I think the research community should be trying to make that coordination problem as easy as possible.
Thanks for the link. So I guess I should amend what Paul's and OpenAI's goal seems to be, to "create AGI, make sure it's aligned, and make sure it's competitive enough to become widespread."
Seems right, though I don’t know to what extent Paul’s view is representative of OpenAI’s overall view.