I have a few questions, and several things give me pause:
Even assuming the pursuers come to understand the risks (say, that AGI may ultimately betray its users), why would that diminish its appeal? Some fraction of people have always been drawn to the pursuit of power with little concern for the risks.
Why would leaders in China view AGI they controlled as a threat to their power? AI already appears to be a key part of how the Chinese government preserves its power internally, and it's not a stretch to see how it could help massively with external power projection and economic growth as well.
Why assume Chinese incompetence in AI? China invests heavily in AI, deploys it across almost every area of society, and aims for global leadership in the field by 2030. It also has a large pool of AI researchers and engineers, vast amounts of data, and few data protections for individuals. Assuming incompetence is not only unwise; it disregards genuine Chinese achievements, and in some cases it's prejudiced. Do you really want to claim that China does not perform innovative technology research?
If China is genuinely struggling (economically, technologically, etc.), why would leaders abandon the pursuit of AGI? I would have thought the opposite. History suggests that countries which see themselves as having a narrow window of opportunity to achieve victory are the most dangerous. And fuzzy assumptions of benevolence are unwise: Xi Jinping has told the Chinese military to prepare for war, while overseeing one of the fastest military expansions in history, and he has consolidated authority around himself.
Given the risks associated with AGI development, what approach do you recommend for slowing its pursuit: a unilateral one, where countries like the US and UK take the initiative, or a multilateral one, where countries like China are included and formal agreements (including verification arrangements) are established? How would you build trust while also preventing authoritarian regimes from achieving AGI supremacy? The article you linked offers a lot of “maybes” (maybe China would not gain supremacy), but frankly, given the stakes, Western policymakers would want much higher confidence than that.