It’s not a new idea, but in the long run it is plausible, and governments are starting to think about it:
At a recent AI conference in Washington, Senator Mark Warner, chair of the Senate intelligence committee, wondered aloud whether “it would be in the national security interest of our country to [merge] Open AI, Microsoft, Anthropic, Google, maybe throw in Amazon.” He noted that the US didn’t have “three Manhattan Projects, we had one”.
In addition to the advantages you describe, there are several huge disadvantages from a safety point of view:

1. It could increase investment (US government annual expenditure is $6T, whereas Alphabet’s is $0.2T), accelerating AI progress and leaving less time for safety work.
2. It could increase racing between nations, which could be more hostile than racing between companies, perhaps even to the point of violence.
3. The government currently cares much less about, and has much less expertise in, existential risk from AI than leading AGI companies do.
On balance, I think most people who care about x-risk wouldn’t want to see AI nationalised at the moment. But this could change in the future. In particular, point (1) becomes less important as Google’s research budget grows. Point (2) might go up or down depending on China’s growth and AI progress. Finally, point (3) becomes less important as more x-risk experts enter government.
Thanks for the thoughtful response, Ryan.

Why do you think the increase in racing between nations would outweigh the decrease in racing between companies? I have the opposite intuition, especially if the government strikes a cosmopolitan tone: “This isn’t an arms race; this is a global project to better humanity. We want every talented person from every nation to come work on this with us. We publicly commit to using AGI to do ___ and not ___.”
I have trouble understanding why a nation would initiate violent conflict with the U.S. over this. What might that scenario look like?
Finally, if the government hired AGI companies’ ex-employees, people concerned about x-risk would be heavily represented. (Besides, I think government is generally more inclined than companies to care about negative externalities and x-risk; the current problem is ignorance, not indifference.)
I agree with different parts of your comment to different extents.
Regarding cosmopolitanism, I think your pro-government hopes need to be tempered by the facts. The loudest message on AI from the US government is that it wants to maintain a lead over China, which is the opposite of a “cosmopolitan tone”. AGI companies, by contrast, at least talk about public benefit in their public statements.
Regarding violent conflict, I don’t think it should be so hard to imagine. Suppose that China and Russia are in a new cold war, and are both racing to develop a new AI superweapon. Then they might covertly sabotage each other’s efforts, much as the US and Israel currently interfere with Iran’s efforts to build the bomb.
Regarding ignorance vs indifference, it’s true that governments are better incentivised to mitigate negative externalities on their populations, and may one day employ as many people who care and know about existential risk as the companies themselves do. This is why I said things could change in the future. They just haven’t yet.