Why do you think the increase in racing between nations would outweigh the decrease in racing between companies? I have the opposite intuition, especially if the government strikes a cosmopolitan tone: “This isn’t an arms race; this is a global project to better humanity. We want every talented person from every nation to come work on this with us. We publicly commit to using AGI to do ___ and not ___.”
I have trouble understanding why a nation would initiate violent conflict with the U.S. over this. What might that scenario look like?
Finally, if the government hired AGI companies’ ex-employees, people concerned about x-risk would be heavily represented. (Besides, I think government is generally more inclined than companies to care about negative externalities/x-risk; the current problem is ignorance, not indifference.)
I agree with different parts of your comment to different extents.
Regarding cosmopolitanism, I think your pro-government hopes just need to be tempered by the facts. The loudest message on AI from the US government is that they want to maintain a lead over China, which is the opposite of a “cosmopolitan tone”, whereas at least in their public statements, AGI companies talk about public benefit.
Regarding violent conflict, I don’t think it should be so hard to imagine. Suppose that China and Russia are in a new cold war, and are both racing to develop a new AI superweapon. Then they might covertly sabotage each other’s efforts in similar ways to how the US and Israel currently interfere with Iran’s efforts to build the bomb.
Regarding ignorance vs. indifference, it’s true that government is better-incentivised to mitigate negative externalities on its population, and one day might include a number of people who care about and know about existential risks comparable to the companies themselves. This is why I said things could change in the future. Currently, though, they don’t.
Thanks for the thoughtful response, Ryan.