I know very little about this topic, so please forgive me if this is an ignorant question.
It seems to me that it would be a good idea for the U.S. federal government to ban all private companies from working with sufficiently large models/building AGI (and encourage other countries to do the same). It should then create a new agency dedicated to creating aligned AGI and hire all the bright minds who were just put out of work. This agency’s activity should be autonomous; the point is to get everyone on the same team, not to police the research process.
Here’s my argument:
AGI development is a winner-takes-all situation.
Because it is a winner-takes-all situation, in the status quo, companies are highly incentivized to prioritize capabilities over safety/alignment; they don’t want anyone else to beat them to AGI.
Merging all these companies will improve their capabilities, and there will be less external competition, so the agency will have less fear of a competitor beating it to AGI; thus, it can spend relatively more time on alignment than on capabilities. This increases the odds of creating an aligned AGI.
As a bonus, this would probably increase the odds that the AGI is aligned with someone who is benevolent and cares about the general public.
I’ve never heard anyone suggest that AI be nationalized before, so I feel like I must be missing something. What are some of the problems with this proposal?
It’s not a new idea; in the long run it is plausible, and governments are starting to think about it.
In addition to the advantages you describe, there are several huge disadvantages from a safety point-of-view:
1. it could increase investment (US govt annual expenditure is $6T whereas Alphabet’s is $0.2T)
2. it could increase racing between nations, which could be more hostile than racing between companies, perhaps even to the point of violence
3. the government currently cares much less about, and has much less expertise in, existential risk from AI than leading AGI companies do.
On balance, I think most people who care about x-risk wouldn’t want to see AI nationalised at the moment. But this could change in the future. In particular, point (1) becomes less important as Google’s research budget grows. Point (2) might go up or down depending on China’s growth and AI progress. Finally, point (3) becomes less important as more x-risk experts enter government.
Thanks for the thoughtful response, Ryan.
Why do you think the increase in racing between nations would outweigh the decrease in racing between companies? I have the opposite intuition, especially if the government strikes a cosmopolitan tone: “This isn’t an arms race; this is a global project to better humanity. We want every talented person from every nation to come work on this with us. We publicly commit to using AGI to do ___ and not ___.”
I have trouble understanding why a nation would initiate violent conflict with the U.S. over this. What might that scenario look like?
Finally, if the government hired AGI companies’ ex-employees, people concerned about x-risk would be heavily represented. (Besides, I think government is generally more inclined than companies to care about negative externalities/x-risk; the current problem is ignorance, not indifference.)
I agree with different parts of your comment to different extents.
Regarding cosmopolitanism, I think your pro-government hopes just need to be tempered by the facts. The loudest message on AI from the US government is that they want to maintain a lead over China, which is the opposite of a “cosmopolitan tone”, whereas at least in their public statements, AGI companies talk about public benefit.
Regarding violent conflict, I don’t think it should be so hard to imagine. Suppose that China and Russia are in a new cold war, and are both racing to develop a new AI superweapon. Then they might covertly sabotage each other’s efforts in similar ways to how the US and Israel currently interfere with Iran’s efforts to build the bomb.
Regarding ignorance vs indifference, it’s true that governments are better incentivised to mitigate negative externalities on their populations, and one day might include a comparable number of people who care about and know about existential risk as the companies themselves do. This is why I said things could change in the future. Currently, though, they don’t.
It’s a pretty logical thought if you really think AI is as dangerous as nuclear weapons!