Why don’t governments seem to mind that companies are explicitly trying to make AGIs?

Epistemic Status: Quickly written, uncertain. I’m fairly sure there’s very little public or government concern about AGI claims, but I’m sure there’s a lot I’m missing. I’m not at all an expert on government or on AI policy.

This was originally posted to Facebook here, where it had some discussion. Many thanks to Rob Bensinger, Lady Jade Beacham, and others who engaged in the discussion there.


Multiple tech companies now are openly claiming to be working on developing AGI (Artificial General Intelligence).

As argued in a lot of the work on AGI (see Superintelligence, for example), if any firm does establish sufficient dominance in AGI, it might gain some really powerful capabilities:

  • Write bots that could convince (some) people to do almost anything

  • Hack into government weapons systems

  • Dominate vital parts of the economy

  • Find ways to interrupt other efforts to make AGI

And yet, from what I can tell, almost no one seems to really mind? Governments, in particular, seem really chill with it. Companies working on AGI get treated similarly to other exciting AI companies.

If some company were to make a claim like,

“We’re building advanced capabilities that can hack and modify any computer on the planet”

or,

“We’re building a private nuclear arsenal”,

I’d expect that to draw attention.

But with AGI, crickets.

I assume that governments dismiss corporate claims of AGI development as overconfident marketing-speak or something.

You might think,

“But concerns about AGI are really remote and niche. State actors wouldn’t have come across them.”

That argument probably applied 10 years ago. But at this point, the conversation has spread a whole lot. Superintelligence was released in 2014 and became an NYT bestseller. There are now hundreds of books on the risks of increasing AI capabilities. Elon Musk and Bill Gates have both talked about it publicly. By now, this should be one of the easiest social issues for someone technically savvy to find.

The risks and dangers (of a large power grab, rather than of alignment failures, though those matter too) are really straightforward and have been public for a long time.

Responses

In the comments on my post, a few points were raised, some of which I was roughly expecting. They include:

  1. Companies saying they are making AGI are ridiculously overconfident

  2. Governments are dramatically incompetent

  3. AGI will roll out gradually and not give one company a dominant advantage

My quick responses would be:

  1. I think many longtermist effective altruists believe these companies might have a legitimate chance of building AGI in the next 10 to 50 years, in large part because of a lot of significant research (see everything on AI and forecasting on LessWrong and the EA Forum). At the same time, my impression is that most of the rest of the world is indeed incredibly skeptical of a serious AGI transformation.

  2. I think this is true to an extent. My impression is that government inattention can change dramatically and quickly, particularly in the United States, so if this is the crux, it might be a temporary situation.

  3. I think there’s substantial uncertainty here. But I would be very hesitant to put over a 70% chance on the claim that: (a) one, or a few, of these companies will gain a serious advantage, and (b) the general-purpose capabilities of these companies will come with significant global power capabilities. AGI is general-purpose; it seems difficult to be sure that a company can make it without it becoming an international security issue of some sort or other.

Updates

This post was later shared on Reddit and Hacker News, where it received a total of around 100 more comments. The Hacker News crowd mostly suggested Response #1 (“AGI is a pipe dream that we don’t need to worry about”).