AI Regulation is Unsafe


Concerns over AI safety and calls for government control of the technology are highly correlated, but they should not be.

There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests.

Governments are poor stewards of both types of risk. Misuse regulation is like the regulation of any other technology: there are reasonable rules that the government might set, but omission bias and incentives to protect small but well-organized groups at the expense of everyone else will produce plenty of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments have weak incentives to care about long-term, global costs and benefits, and strong incentives to push the development of AI forward for their own purposes.

Noticing that AI companies put the world at risk is not enough to support greater government involvement in the technology. Government involvement is likely to exacerbate the most dangerous parts of AI while limiting the upside.

Default government incentives

Governments are not social welfare maximizers. Government actions are an amalgam of the actions of thousands of personal welfare maximizers who are loosely aligned and constrained. In general, governments have strong incentives for myopia, violent competition with other governments, and negative-sum transfers to small, well-organized groups. These exacerbate existential risk and limit the potential upside.

The vast majority of the costs of existential risk occur outside the borders of any single government and beyond the election cycle of any current decision-maker, so we should expect governments to ignore them.

We see this expectation fulfilled in governments' reactions to other long-term or global externalities, e.g., debt and climate change. Governments around the world are happy to impose trillions of dollars in direct costs and substantial default risk on future generations because the costs and benefits to those generations hold little sway in the next election. Similarly, governments spend billions subsidizing fossil fuel production and ignore potential solutions to global warming, like a carbon tax or geoengineering, because the long-term and extraterritorial costs and benefits of climate change do not enter their optimization function.

AI risk is no different. Governments will happily trade off global, long-term risk for national, short-term benefits. The most salient way they will do this is through military competition. Government regulations on private AI development will not stop them from racing to integrate AI into their militaries. Autonomous drone warfare is already happening in Ukraine and Israel. The US military has contracts with Palantir and Anduril, which use AI to augment military strategy or to power weapons systems. Governments will want to use AI for predictive policing, propaganda, and other forms of population control.

The case of nuclear tech is informative. This technology was strictly regulated by governments, but they still raced with each other and used the technology to create the most existentially risky weapons mankind has ever seen. Simultaneously, they cracked down on civilian use. Now, we’re in a world where all the major geopolitical flashpoints have at least one side armed with nuclear weapons and where the nuclear power industry is worse than stagnant.

Governments' military ambitions mean that their regulation will preserve the most dangerous misuse risks from AI. They will also push the AI frontier and train larger models, so we will still face misalignment risks, which may be exacerbated if governments are less interested or less skilled in AI safety techniques. Government control over AI development is likely to slow AI progress overall. Accepting the premise that slowing down is good is not sufficient to invite regulation, though, because government control will cause a relative speed-up of the most dystopian uses for AI.

In the short term, governments are primarily interested in protecting well-organized groups from the effects of AI: copyright holders, drivers' unions, and other professional lobby groups. Here's Zvi Mowshowitz's summary of last year's congressional AI hearing:

The Senators care deeply about the types of things politicians care deeply about. Klobuchar asked about securing royalties for local news media. Blackburn asked about securing royalties for Garth Brooks. Lots of concern about copyright violations, about using data to train without proper permission, especially in audio models. Graham focused on section 230 for some reason, despite numerous reminders it didn’t apply, and Hawley talked about it a bit too.

This kind of regulation has less risk than misaligned killbots, but it does limit the potential upside from the technology.

Private incentives for AI development are far from perfect. There are still large externalities and competitive dynamics that may push progress too fast. But identifying this problem is not enough to justify government involvement. We need a reason to believe that governments can reliably improve the incentives facing private organizations. Governments' strong incentives for myopia, military competition, and rent-seeking make it difficult to find such a reason.

Negative Spillovers

The default incentives of both governments and profit-seeking companies are imperfect. But the whole point of AI safety advocacy is to change these incentives, or to convince decision-makers to act despite them, so you can buy that governments are imperfect and still support calls for AI regulation. The problem is that even extraordinarily successful advocacy in government can be redirected into opposite and catastrophic effects.

Consider Sam Altman’s testimony before Congress last May. No one was convinced of anything except the power of AI fear to advance their own pet projects. Here is a characteristic quote:

Senator Blumenthal addressing Sam Altman: I think you have said, in fact, and I’m gonna quote, ‘Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.’ You may have had in mind the effect on jobs. Which is really my biggest nightmare in the long term.

A reasonable upper bound for the potential of AI safety lobbying is the environmental movement of the 1970s. It was extraordinarily effective. Its advocacy led to a series of laws, including the National Environmental Policy Act (NEPA), that are among the most comprehensive and powerful regulations ever passed. These laws are not clearly in service of some pre-existing government incentive; indeed, they regulate the federal government more strictly than anything else and often get in its way. The cultural and political advocacy of the environmental movement made a large counterfactual impact with laws that still have massive influence today.

This success has turned sour, though, because the massive influence of these laws is now a massive barrier to decarbonization. NEPA has exemptions for oil and gas but not for solar or wind farms, exemptions for highways but not high-speed rail. The costs of complying with NEPA’s bureaucratic proceduralism hurt Terraform Industries a lot more than Shell. The standard government incentives for concentrating benefits on small, legible groups and diffusing costs across everyone else and the future redirected the political will and institutional power of the environmental movement into some of the most environmentally damaging and economically costly laws ever passed.

AI safety advocates should not expect to do much better than this, especially since many of their proposals are specifically based on permitting AI models the way NEPA permits construction projects.

Belief in the potential for existential risk from AI does not imply that governments should have greater influence over its development. Governments' incentives make them misaligned with the goal of reducing existential risk. They are not rewarded or punished for costs or benefits that fall outside their borders or beyond their terms in office, and that is where nearly all of the importance of existential risk lies. Governments are rewarded for rapid development of military technology that empowers them over their rivals. They are also rewarded for providing immediate benefits to well-organized, legible groups, even when these rewards come at great expense to larger or more remote groups of people. These incentives exacerbate the worst misuse and misalignment risks of AI and limit its potential economic upside.