Standard policy frameworks for AI governance
CW: reference to sexual violence
A way I’ve been thinking about AI governance recently is in terms of the standard policy frames that different strands of AI governance map onto. The frame most in vogue at the moment treats AI as a standard product that carries substantial safety risks, and then applies standard policy tools from that area to regulate AI systems. One reason I find this framing useful is as a way of finding relevant case studies.
Some other frames that I think AI governance work falls within are:
A product that carries large negative externalities
This is the frame that’s being focused on the most at the moment, and I think it motivates work like looking at how the aircraft and nuclear power industries achieve such low accident rates. Lots of evals work also makes sense in this framework, insofar as evaluations aim to be the basis for standards regulators can adopt. Some corporate governance work, like whistleblower protections, also falls into this category. I think this frame makes sense when governments can unilaterally enact strong, competent regulations without being undone by national security concerns and regulatory arbitrage.
Innovation policy
One strand of AI governance work looks like innovation policy where the focus is on creating incentives for researchers to create new safety technologies. Lots of the responses to climate change fall within this frame, like subsidies for solar panels and funding for fundamental science aimed at technologies to reduce the harms of climate change. I think this type of policy is useful when coordination and regulation are really hard for some reason and so instead of adopting coordination measures (like the Kyoto or Paris climate agreements), it makes more sense to change the technology available to actors so it’s cheaper for them to produce fewer negative externalities. I think this approach works less well when, not only are actors not coordinating, but they’re competing with each other. Nuclear weapons probably look like this—it seems pretty hard to come up with innovations that change the incentives countries face so that they give up nuclear weapons of their own free will.
A national security risk
In this framework, AI is a potentially dangerous technology that adversarial state or non-state actors can use to pose criminal or terrorist threats. I think things like bio-evals are working in this framework, as is work focused on controlling open-source models, at least to some degree. In this frame, AI is treated like other WMD technologies, such as nuclear and chemical weapons, as well as other technologies that significantly enhance criminal capabilities, like malware.
Preventing competitive dynamics
Work in compute governance aimed at preventing Chinese AI firms from accessing cutting-edge chips is in this category, as is work ensuring that AI firms have good information security. This work aims to prevent competition between actors that makes it more expensive for them to adopt safety-enhancing technologies. It looks like policy aimed at preventing race-to-the-bottom dynamics, for instance policy that tries to reduce the use of corporate tax havens.
As an instrument of great power conflict
This frame views AI as an important strategic technology that it’s important for (something like) the free world to lead in. Some AI policy aimed at ensuring that the US has access to high-skilled immigration is in this camp. Other examples of this kind of policy are early nuclear weapons policy and early space policy.
Improving consumer welfare
A huge amount of normal policy work is aimed at improving consumer welfare. Some examples include competition policy, policy regulating consumer financial products, advertising restrictions, and arguably privacy policy. Quite a lot of AI policy not focused on catastrophic risk might fall into this camp, like policies aimed at ensuring that LLM services meet the same standards as relevant professional groups, e.g., that financial advice from an LLM doesn’t carry the risks that advice from someone without a financial planning license might. Another example might be work trying to prevent chatbots from having the same quasi-addictive qualities that social media has.
Political economy
This is a very large category that includes work in the bias and fairness camp, but also work on ensuring that there’s broadly shared prosperity following large-scale automation and on ensuring election integrity. I’m grouping all of these together because I think they’re focused on questions about the distribution of power and its economic consequences. Lots of the history of liberalism and the left has been focused on these kinds of questions, like the passage of Lords reform and the People’s Budget under the British Liberal government of the very early 20th century, and LBJ’s Great Society and civil rights bills.
Potentially, work on AI sentience will be in this camp, structured similarly to anti-slavery work in Britain in the early 19th century, where the oppressed group wasn’t the primary locus for change (unlike, say, second-wave feminism).
Military technology
In this frame, AI is a technology with the potential to make war worse and more dangerous. The Campaign to Stop Killer Robots fits in this category. Other examples of this kind of policy work include bans on particularly nasty types of weapons, like bouncing bombs, as well as more prosaic work like reducing the use of tactical nuclear weapons.
My impression is that for this kind of work to be effective, it has to not put militaries that adopt these limitations at a severe disadvantage. I suspect this is part of the reason why laws of war that prevent targeting civilians have been reasonably effective: armies can still fight effectively while not, for instance, committing sexual violence, whereas they can’t fight effectively using only rubber bullets.
Innovation policy part 2 (electric boogaloo)
This frame focuses on the economic growth benefits of AI and on questions like leveraging AI to improve healthcare delivery and drug discovery. I think this just looks like standard innovation policy: how to structure research grants to incentivize socially useful innovation, loan programs for startups developing socially useful products, and so on.
This is a very interesting read. I have some feedback (mostly neutral) to help further shape these ideas, mostly just to prompt some interesting extra directions of thought. It’s not a list of criticisms, just me spitballing ideas on top of your foundations:
A product that carries large negative externalities
You mention aerospace and nuclear as good demonstrations of regulation, and to an extent I agree, but a potential weakness here is that the strength of regulation comes a lot from the fact that these developers operate in markets that are monopsonies or close to it. That is, there are very few builders of these systems, they have either very few or singular customers, and they have access to top-tier legal talent. AI development is much more diverse, and I think this makes regulation harder. I’m not saying your idea on this element is bad (it’s very good), but it’s something to bear in mind. This would be an interesting governance category to split into subcategories.
Innovation policy
This is a good idea again, and my food-for-thought here relates quite closely to the point above, again with the nuclear mention. One thing I’d look into is positive influence on procurement: make it more rewarding for an organisation to buy (or make) safer AI than the financial reward for not doing so. Policing in England and Wales is experiencing a subtle shift like this right now, which has actually been very impactful.
A national security risk
Obviously it’s hard to get detailed in an overview post, but WMDs are regulated in a specific way which doesn’t necessarily marry well with NatSec. There’s some great research right now on how NatSec-related algorithms and transparency threats are beginning to be regulated, with some recent trials of regulations.
Preventing competitive dynamics
Not much to say here, as this is outside my expertise area. I’ll leave that for others.
As an instrument of great power conflict
This was an interesting point. One thing I’d highlight is that, though most of my work is in AI regulation, I’ve done a bunch of space regulation too, and a thing to bear in mind is that space law has aged horribly and is stagnant. One of the main issues is that it was written when there were three space powers (mainly the US and USSR, with the UK as a US-aligned third space power), and the regulation was written with the idea of a major technological bottleneck to space and the ability of two nations to ‘police’ it all. This is more true of the Outer Space Treaty than of some of the treaties that followed and built on it. Obviously the modern day bears very little resemblance to that, which makes things more difficult, e.g., private entities exploring space and modern technology allowing autonomy. It’s worth thinking about how we would avoid this when we don’t know what the world will look like in 10, 25, or 50 years.
Improving consumer welfare
This was a great point. Not much further direction to add here, other than looking at how some successful laws have been future-proofed against AI changes versus how some have been less successful.
Political economy
I’ve done a bunch of work in this area, but haven’t really got anything to add beyond what you’ve put.
Military Technology
One of the major bottlenecks with this category is that most new projects happen far behind closed doors for decades. EA currently lacks much of a MilTech presence, which is a missed opportunity IMO.
All in all, this is an interesting way to group AI governance areas, but I think some additional attention could be paid to how the markets in each area behave and how that affects regulation. Perhaps an extra category placing monopsonies and large, diverse supplier markets at opposite ends of a spectrum?
Executive summary: The author outlines several common policy frameworks that AI governance efforts can be categorized under, such as managing negative externalities, addressing national security risks, preventing competitive dynamics, and more.
Key points:
One common framework is to treat AI as a product with safety risks, and apply standard regulations like other dangerous products.
AI governance can also be viewed through the lens of innovation policy, creating incentives for new safety technologies.
Some treat AI as a national security risk like other dangerous technologies that criminals or terrorists could exploit.
Other work tries to prevent competitive dynamics that discourage the adoption of safety measures.
AI is sometimes seen as a technology where global leadership provides strategic advantages.
Improving consumer welfare is another framework, via transparency, privacy protections, etc.
Political economy lenses address issues of bias, fairness, automation’s impacts, election integrity.
Military applications necessitate restricting dangerous uses of AI.