Lessons for AI governance from the Biological Weapons Convention

Summary

The Implementation Support Unit, the institution that organises the Biological Weapons Convention’s (BWC) annual meetings and encourages the universal adoption of the convention, has a budget of $1.4 million. In The Precipice, Toby Ord points out that this paltry budget is lower than that of an average McDonald’s restaurant.

Having a central body like the BWC has both upsides and downsides. Analysing both can help us draw insights into how to structure the international AI governance architecture. Here are my main takeaways:

  • 1. We should be particularly concerned with ‘lock-in’ effects that could arise with a centralised structure. It’s often hard to dismantle a central body once it has been set in motion. It’s even more troubling if the institution is flawed from the outset, as the BWC was.

  • 2. A central AI governance architecture would struggle to produce very specific policy recommendations at the multilateral level, and this could prove counterproductive when it comes to governing a powerful technology.

  • 3. This post only scratches the surface of what it’s trying to achieve, and there is potentially some value in expanding it further. The comparisons here may, and probably will, fall apart with enough investigation. But the investigation matters because the field of AI governance is so young.

Introduction

There are two types of AI governance architectures I’m going to look at here—centralised and fragmented. In a centralised structure, governance of a particular area is undertaken by a single body, whereas in a fragmented structure, distinct institutions, each with its own scope and rules, interact to govern a particular area. The BWC can help us understand what we stand to gain and lose with a central structure, and what advantages a fragmented structure may hold.

I’ve picked the BWC over other institutions because I think this would particularly interest the EA community since biosecurity is seen as one of the more pressing and urgent problems we should work on. But I should note, there are also lessons to be drawn from elsewhere (e.g. other multilateral treaties such as the Chemical Weapons Convention) and I think it would be amazing to see more posts like this focussing on governance in other fields — especially since the field of AI governance is so nascent and there is a lot to figure out and structure.

An Overview of the BWC

The BWC was the first international agreement to ban the development, production, stockpiling and acquisition of an entire class of weapons of mass destruction. The BWC took advantage of a period of alignment between states during the Cold War to get countries to ratify the convention. However, the circumstances of the Cold War also meant that it was “politically unacceptable” to incorporate into the BWC a verification system to ensure compliance with the convention. And so, from the very get-go, the BWC suffered from a major defect.

After the Cold War, there was greater agreement among the BWC member states about the need for an effective BWC verification system. They agreed to create a so-called “Ad Hoc Group” in 1994 to evaluate potential BWC verification measures and draft an additional BWC protocol to implement them. The draft protocol negotiated by the Ad Hoc group would have created an international organization (an “Organization for the Prohibition of Biological Weapons”) to conduct “routine on-site visits to declared facilities” and “challenge inspections of suspect facilities and activities”.

This would have been a major improvement to the BWC. But in the final stages of negotiation, things fell apart. The United States decided not to sign on, owing to its doubts about whether on-site verification could actually work. Verifying the absence of illicit biological weapons is technically challenging: it is hard to tell, prima facie, whether a given piece of biological material or research is good for the world or bad for it. This is hardly ever the case with nuclear weapons, for example.

The BWC currently faces issues with the relevance of its review conferences. There has been a lack of discussion of more pressing issues like gain-of-function research and gene drives. And while proposals to improve the BWC are discussed at Review Conferences every five years (and at annual Meetings of Experts and Meetings of States Parties), persistent disagreements about the fundamentals of the BWC have resulted in little progress being made.

BWC and AI Governance Architecture

It’s important to point out that the BWC has also had significant successes. The BWC has strengthened the international norm against developing biological weapons. (See more here, under “Ramifications for the future”.) As a result, any offensive bioweapons research needs to be done in secrecy, which is detrimental if you’re trying to get work done in the life sciences. The BWC therefore makes it more difficult and less appealing for a country to work on bioweapons. The negative consequences of being found to have broken international law (e.g. reputational harm, potential trade embargoes, or even military action) provide an additional disincentive.

So the mere presence of the BWC is important in deterring biosecurity threats. This could be a very useful insight for AI governance work: a central AI governance architecture could create or strengthen beneficial norms and foster proactive thinking about the harms of AI.

But a central structure like the BWC risks creating a ‘lock-in’ effect. Trying to change the way the BWC works now would be incredibly difficult. And since any change needs to be consensus-based, profound change becomes nearly impossible. So it is unlikely that the institution’s deficiencies will be remedied anytime soon.

When setting up a central AI governance structure, I think it’s crucial that we think about ‘lock-in’ effects and how they may pan out over the years to come. If we want an equivalent of the BWC for AI governance, it should not start off on the wrong foot. (To clarify: the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI, i.e. a complete ban on any “AI weapons”, whatever that means.)

A key argument for having a patchwork of organisations (a fragmented structure) is specificity. A central organisation that tries to encompass nearly 200 countries will, out of necessity, have broad policies and rules, because countries have their own dynamics and workings that aren’t always compatible with each other. A fragmented structure, by contrast, could allow governance to be tailored to particular regions. To me, this architecture would make it a lot easier to get the international landscape of AI governance right. (See here for some arguments against a fragmented structure.)

We need not treat this as a binary decision, either. Instead of choosing between these two structures, we could create a network of both centralised and fragmented regimes that gets us the best of both worlds. (We can see similar ideas in “A Web of Prevention”.) But I haven’t thought about this enough to draw any firm conclusions.

Acknowledgements

This post owes a lot to helpful discussions with, and feedback from, Darius Meissner, Suzanne Van Arsdale, Simon Grimm, Aaron Gertler, Jonas Schuett, Tessa Alexanian and Nuño Sempere. Their help doesn’t imply their agreement with what I’ve written. All mistakes remain my own.