Hi Aryan,
Cool post, very interesting! I’m fascinated by this topic—the PhD thesis I’m writing is on nuclear, bio and cyber weapons arms control regimes and what lessons can be drawn for AI. So obviously I’m very into this, and want to see more work done on this. Really excellent to see you exploring the parallels. A few thoughts:
Your point on ‘lock-in’ seems crucial. It currently seems to me that there are ‘critical junctures’ (Capoccia) in which regimes get set, after which it’s very hard to change them; hence, e.g., the failure to control nukes or cyber in the early years. ABM is a complex example: very hard to get back on the table, but Rumsfeld and others managed it after 30 years of battling.
My impression is that the BWC (and CWC), with their meetings, conferences, etc., are often seen as arms control regimes that are pretty good at keeping up with technical developments; maybe a point in favour of centralisation.
Just on the details of the BWC, a few things seem worth mentioning. (Nitpicky: when the UK proposed a BWC, it said verification wasn’t technically possible at the time [1].) First, the Nixon Administration thought BW were militarily useless and had already unilaterally disarmed, so verification was less of a priority [2]. Second, one of the reasons to want a Verification Protocol in the 90s was the revelation that the Soviets had cheated through the 70s and 80s, building the biggest BW program ever. Third, the Bush Administration rejected the Verification Protocol in 2001 (pre-9/11!), its first year, at the same time as it was ripping up START III, Kyoto, and the ABM Treaty. All this suggests that state interest, and elites’ changing conceptions of state interest, can create space for change.
[1] http://www.cbw-events.org.uk/EX1968.PDF
[2] https://www.belfercenter.org/publication/farewell-germs-us-renunciation-biological-and-toxin-warfare-1969-70
https://wmdcenter.ndu.edu/Publications/Publication-View/Article/627136/president-nixons-decision-to-renounce-the-us-offensive-biological-weapons-progr/
This isn’t central to the post, but I’m interested in this parenthetical:

(To clarify: the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI, i.e. a complete ban of any “AI weapons”, whatever this means.)
At first glance, a ban on AI weapons research or AI research with military uses seems pretty plausible to me. For example, one could ban research on lethal autonomous weapons systems and research devoted to creating an AGI without banning, e.g., the use of machine learning for image classification or text generation.
Can you say more about why this seems implausible from your point of view?
Hey Kerry!
Good question. I included this disclaimer because it seems very hard to define exactly what we mean by an “AI weapon”, which makes a complete ban, like the one the BWC imposes, implausible.
I think I still don’t quite get why this seems implausible. (For what it’s worth, I think your view is pretty mainstream, so I’m asking about it more to understand how people are thinking about AI and not as any kind of criticism of the post or the parenthetical.)
It seems clear to me that an AI weapon could exist. AI systems designed to autonomously identify and destroy targets seem like a particularly clear example. A ban that distinguishes that technology from nearby civilian technology doesn’t seem much more difficult than distinguishing biological weapons from civilian uses of biological technology.
Of course we’re mostly interested in AGI, not narrower AI technology. I agree that society doesn’t think of AGI development as a weapons technology, so banning “AGI weapons” seems strange to contemplate, but it’s not too difficult to imagine that changing! After all, many of the proponents of the technology are clear that they think it will be the most powerful technology ever invented, granting its creators unprecedented strength. Various components of the US military and intelligence services certainly seem to think AGI development has military implications, so the shift to seeing it as a dual-use weapons technology doesn’t seem too big a leap to imagine.