Efficacy of AI Activism: Have We Ever Said No?

Disclaimer:

This piece came out of a Summer Research Project with the Existential Risk Alliance. This section was intended to be part of an introduction for an academic article, which mutated into a stand-alone piece. My methodology was shallow dives (2-4 hours of research) into each case study. I am not certain about my reading of these case studies, and there may be some important historical factors which I’ve overlooked. If you have expertise about any one of these case studies, please tell me what I’ve missed! For disclosure: I have taken part in several AI protests, and have tried to limit the effects of personal bias on this piece.

I am particularly grateful to Luke Kemp for mentoring me during the ERA program, and to Joel Christoph for his help as research manager. I’d also like to thank Gideon Futerman, Tyler Johnston, Matthijs Maas, and Alistair Stewart for their useful comments.

1. Executive Summary:

Main Research Questions:

  • How should we look for historical analogies for restraining AI development?

  • Are there any successful historical precedents for AI protests?

  • What general lessons can we learn from these case studies?

Research Significance:

  • Unclear whether there are any relevant historical analogues for restraining AI development

  • Efficacy is a crucial consideration for AI activism, yet little research has been done

Findings:

Based on a new framework for identifying analogues for AI restraint, and shallow historical dives (2-4 hours) into 6 case studies, this project finds that:

  • General-Purpose Technologies are a flawed reference class for thinking about restraining AI development.

  • Protests and public advocacy have been influential in restraining several technologies with comparable ‘drivers’ and ‘motivations for restraint’.

In particular, protests have had counterfactual influence in the following cases:

  • Cancellation of prominent SAI geo-engineering experiments (30-60% influence* for SCoPEx, 10-30%* for cancellation of SPICE)

  • De-nuclearization in Kazakhstan in early 1990s (5-15%*) and in Sweden in late 1960s (1-15%*)

  • Reagan’s move towards a ‘nuclear freeze’ in the 1980s, and subsequent international treaties (20-40%* for former, 5-15%* for latter)

  • Changing the UK’s climate policies in 2019 (40-70%* for specific emissions reductions)

  • Germany’s phase-out of nuclear power from 2011 to 2023 (10-30%*)

  • Domestic bans on CFCs in the late 1970s (5-25%*), and stricter international governance from 1990 (1-10%*)

  • Europe’s de-facto moratorium on GMOs in late 1990s (30-50%*)

* Each starred range is an estimate of counterfactual influence: the probability of the event occurring given protests, minus the probability of the event occurring without protests
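In symbols, this is just a restatement of the definition above (no new calculation):

$$\text{influence} = P(\text{event} \mid \text{protests}) - P(\text{event} \mid \text{no protests})$$

For example, the 30-60% range for SCoPEx means I estimate that the protests raised the probability of cancellation by between 0.3 and 0.6.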

More General Lessons:

  • Geopolitical Incentives are not Overriding: activists influenced domestic and international policy on strategically important technologies like nuclear weapons and nuclear power.

  • Protests can Shape International Governance: the Nuclear Freeze campaign helped enable nuclear treaties in the 1980s, and protests against CFCs helped enable the stricter 1990 revision of the Montreal Protocol

  • Inside and Outside-Game Strategies are Complementary: different ‘epistemic communities’ of experts advising governments pushed for stricter regulation of CFCs in the 1980s and for denuclearization in Kazakhstan in the 1990s; both groups were helped by public mobilisations.

  • Warning Shots are Important: the use of nuclear weapons in WW2, the Fukushima power station meltdown, and the discovery of the ozone hole were all important for the success of the respective protests.

  • Activists can Overwhelm Corporate Interests: protests succeeded in spite of powerful commercial drivers for fossil fuels, nuclear power, and GMOs.

  • Technologies which pose perceived Catastrophic Risks can be messaged saliently: there was significant media coverage of proposed SAI experiments, which focused on ‘injustice’, not risk.

2. Introduction

Mustafa Suleyman, in his recent book, ‘The Coming Wave’, asks whether there is any historical precedent for containing the proliferation of AI: “Have We Ever Said No?” (p38). His answer is clear. For ‘General Purpose Technologies’ like AI, “proliferation is the default” (p30). Given the history of technology, and the particular dynamics at play for AI – increasing nationalism, the prevalence of open-sourcing, the high economic rewards available – we should expect development to continue, like “one big slime mold slowly rolling toward an inevitable future” (p142). Eventually, these technologies will proliferate across society: hence the ‘coming wave’ of Suleyman’s title.

Trying to hold back the ‘coming wave’ would be as ineffectual as Cnut, the king who tried to order back the tide: “The option of not building, saying no, perhaps even just slowing down or taking a different path isn’t there” (p143). Suleyman’s policy proposals, like those of other high-profile publications such as The Precipice [1], don’t include slowing down AI development. Instead, they feature an Apollo Program for AI safety and mandates for AI companies to fund more safety research.

Given the inevitability of the ‘coming wave’, protest groups who advocate for a moratorium on powerful AI models are bound to fail. Suleyman doesn’t mention emerging groups like the ‘Campaign for AI Safety’ or ‘PauseAI’. While coalitions between developers and civil society groups can create “new norms” around AI – just as the abolitionists and suffragettes did – these new norms don’t include slowing down development. In short, a historical perspective about technological inevitability suggests that slowing down AI development, including by protests, is unlikely to succeed.

In this piece, I try to challenge this historical narrative. There are several cases in which we have said no to powerful corporate and geopolitical incentives. And protests have been influential in several of them.

I think that this research is significant because, while efficacy is a crucial consideration for AI protests, little research has been done. There have been several projects looking at the possibility of slowing down AI, including here and here. However, my other post, an in-depth study of GM protests, is the only other extended piece I know of which looks specifically at the efficacy of AI protests.

EAs seem to think that slowing AI development is not tractable.[2] They often focus on the ‘inside view’, including the strong economic and geopolitical drivers behind AI development. However, as noted by others, it is important to complement our inside view of AI development with an outside-view understanding of technological history.

A brief caveat: there are many areas which I do not address, including the ethics of protest; the desirability of slowing or pausing AI development, either unilaterally or multilaterally; and other downside risks from protests. Instead, I am simply looking at the efficacy of protests in achieving technological restraint.

This piece will proceed as follows. In the first section, I set out a framework for identifying analogies for restraining AI development. I then identify 6 relevant and useful case studies involving ‘protests’ (broadly defined to include public advocacy, as well as public demonstrations). In the next section, I go through each case study and analyze what role protest had. Finally, I conclude and draw general meta-lessons for protests, including those against AI.

3. Have We Ever Said No?

To support his claim that “proliferation is the default”, Mustafa Suleyman mostly looks at our attempts to contain other General-Purpose Technologies (GPTs): technologies which spawn other innovations and become widespread across the economy. He acknowledges that there are some short-term successes: for example, the Ottoman Empire did not possess a printing press until 1727; the Japanese shogunate shut out the world for almost 300 years; and imperial China refused to import manufactured British technology. Yet, in the long run, “none of it worked” (p40): all of these societies were eventually forced to adopt new technologies. Similarly, our modern attempts at containment are rare and flawed because international agreements are not effectively enforced: consider bans on chemical weapons, which are routinely violated. The one potential exception, the case of nuclear weapons, has several caveats, including its extremely high barriers to development and our close shaves with nuclear war. For Suleyman, modern and historical failures of containment suggest that pausing AI isn’t an option. The wave is coming, and he predicts we will have ‘Artificial Capable Intelligences’ within the next 5 years.

(A quick aside: this view echoes theories of technological determinism, the view that technology develops and proliferates according to its own inner logic. Nick Bostrom’s “Technological Completion Conjecture” suggests that if scientific progress does not stop, then humanity will eventually develop all possible technological capabilities which are basic and important. Allan Dafoe suggests that military-economic competition exerts a powerful selection pressure on technological development.)

I think that there are two main problems with Suleyman’s reasoning.

Firstly, even if all technologies proliferate in the long run, this doesn’t mean that short-term restraint is impossible. It may be true that, on a big-picture historical scale of analysis, technologies with strong strategic advantages will almost certainly be built, per Dafoe’s position. (This seems plausible to me!) However, as Dafoe himself recognises, this doesn’t deny that at the micro level, social groups have quite a lot of flexibility in steering different technologies. Suleyman himself gives many examples, from imperial Japan to the Ottoman Empire. There are additional cases involving GPTs, including: funding cuts hampering progress in nanotechnology in the early 2000s, the canceled Soviet internet of the 1960s, and prevailing perceptions around masculinity favoring the development of internal combustion engines over electric alternatives in the early 20th century. Even if AI does proliferate in the long run, how long is the long run? A year, a decade, or 300 years? Even a short pause in AI development might be vital for securing humanity’s survival.

More fundamentally, I think that GPTs are not the best reference class for thinking about slowing down AI. AI might indeed be, or soon become, the “new electricity”. (Some scholars think it’s premature to call AI a GPT.) However, even if GPTs are the best reference class for AI as a technology, this does not mean that they are the best reference class for restraining AI development.

This is because, firstly, AI, electricity, and other GPTs all had vastly different ‘drivers’. Frontier AI models are being built by a handful of multi-billion-dollar corporations, within a political climate of increasing US-China competition. In contrast, electricity and other modern GPTs were developed by a few individual ‘tinkerers’: Benjamin Franklin for electricity; Gutenberg for the printing press; Alexander Graham Bell for the telephone; Carl Benz for the internal combustion engine. They weren’t part of large multinationals, or, as far as I’m aware, conducting scientific research because of perceived ‘arms races’. Or consider older GPTs. There was no paleolithic Sam Altman who pioneered the domestication of animals, the invention of fire, or the smelting of iron. Several GPTs first came into use even before the existence of states. Our ability to restrain different technologies depends on the different forces driving their development. Different ‘drivers’ matter.

In addition, there are various motivations for restraining different GPTs. Groups like PauseAI are mainly motivated by the perceived catastrophic risks from future AI systems. Few PauseAI protestors are campaigning because of immediate risks to their jobs, unlike the Luddites. The motivations to restrain AI development are more comparable to protests against nuclear weapons. But there are also important disanalogies: AI has not yet had a ‘warning shot’ comparable to the dropping of nuclear bombs on Hiroshima and Nagasaki. Perhaps differences in motivations for restraint are less significant than differences in ‘drivers’. However, I think they are still important in order to avoid inappropriate analogies like “PauseAI are neo-Luddites”.

Thus, my criteria for analogues of restraining AI development include:

1. Drivers for Development

  • Who is developing the technology? (Private sector, governments, universities, individuals)

  • What academic/​intellectual value did the technology have?

  • What strategic/​geopolitical value did the technology have?

  • What commercial value did the technology have? What lobbying power did firms have?

  • Relatedly, what public value did the technology bring? Was technological restraint personally costly for citizens?

2. Motivations for Restraint

  • Did the technology pose current or future perceived risks?

  • Did the technology pose perceived catastrophic risks?

  • Were there clear ‘warning shots’, events demonstrating the destructive potential of this technology?

  • Did the technology offer significant benefits to the public?

In addition, I am looking for cases where ‘protests’ – defined broadly to include public demonstrations, actions by social movement organizations, or public advocacy – might have been influential for restraining the technology.

This framework has been simplified, with several factors left out.[3] However, I hope it serves as a rough starting point for identifying useful and relevant analogues for technological restraint, including cases in which protests were involved.

4. Protests Which Said No

Based on these criteria, I found 6 potential analogues for restraining AI, all of which had corresponding protest movements.

[Table: the different ‘drivers’ and ‘motivations for restraint’ for each technology, colour-coded by closeness to AI: green (close fit); orange (medium fit); red (low fit).]

I have set aside many other interesting examples of technological restraint, either because they didn’t significantly involve protest, or because the technologies had substantially different drivers.[4] Please add comments if you think I’ve missed a particularly relevant example of technological restraint which had accompanying protests!

In the sub-sections that follow, I give a brief overview of each of these case studies, detailing their analogies and disanalogies with AI restraint, as well as whether protests were influential.

A. Geo-engineering

One analogue for AI protests could be protests against geo-engineering experiments. Public advocacy groups have targeted low-environmental-impact outdoor experiments into Stratospheric Aerosol Injection (SAI), a proposed intervention which would involve spraying reflective particles into the stratosphere to reflect sunlight and cool the Earth. Proponents suggest that it could be immensely valuable for society as a whole, saving trillions of dollars per year by mitigating the costs of global warming.

The two most prominent SAI experiments have both been cancelled.

SCoPEx, or the Stratospheric Controlled Perturbation Experiment, was first planned in 2017, and was led by Harvard University with significant funding from several billionaires and private foundations. In 2021, a small group of researchers planned to conduct a ‘dry run’ – without any particles released – in northern Sweden. However, in February 2021, a group of Swedish environmental organizations and the Indigenous Saami Council published a letter demanding that the project be cancelled, citing “moral hazard” and “risks of catastrophic consequences” from SAI. The letter received significant media attention. Around a month later, the Harvard advisory committee put the test on hold until 2022. No further SCoPEx experiments have occurred or been planned since. Given the timing of the cancellation – only weeks after the open letter – public advocacy (which my broad definition of ‘protests’ encompasses) was likely the key reason.

The second prominent experiment to be cancelled was SPICE (Stratospheric Particle Injection for Climate Engineering). After being announced in mid-September 2011, SPICE received negative media attention, and on 26th September it was postponed to allow for more “deliberation and stakeholder engagement”. Later that same day, the ETC Group published an open letter, ‘Say No to the “Trojan Hose”: No SPICE in Our Skies’, which was eventually signed by over 50 NGOs. SPICE was cancelled in April 2012.

I think that public pressure was less significant here than in the SCoPEx case. The decision to postpone the project in September 2011 was made in part because of anticipated NGO opposition, but also because of a potential conflict of interest: several engineers on SPICE had applied for a patent for a stratospheric balloon. Matthew Watson, the principal scientist at SPICE, cited the conflict-of-interest issue as the reason for the cancellation in 2012. Jack Stilgoe, in his book “Experiment Earth”, de-emphasizes the role of public pressure, saying, “the decision [was] made by the SPICE team themselves”. Public protests may have counterfactually contributed to the cancellation of SPICE, though this is more uncertain than in the case of SCoPEx.

More generally, however, norms against SAI experimentation have created a de-facto moratorium on university-led experiments (there is no formal moratorium at present). One field experiment took place in Russia in 2009, though this occurred prior to the public backlash against SPICE in 2011, and did not provide much relevant information for future experiments or for the implementation of SAI. In 2022, the US start-up Make Sunsets released particles via balloons in Mexico. However, Make Sunsets released less than 10 grams of sulfur per flight (commercial flights emit about 100 grams per minute), and such experiments were banned in Mexico in January 2023. Other experiments occurred in secret in the UK in October 2021 and September 2022, under the name Stratospheric Aerosol Transport and Nucleation (SATAN). However, SATAN was also a small-scale project (with hardware costs less than $1,000), conducted by an independent researcher, and did not attempt to monitor effects on atmospheric chemistry.

Does this case study suggest AI protests might be effective? To a degree. Protests have been influential in canceling major outdoor SAI experiments, particularly SCoPEx. The perception of uncertain, catastrophic risks from SAI experiments is similar to the perception of risks from ‘Giant AI Experiments’.

However, the drivers behind SAI experiments are very different to those behind AI research, which is led by private corporations. Make Sunsets is a commercial venture, selling ‘cooling credits’, but it operates at a much smaller scale: it has received $750,000 in VC funding, in contrast to the billions raised by AI labs. Both high-profile successes of SAI protests involved university-run projects. Universities are likely to be more sensitive to public pressure than large corporations with significant sunk investments in AI projects.

Despite these key differences, several lessons can be drawn. One lesson is that protests can have high leverage over a technology in the early stages of development, particularly when universities are involved (e.g. CRISPR). For AI, however, this suggests that the best time for protests might have been in the 2010s, when universities dominated AI research.

One positive lesson is that future catastrophic risks from technologies can be messaged in a salient way. The risks from SAI experiments might seem difficult to message. However, the proposed SCoPEx experiment received outsized media attention, with a common ‘hero-villain’ narrative: Indigenous groups versus a powerful American university, backed by Bill Gates (e.g. here, here, here). This supports my findings elsewhere that promoting emotive ‘injustice’ frames is vital for advocacy groups protesting future catastrophic risks.

Additionally, the coalition opposed to SCoPEx was broad, involving the Saami Council, the Swedish Society for Nature Conservation, Friends of the Earth, and Greenpeace. The literature suggests that diverse protest groups are more likely to succeed. While the SAI case study is not the closest analogue for AI protests, it holds practical lessons in terms of messaging and coalition-building.

B. Nukes

Another protest movement relevant for AI protests is the movement against nuclear weapons. The two technologies share striking similarities: both pose catastrophic risks, and both emerged from a nascent field of science. One New York Times piece challenges readers to tell apart quotes referring to each technology. It’s quite difficult.

Protests against nuclear weapons have had particular success in Kazakhstan, Sweden, and the US.

I) Kazakhstan

Kazakhstan inherited a large nuclear arsenal after gaining independence from the USSR in 1991. Initially, Kazakhstan agreed to denuclearize. However, by 1992, President Nazarbayev suggested Kazakhstan should be a temporary nuclear state, citing fears of Russian imperialism and the technical difficulties of disarming. Nevertheless, by 1995, Kazakhstan had signed important international treaties alongside Ukraine and Belarus – including START I, the Lisbon Protocol, and the NPT – and had given up its nuclear weapons. Why did this happen? US diplomatic efforts were undoubtedly important. The creation of the Cooperative Threat Reduction (CTR) initiative, a Department of Defense program, was vital for pushing Kazakhstan back towards denuclearisation: it provided financial and technical assistance for the difficult task of removing and destroying nuclear weapons. A network of American and Soviet experts who shared similar views about the importance of non-proliferation convinced two US Senators – Sam Nunn (D-GA) and Richard Lugar (R-IN) – to create the CTR. Groups of experts who share similar worldviews and shape political outcomes by giving information and advice to politicians are called ‘epistemic communities’ in the literature. Thus, US diplomacy and ‘epistemic communities’ were undoubtedly important for de-nuclearisation in Kazakhstan.

However, solely focusing on US diplomatic efforts is too simplistic: domestic opposition was also significant. The Nevada-Semipalatinsk Movement (NSM), led by the poet Olzhas Suleimenov, began protesting nuclear testing in 1989, and forced authorities to cancel 11 of the 18 nuclear tests scheduled that year. Two years later, the Semipalatinsk test site – which had been the largest nuclear test site in the USSR – was closed. As President Nazarbayev himself acknowledged, NSM protests were key to its closure. In short, the ‘epistemic communities’ of non-proliferation experts providing technical support to Kazakhstan’s government were aided by public mobilization in achieving denuclearisation.

II) Sweden

Similarly, protest movements may have been important for Sweden’s moves towards denuclearisation. From the 1940s to the 1960s, Sweden actively pursued nuclear weapons, and possessed the requisite capability by the late 1950s. However, in 1968, Sweden formally abandoned the program.

Why did this happen? Firstly, the small nuclear program in development was seen as offering limited deterrent value; instead, it might simply have pulled Sweden into a war. Secondly, the ruling Social Democrats (SDs) believed that the costs were too high, given other fiscal demands on the welfare state. Public opinion was also significant. Sweden moved from a situation in the 1950s in which most Swedes and leading SDs favored the nuclear program, to widespread opposition in the 1960s. Going ahead with the nuclear weapons program would have led to an implosion in the SD Party. Grass-roots movements in Sweden, including the Action Group against Swedish Nuclear Bombs, strengthened anti-nuclear public and elite sentiments, contributing to eventual denuclearisation. However, I have not looked into this case in detail, so I am fairly uncertain about the counterfactual impact of protests.

III) Nuclear Freeze Campaign

Further, nuclear protest movements influenced foreign policy and international policymaking. The Nuclear Freeze campaign of the 1980s, which called for halting the testing, production, and deployment of nuclear weapons, grew into a mass movement with widespread support: petitions with several million signatures, referendums passed in 9 different states, and a resolution passed in the House. In response, Reagan dramatically changed his rhetoric, from suggesting protestors had been manipulated by the Soviets to saying, “I’m with you”. Public opinion was a clear motivator for Reagan: in 1983, he suggested to his secretary of state that, “If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.” Reagan’s rhetorical and diplomatic shift paved the way for several international agreements, including the 1987 INF Treaty and the 1991 START Treaty, which reduced strategic nuclear weapons.

What lessons can we draw from protests against nuclear weapons? First, there is the trivial point that the use of nuclear weapons in WW2, and the clear demonstration of their catastrophic potential, was a key motivator for all of these cases. ‘Warning shots’ are helpful for mobilizing public opinion.

Increasingly, AI is becoming a geopolitical battleground: for example, the US has sought to boost its domestic AI capabilities through the CHIPS Act, and China explicitly aims to be the world’s primary AI innovation center by 2030. Yet the examples of Sweden and Kazakhstan suggest that activists can influence domestic government policy towards a geopolitically important technology. The Nuclear Freeze Campaign in the US suggests that protests can influence international policymaking.

C. Fossil Fuels

Environmental protests provide another pertinent analogue for protests against AI development. Both cases have comparable ‘motivations for restraint’: climate change and AI are both perceived to threaten civilisational collapse, although AI risk is often considered more speculative and poses less obvious concrete harms today.

Further, both technologies have similar drivers. Like the dominance of ‘Big Tech’ in the AI landscape – Microsoft via OpenAI, Google via DeepMind, and now Amazon via Anthropic – a handful of global fossil fuel giants, commonly referred to as ‘the Majors’, wield significant influence. These corporations – ExxonMobil, BP, Chevron, and others – control substantial portions of the sector’s reserves (12%), production (15%), and emissions (10%). Fossil fuel companies wield formidable lobbying power, employing an array of tactics, including disseminating doubts about climate science (Oreskes & Conway, 2010).

AI lobbying is increasing, and is particularly visible around the EU AI Act. However, the extent of Big Tech lobbying on AI policy is still small in comparison to fossil fuel lobbying. Big Tech may have greater lobbying potential than ‘the Majors’, given their greater market capitalisation: almost $5 trillion for Microsoft/​Google/​Amazon combined, versus around $1 trillion for ‘the Majors’. Yet individual AI labs have significantly fewer resources to marshal: OpenAI’s valuation is estimated at around $30 billion, while Anthropic is valued at approximately $5 billion. While the overall level of AI lobbying is unclear – estimates tie together all ‘digital lobbying’ – I expect it to be less than current spending on climate lobbying, at over £150 million per year.

Further, transitioning away from fossil fuels confronts far more formidable collective action challenges. Fossil fuels constitute a staggering 84% of global energy consumption. Some even liken society’s entrenched reliance on fossil fuels to the role slavery played prior to its abolition. In contrast, AI models currently have narrower applications, primarily in product enhancement and analytics, and are utilized by hundreds of millions, not billions, of users.

Despite confronting a powerful corporate lobby and threatening to raise costs for consumers, protests have influenced climate policies. Extinction Rebellion (XR), for instance, played a crucial role in galvanizing more ambitious climate policies within the UK government. From its launch in July 2018 until May 2019, XR likely contributed to local authorities bringing forward net zero targets from 2050 to 2030, a more ambitious nationally determined contribution from the UK government, and the 2050 net zero pledge being implemented 1-3 years earlier. Additionally, XR may have had an influence at an international level: in the wake of XR protests, publics across the world became more worried about climate change, and many countries (including the EU and 10 others) declared a climate emergency. However, these effects are less well documented.

It might be tempting to despair at Big Tech’s size and willingness to lobby on AI regulation. The success of XR gives reasons for hope.

D. Nuclear power

Protests against nuclear power offer another precedent for AI existential risk protests.

Anti-nuclear protests have achieved partial successes in the US, where stringent regulation effectively blocks new power stations. Several other countries have never introduced nuclear power, and Italy phased it out after a 1987 referendum, despite importing nuclear-generated electricity from its neighbours.

One particularly noteworthy case study is Germany, which fully phased out nuclear power between 2011 and April 2023. Activists may have had a role in this policy reversal. The Green Party formed out of the anti-nuclear movement of the 1970s, and pushed the SPD to oppose nuclear energy too. An SPD-Green coalition government from 1998 to 2005 decided to slowly phase out nuclear power, although this decision was reversed after Merkel came to power. After Fukushima in 2011, massive anti-nuclear protests broke out across the world, including in Germany. Facing electoral threats from the Greens in state elections, Merkel announced a nuclear moratorium and an ethics review. The committee’s recommended phase-out was overwhelmingly approved, reflecting anti-nuclear sentiment.

Nuclear protests achieved a phase-out in Germany despite strong economic and strategic ‘drivers’ which parallel AI. Like the high potential profits from AI, there were strong financial incentives behind nuclear power. Legislation which extended plant lifetimes in 2010 offered around €73 billion in extra profits for energy companies. After Germany’s nuclear reversal, various energy companies sued the government for compensation, with Vattenfall seeking €4.7 billion.

Further, there were clear strategic ‘drivers’ for continued nuclear power. Germany phased out nuclear power in 2023 despite rising energy costs following Russia’s invasion of Ukraine. Other countries boosted investments (France), or delayed their phase-outs of nuclear power (e.g. Belgium), following the Russian invasion (ANS, 2023).

The case of nuclear power shows that geopolitical calculus is not an overwhelming determinant of government policy, at least in the short run. Public pressure, including from protests, was influential. Further, Germany’s anti-nuclear mobilization was catalysed by the Fukushima meltdown in 2011, suggesting that, as in other cases, ‘warning shots’ from AI might also be significant.

E. CFCs

Another interesting analogue for AI protests is protests against CFCs. CFCs were first unilaterally banned in the US in the late 1970s, before bans spread to other countries. The discovery of the ‘ozone hole’ in 1985 spurred countries to agree to substantial CFC reductions via the Montreal Protocol of 1987.

The ‘drivers’ of both CFCs and AI share some similarities. CFC production was concentrated in a few powerful chemical firms like DuPont, analogous to the dominance of Big Tech. And while CFCs offered some benefits to the public in the 1970s, particularly in refrigeration and foam production, they were not as foundational to the world economy as fossil fuels – a closer comparison to the stage AI is at currently.

What role did activists have in regulating CFCs? Their role in the early CFC bans is fairly clear. In 1974, a coalition of environmental groups helped to publicize the initial ‘Molina-Rowland’ hypothesis that CFCs could destroy ozone molecules. The popularization of the risks, and media campaigns by groups like the Natural Resources Defense Council (NRDC), helped spur the first bans on CFCs in the US in 1977.

Protests may also have influenced the international regulation of CFCs. Friends of the Earth ran an extensive corporate campaign encouraging consumers to boycott products with CFCs. It was particularly successful in the UK, where the British aerosol industry reversed its opposition to CFC regulation, which helped enable the strengthening of the Montreal Protocol in 1990. However, I am more uncertain about this: firms may have reversed their opposition to regulation because of changing profit incentives. For example, by the mid-1980s, DuPont realized global regulation could create a more profitable market for substitute chemicals the company was positioned to produce, so it actually supported the signing of the Montreal Protocol.

In addition, there were other key factors aside from activists. First, there was the role of ‘epistemic communities’ of scientists. As described in the case of de-nuclearization in Kazakhstan, ‘epistemic communities’ are groups of experts who share similar worldviews and shape political outcomes by giving information and advice to politicians. In the CFC case, scientists advising US departments, especially the EPA and State Department, influenced the US government to advocate for a more stringent treaty, enabling the success of the Montreal Protocol.

Further, the US government’s support for the Montreal Protocol was unilaterally incentivised: even if the US had implemented the Montreal Protocol on its own, it would have prevented millions of skin cancer deaths among US citizens, equivalent to roughly $3.5 trillion in benefits, versus $21 billion in costs.

What lessons can be learnt from the case of CFCs?

There are several reasons for optimism. Just as with the Nuclear Freeze Campaign, activists helped shape international policymaking surrounding CFCs. Furthermore, the activists who carried out corporate campaigns aided the expert scientists who were working within the US government and pushing for stricter CFC regulation.

However, there are also reasons to be pessimistic about AI protests.

Different countries were unilaterally incentivised to sign the Montreal Protocol. This might be the case for AI, in theory: different countries pursuing continued AI development might be competing in a ‘suicide race’. However, this is not how countries are behaving. Instead, they are pushing ahead with AI capabilities.

Perhaps in the future, a clear AI ‘warning shot’ will change policymakers’ perceptions of the national interest. The discovery of the ‘ozone hole’ was crucial for mobilizing public opinion and policymakers’ attention; a similar ‘warning shot’ from AI could spur stringent international regulation.

Further, stringent international regulation was aided by a diminishing profit incentive behind CFCs. By the late 1980s, alternative products might have been more profitable for DuPont. In contrast, there are no obvious substitutes for ‘frontier AI models’ like GPT-3, so firms are much less likely to support a ‘phasing out’.

F. GMOs

(I have covered the case of GMOs in detail here. Please skip this section if you’re already familiar!)

In several ways, GMOs are a useful analogy for AI. The technology was seen as revolutionary and highly profitable, and powerful companies were keen to deploy it. Furthermore, GMOs did not have any clear ‘warning shots’ – high-profile events demonstrating the potential for large-scale harm – unlike the discovery of the ozone hole for CFCs or the meltdown of Fukushima for nuclear power. Yet, within the space of only a few years, between 1996 and 1999, public opinion shifted rapidly against GMOs, due to increasingly hostile media attention surrounding key ‘trigger events’ and heightened public mobilization. There was a de-facto moratorium on new approvals of GMOs in Europe by 1999. Today, only 1 GM crop is planted in Europe, and the region accounts for 0.05% of total GMOs grown worldwide. I am confident that protests had a key counterfactual impact.

There are several lessons from the GMO case. Firstly, ‘trigger events’ which lead to mass mobilization need not be catastrophes. The cloning of Dolly the Sheep and the outbreak of ‘Mad Cow Disease’ were unrelated to GMOs, and did not pose any immediate catastrophic risks to the public.

Additionally, powerful corporate lobbies can be overwhelmed by activists. In the US, biotech companies spent over $140 million on lobbying between 1998 and 2003. Monsanto spent $5 million on a single ad campaign in Europe. These levels of lobbying likely exceed current levels of AI lobbying.

The messaging of GMO protests did not focus on risk. Instead, it focused on injustice. Opponents of GMOs thought principally in terms of moral acceptability, not risk. Rhetoric targeted particular companies (e.g. “Monsatan”), not systemic pressures.

Additionally, the anti-GMO movement brought together a broad range of groups, including environmental NGOs, consumer advocacy groups, farmers, and religious organizations. More generally, diverse protests are more likely to succeed.

5. Conclusion

So, have we ever said no? Given the history of technology, should we see AI development like “one big slime mold slowly rolling toward an inevitable future”?

In this piece, I have tried to set out a framework for thinking about these questions. When asking the question, “is slowing AI development possible?”, we should not treat cases of restraining General-Purpose Technologies, like electricity or the internet, as the key reference class. Instead, we should look for technologies with similar geopolitical and commercial drivers, and for groups who had comparable motivations for restraining them.

And looking at this reference class, there are several relevant cases in which we have said no: to geo-engineering experiments, nuclear weapons programs, nuclear power, and the use of fossil fuels, CFCs, and GMOs. (I am uncertain exactly how influential protests were: these case studies constitute shallow dives (2-4 hours of research), and showing causation is difficult in social science.)

There are several more general ‘meta-lessons’ we can draw from these protests.

Firstly, the role of geopolitical incentives in determining policy outcomes is not overriding. The case studies of nuclear weapons (denuclearisation in Kazakhstan and Sweden; the ‘Nuclear Freeze’ in the US in the 1980s) and the phase-out of nuclear power in Germany demonstrate that protests can influence policy outcomes even in the face of powerful geopolitical drivers. Similarly, powerful corporate interests do not always have their way. Powerful lobbies – comparable to the corporate interests favoring continued AI development – were overwhelmed by activists in several cases: fossil fuels, nuclear power, and GMOs.

Protests have also demonstrated their capacity to shape international governance, as seen in the Nuclear Freeze campaign and the efforts against CFCs. If you believe that international governance is necessary for safe AI development, protests could help enable it.

Additionally, in the cases of CFCs and de-nuclearisation in Kazakhstan, experts pushing for certain policies within governments benefitted from public mobilisations. This suggests that the community of AI researchers who share concerns about AI existential risk, and who hold privileged positions in the UK’s AI taskforce, would benefit from public mobilization in achieving more stringent regulation of ‘frontier AI’. More generally, these two case studies lend some support to the broader idea that ‘inside-game’ strategies for creating social change (working within the existing system: e.g., running for office, working for the government) are complemented by ‘outside-game’ strategies (community organizing, picketing, civil resistance, etc.).

Lastly, the importance of ‘trigger events’ – highly publicised events which lead to mass mobilisation – should not be underestimated. The dropping of nuclear weapons in WW2, the Fukushima nuclear disaster, and the discovery of the ozone hole were pivotal in galvanizing public awareness and support for these protest movements. However, the case of GMOs suggests that mass mobilisation does not require a catastrophic ‘warning shot’.

  1. ^

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225, p11: “The policy recommendations for mitigating [risks from AI] in The Precipice support R&D into aligned artificial general intelligence (AGI), instead of delaying, stopping or democratically controlling AI research and deployment.”

  2. ^

    This is based on personal conversations. The claim that slowing AI development is intractable was also made here: “Yes, longtermist researchers typically do not advocate for the wholesale pause or reversal of technological progress — because, short of a disaster, that seems deeply implausible. As mentioned above, longtermism pays attention to the “tractability” of various problems and strategies. Given the choice, many longtermists would likely slow technological development if they could.”

  3. ^

    I have left three factors out in particular. First, levels of epistemic support or ‘validity’ of different perceptions of risk are excluded; I am focused on the efficacy of protest groups rather than the desirability of their respective goals.

    Additionally, I have not selected protest groups with similar demographics/​strategies as current AI protests – since these factors are endogenous to protest groups (tactics and allyship are choices for protest groups), and because it seems too soon to make broad generalizations about AI protests.

    Finally, I have not made any restrictions on the type of regulation which achieved restraint: i.e. restrictions on development or deployment, whether unilateral or multilateral. This is because there are many policy options for restraining AI: regulating development through a unilateral US-led pause, an international treaty, or restrictions on compute; or regulating the deployment of AI systems, for example, using tort regulation to make developers legally liable for AI accidents. By doing so, I hope to shed light on the various outcomes protest groups can help bring about, and the various ‘drivers’ they can confront.

  4. ^

    Boeing 2707 was a proposed supersonic jet which received over $1 billion in government funding. Funding was cut by the US Senate in 1971. Protests likely had some role: over 30 environmental, labor, and consumer groups lobbied Congress to end the program (NYT, 1972). However, other factors may have been more important, including insurmountable technical challenges and rapidly escalating costs (Dowling, 2016). Operation Popeye was a weather-control program used by the US military between 1967 and 1972 to extend the monsoon season and disrupt North Vietnamese supply roads. It was halted two days after being exposed by a journalist (Harper, 2008); wide-scale public protests have not been documented, which limits its usefulness as an analogue. Another analogue is Automated Decision Systems (ADS) – algorithms for automating government tasks such as fraud detection, facial recognition, and risk modelling for children at risk of neglect. Redden et al. (2022) present 61 different ADSs which were canceled by Western governments between 2011 and 2020. Whilst protests were key in canceling many of these projects, the ‘drivers’ of ADS – often improved government efficiency – seem very different to those of AI. Project Plowshare, a US government program to capture the civilian benefits of nuclear devices, was primarily abandoned due to a diminishing geopolitical rationale, lack of commercial viability, and rising costs, rather than public mobilization (Hacker, 1995).