3rd-year PPE student at UCL and former President of the EA Society there. I was an ERA Fellow in 2023 & researched historical precedents for AI advocacy.
Charlie Harrison
Thank you Nathan!!
‘Surveillance Capitalism’ & AI Governance: Slippery Business Models, Securitisation, and Self-Regulation
I’m so sorry it’s taken me so long to respond, Mikhail!
<I would like to note that none of that had been met with corporations willing to spend potentially dozens of billions of dollars on lobbying>
I don’t think this is true for GMOs, fossil fuels, or nuclear power. It’s important to distinguish total lobbying capacity/potential from the actual amount spent on lobbying. Total annual technology lobbying is on the order of hundreds of millions: the amount allocated to AI lobbying is, by definition, less. This is similar to (or, I suspect, lower than) total annual biotechnology lobbying on GMOs. Annual climate lobbying is over £150 million per year, as I mentioned in my piece. The stakes are also high for nuclear power. As mentioned in my piece, legislation in Germany to extend plant lifetimes in 2010 offered around €73 billion in extra profits for energy companies, and some firms sued for billions of euros after Germany’s reversal. (Though, I couldn’t find an exact figure for nuclear lobbying.)
< none of these clearly stand out to policymakers as something uniquely important from the competitiveness perspective >
I also feel this is too strong. Reagan’s national security advisers were reluctant about his arms control efforts in the 1980s because of national security concerns. Some politicians in Sweden believed nuclear weapons were uniquely important for national security. If your point is that AI is more strategically important than these other examples, then I would agree with you; as phrased, though, the claim is overly strong.
< AI is more like railroads >
I don’t know if this is true … I wonder how strategically important railroads were? I also wonder how profitable they were? There seems to have been much more state involvement in railroads than in AI… Though, this could be an interesting case study project!
< AI is more like CFCs in the eyes of policymakers, but for that, you need a clear scientific consensus on the existential threat from AI >
I agree you need scientific input, but CFCs also saw widespread public mobilisation (as described in the piece).
< incentivising them to address the public’s concerns won’t lead to the change we need >
This seems quite confusing. Surely, this depends on what the public’s concerns are?
< the loudest voices are likely to make claims that the policymakers will know to be incorrect >
This also seems confusing to me. If you believe that policymakers regularly sort the “loudest voices” from real scientists, why do you think regulations with “substantial net-negative impact” were passed with respect to GMOs/nuclear?
< Also, I’m not sure there’s an actual moratorium on GM crops in Europe >
Yes, by “moratorium” I’m referring to the de-facto moratorium on new approvals of GMOs from 1999 to 2002. In general, though, Europe grows far fewer GMOs than other countries: 0.1 million hectares annually versus >70 million hectares in the US. I wasn’t aware that Europe imports GMOs from abroad.
Sorry that this is still confusing. 5–15 is the confidence interval/range for the counterfactual impact of protests, i.e. p(event occurs with protests) − p(event occurs without protests) = somewhere between 5 and 15, rather than p(event occurs with protests) = 5 and p(event occurs without protests) = 15, which wouldn’t make sense.
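To restate the same point in symbols (a minimal sketch, assuming the 5–15 figures are percentage points, which the comment above implies but does not state outright):

$$\Delta p \;=\; \Pr(\text{event} \mid \text{protests}) \;-\; \Pr(\text{event} \mid \text{no protests}), \qquad \Delta p \in [5\%,\, 15\%]$$

The interval bounds the difference $\Delta p$ itself, not the two probabilities separately.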
I agree restraining AGI requires “saying no” prior to deployment. In this sense, it is more similar to geo-engineering than fossil fuels: there might be no ‘fire alarm’/‘warning shot’ for either.
Though, the net present value of AGI (as perceived by AI labs) still seems very high, as evidenced by high investment in AGI firms. So, in this sense, it has similar commercial incentives for continued development as the continued deployment of GMOs/fossil fuels/nuclear power. I think the GMO example might be the best, as it had both strong profit incentives and no ‘warning shots’.
Thank you!
I think your point about hindsight bias is a good one. I think it is true of technological restraint in general: “Often, in cases where a state decided against pursuing a strategically pivotal technology for reasons of risk, or cost, or (moral or risk) concerns, this can be mis-interpreted as a case where the technology probably was never viable.”
I haven’t discounted protests which were small – GMO campaigns and SAI advocacy were both small scale. The fact that unsuccessful protests are more prolonged might make them more psychologically available: e.g. Just Stop Oil campaigns. I’m slightly unsure what your point is here?
I also agree that other examples of restraint are also relevant – particularly if public pressure was involved (like for Operation Popeye, and Boeing 2707).
Hi Vaipan, I appreciate that!
I agree that political climate is definitely important. The presence of elite allies (Swedish Democrats, President Nazarbayev), and their responsiveness to changes in public opinion, was likely important. I am confident the same is true for GM protests in 1990s Europe: decisions were made by national governments (which were more responsive to public perceptions than the FDA in the USA), and there were sympathetic Green Parties in coalition governments in France/Germany.
I agree that understanding these political dynamics for AI is vitally important – and I try to do so in the GM piece. One key reason to be pessimistic about AI protests is that there aren’t many elite political allies for a pause. I think the most plausible TOC for AI protests, for now, is raising public awareness/shifting the Overton Window/etc., rather than actually achieving a pause.
That is a good point, thanks for that Jobst. I’ve made some edits in light of what you’ve said.
Thank you @Ulrik Horn! I think warning shots may very well be important.
From my other piece: building up organisations in anticipation of future ‘trigger events’ is vital for protests, so that they can mobilise and scale in response – the organisational factor which experts thought was most important for protests. I think the same is true for GMOs: pre-existing social movements were able to capitalise on the trigger events of 1997/1998, in part because of prior mobilisation starting in the 1980s.
I also think that an engineered pathogen event is a plausible warning shot for AI, though we should also broaden our scope of what could lead to public mobilisation. Lots of ‘trigger events’ for protest groups (e.g. Rosa Parks, the Arab Spring) did not stem from warning shots, but from cases of injustice. Similarly, there weren’t any ‘warning shots’ demonstrating harm from GMOs. (I say more about this in the other piece!)
Appreciate that @Remmelt Ellen! In theory, I think these messages could work together. Though, given animosity between these communities, I think alliances are more challenging. Also I’m curious—what sort of policies would be mutually beneficial for people concerned about facial recognition and x-risk?
Hi Stephen, thank you for this piece.
I wonder about how relevant this case study is: housing doesn’t have significant geopolitical drivers, and construction companies are much less powerful than AI firms. Pushing the Overton Window towards onerous housing restrictions strikes me as significantly more tractable than shifting the Overton Window towards a global moratorium on AI development, as PauseAI people want. A less tractable issue might require more radical messaging.
If we look at cases which I think are closer analogues for AI protests (e.g. climate change etc.), protests often used maximalist rhetoric (e.g. Extinction Rebellion calling for a net-zero target of 2025 in the UK) which brought more moderate policies (e.g. 2050 net-zero target) into the mainstream.
In short, I don’t think we should generalise from one issue (NIMBYs), which is different in many ways from AI, to what might look like good politics for AI safety people.
Hi Denis! Thank you for this. I agree that more EA influence on policy decisions would be a good outcome. As I tried to set out in this piece, ‘insiders’ currently advising governments on AI policy would benefit from greater salience of AI as an issue, which protests could help bring about.
In terms of how we can get more EA-aligned protestors … that’s a really interesting question, and I’m looking forward to seeing what you produce!
My initial thoughts: rational arguments about AI activism probably aren’t necessary or sufficient for broader EA engagement. EAs aren’t typically very ideological/political, and I think psychological factors (“even though I want to protest, is this what serious EAs do?”) are strong motivators. I doubt many people seriously consider the efficacy/desirability of protests, before going on a protest. (I didn’t, really). Once protests become more mainstream, I suspect more people will join. A rough-and-ready survey of EAs & their reasons not to protest would be interesting. @Gideon Futerman mentioned this in passing.
Another constraint on more EAs at protests is a lack of funding. This is endemic to protest groups more generally, and I think is also true for groups like PauseAI. I don’t think there are any full-time organisers in the UK, for example.
Hi Geoffrey, I appreciate that: thank you!
I agree with you that taking lessons from groups with goals you might object to seems counter-intuitive. (I might also add that protests against nuclear weapons programs, fossil fuels, and CFCs seem to have had creditable aims.) However, I also agree that we can learn effective strategies from groups with wrong-headed goals. Restricting the data to just groups we agree with would lose lessons about efficacy/messaging/allyship etc.
(There’s also a broader question about whether this mixed reference class should make us worry about bad epistemics in the AI activism community. @Oscar Delaney made a related comment in my other piece. However, I am comparing groups on what circumstances they were in (facing similar geopolitical/corporate incentives), not on epistemics.)
I also agree that widening the scope beyond anti-technology protests would be interesting!
Efficacy of AI Activism: Have We Ever Said No?
Hi Chris, thank you for this.
1) Nice! Agreed
2) It really depends on what form of alliance this takes. It could be implicit: fundraising for artists’ lawsuits for example, without any major change to public messaging. I don’t think this would dilute the focus on existential risk. When Baptists allied with Bootleggers in the prohibition era, this did not dilute their focus away from Christianity! I also think that there are indeed common interests here: restrictions on GAI models. (https://forum.effectivealtruism.org/posts/q8jxedwSKBdWA3nH7/we-are-not-alone-many-communities-want-to-stop-big-tech-from).
That being said, if PauseAI did try to become a broad ‘AI protest group’, including via its messaging, this would dilute the focus on x-risk. Though, a mixture of near-term and long-term messaging may be more effective in reaching a broader audience. As mentioned in another comment, identifying concrete examples of harms to specific people/groups is an important part of ‘injustice frames’. (I am more unsure about this, though.)
3) I am also hesitant about more disruptive protest tactics, in particular because of allies within firms. But, I don’t think that disruptive protests necessarily have to turn the public against us… no more than blocking ships made GMO protestors unpopular. The efficacy of disruptive tactics is quite issue-dependent… I think it would be useful if someone did a thorough lit review of disruptive protests.
Thanks for these questions Oscar! To be clear, I was suggesting that effective messaging would emphasise the injustice of continued AI development in an emotionally compelling way: e.g. the lack of democratic input into corporate attempts to build AGI. I wasn’t talking so much about communicating near-term injustices. Though, I take your point that allying with other groups suffering from near-term harms would imply a combined near-term and long-term message.
On your first question: would thinking about near-term & LT harms lead to worse thinking? Do you mean this would make us care about AI x-risk less?
And on your second point, on whether it would be perceived as manipulative: I don’t think so. If AI protests can effectively communicate a ‘We are fighting a shared battle’ message, as @Gideon Futerman has written about, this could make AI protests seem less niche/esoteric. Identifying concrete examples of harms to specific people/groups is an important part of ‘injustice frames’, and could make AI risk more salient. In addition, broad ‘coalitions of the willing’ (i.e. Baptists and Bootleggers) are very common in politics. What do you think?
Sounds interesting Oscar, though I wonder what reference class you’d use … all protests? A unique feature of AI protests is that many AI researchers are themselves protesting. If we are comparing groups on epistemics, the Bulletin of the Atomic Scientists (founded by Manhattan Project scientists) might be a closer comparison than GM protestors (who were led by Greenpeace, farmers etc., not people working in biotech). I also agree that considering inside-view arguments about AI risk is important.
Thank you for your comments Kasey! Glad you think it’s an interesting comparison. I agree with you that GMOs were over-regulated in Europe. Perhaps I should have said explicitly that the scientific consensus is that GMOs are safe. I do make a brief caveat in the Intro that I’m not comparing the “credibility of AI safety concerns (which appear more legitimate than GMO concerns)”, though this deserves more detail.
Thank you for writing this piece, Sarah! I think the distinction stated above between A) the counterfactual impact of an action or a person, and B) moral praiseworthiness, is important.
You might say that individual actions, or lives, have large differences in impact, but remain sceptical of the idea of (intrinsic) moral desert/merit – because individuals’ actions are conditioned by prior causes. Your post reminded me a lot of Michael Sandel’s book, The Tyranny of Merit. Sandel takes issue with the attitude of “winners” within contemporary meritocracy who see themselves as deserving of their success. This seems similar to your concerns about hubris amongst “high-impact individuals”.