Third-year PPE student at UCL and former President of the EA Society there. I was an ERA Fellow in 2023 & researched historical precedents for AI advocacy.
Charlie Harrison
Appreciate that @Remmelt Ellen! In theory, I think these messages could work together. Though, given the animosity between these communities, I think alliances are more challenging. Also, I’m curious: what sort of policies would be mutually beneficial for people concerned about facial recognition and x-risk?
Hi Stephen, thank you for this piece.
I wonder about how relevant this case study is: housing doesn’t have significant geopolitical drivers, and construction companies are much less powerful than AI firms. Pushing the ‘Overton window’ towards onerous housing restrictions strikes me as significantly more tractable than shifting it towards a global moratorium on AI development, as PauseAI people want. A less tractable issue might require more radical messaging.
If we look at cases which I think are closer analogues for AI protests (e.g. climate change), protests often used maximalist rhetoric (e.g. Extinction Rebellion calling for a net-zero target of 2025 in the UK), which brought more moderate policies (e.g. the 2050 net-zero target) into the mainstream.
In short, I don’t think we should generalise from one issue (NIMBYs), which is different in many ways from AI, to what might look like good politics for AI safety people.
Hi Denis! Thank you for this. I agree that more EA influence on policy decisions would be a good outcome. As I tried to set out in this piece, ‘insiders’ currently advising governments on AI policy would benefit from greater salience of AI as an issue, which protests could help bring about.
In terms of how we can get more EA-aligned protestors … that’s a really interesting question, and I’m looking forward to seeing what you produce!
My initial thoughts: rational arguments about AI activism probably aren’t necessary or sufficient for broader EA engagement. EAs aren’t typically very ideological/political, and I think psychological factors (“even though I want to protest, is this what serious EAs do?”) are strong motivators. I doubt many people seriously consider the efficacy/desirability of protests before going on one. (I didn’t, really.) Once protests become more mainstream, I suspect more people will join. A rough-and-ready survey of EAs and their reasons not to protest would be interesting. @Gideon Futerman mentioned this in passing.
Another constraint on more EAs at protests is a lack of funding. This is endemic to protest groups more generally, and I think it is also true of groups like PauseAI. I don’t think there are any full-time organisers in the UK, for example.
Hi Geoffrey, I appreciate that: thank you!
I agree with you that taking lessons from groups with goals you might object to seems counter-intuitive. (I might also add that protests against nuclear weapons programs, fossil fuels, and CFCs seem to have had creditable aims.) However, I also agree that we can learn effective strategies from groups with wrong-headed goals. Restricting the data to just groups we agree with would lose lessons about efficacy/messaging/allyship etc.
(There’s also a broader question about whether this mixed reference class should make us worry about bad epistemics in the AI activism community. @Oscar Delaney made a related comment on my other piece. However, I am comparing groups on the circumstances they were in (facing similar geopolitical/corporate incentives), not on their epistemics.)
I also agree that widening the scope beyond anti-technology protests would be interesting!
Hi Chris, thank you for this.
1) Nice! Agreed
2) It really depends on what form of alliance this takes. It could be implicit: fundraising for artists’ lawsuits, for example, without any major change to public messaging. I don’t think this would dilute the focus on existential risk. When Baptists allied with Bootleggers in the Prohibition era, this did not dilute their focus on Christianity! I also think that there are indeed common interests here: restrictions on GAI models. (https://forum.effectivealtruism.org/posts/q8jxedwSKBdWA3nH7/we-are-not-alone-many-communities-want-to-stop-big-tech-from).
That being said, if PauseAI did try to become a broad ‘AI protest group’, including via its messaging, this would dilute the focus on x-risk. Though a mixture of near-term and long-term messaging may be more effective in reaching a broader audience. As mentioned in another comment, identifying concrete examples of harms to specific people/groups is an important part of ‘injustice frames’. (I am more unsure about this, though.)
3) I am also hesitant about more disruptive protest tactics, in particular because of allies within firms. But I don’t think that disruptive protests necessarily have to turn the public against us… no more than blocking ships made GMO protestors unpopular. The efficacy of disruptive tactics is quite issue-dependent… I think it would be useful if someone did a thorough lit review of disruptive protests.
Thanks for these questions, Oscar! To be clear, I was suggesting that effective messaging would emphasise the injustice of continued AI development in an emotionally compelling way: e.g. the lack of democratic input into corporate attempts to build AGI. I wasn’t talking so much about communicating near-term injustices. Though I take your point that allying with other groups suffering from near-term harms would imply a combined near-term and long-term message.
On your first question: would thinking about near-term and long-term harms lead to worse thinking? Do you mean this would make us care about AI x-risk less?
And on your second point, on whether it would be perceived as manipulative: I don’t think so. If AI protest can effectively communicate a ‘We are fighting a shared battle’ message, as @Gideon Futerman has written about, this could make AI protests seem less niche/esoteric. Identifying concrete examples of harms to specific people/groups is an important part of ‘injustice frames’, and could make AI risk more salient. In addition, broad ‘coalitions of the willing’ (i.e. Baptists and Bootleggers) are very common in politics. What do you think?
Sounds interesting, Oscar, though I wonder what reference class you’d use … all protests? A unique feature of AI protests is that many AI researchers are themselves protesting. If we are comparing groups on epistemics, the Bulletin of the Atomic Scientists (founded by Manhattan Project scientists) might be a closer comparison than GM protestors (who were led by Greenpeace, farmers etc., not people working in biotech). I also agree that considering inside-view arguments about AI risk is important.
Thank you for your comments, Kasey! Glad you think it’s an interesting comparison. I agree with you that GMOs were over-regulated in Europe. Perhaps I should have said explicitly that the scientific consensus is that GMOs are safe. I do make a brief caveat in the Intro that I’m not comparing the “credibility of AI safety concerns (which appear more legitimate than GMO concerns)”, though this deserves more detail.
Thank you @Ulrik Horn! I think warning shots may very well be important.
From my other piece: building up organisations in anticipation of future ‘trigger events’ is vital for protests, so that they can mobilise and scale in response; this was the organisational factor which experts thought was most important for protests. I think the same is true for GMOs: pre-existing social movements were able to capitalise on the trigger events of 1997/1998 in part because of prior mobilisation starting in the 1980s.
I also think that an engineered pathogen event is a plausible warning shot for AI, though we should also broaden our scope of what could lead to public mobilisation. Lots of ‘trigger events’ for protest groups (e.g. Rosa Parks, the Arab Spring) did not stem from warning shots but from instances of injustice. Similarly, there weren’t any ‘warning shots’ demonstrating harm from GMOs. (I say more about this in my other piece!)