This is really valuable research, thank you for investigating and sharing!
My take from this is that there are people out there who are very good (and sometimes lucky) at organising effective protests. As @Geoffrey Miller comments below (above?), the protesters were not always right. But this is how the world works. We do not have a perfect world government. We can still learn from what they did that worked!
I believe that the world would be a much better place if EAs influenced more policy decisions. And if protesters were supporting EA positions, it’s highly likely that they’d be at least mostly right!
So I’m hoping that this work will help us figure out how to get more EA-aligned protesters.
I will be investigating a related area in my BlueDot Research Project, so may post something more on this later. But this article has already been a great help!
Hi Denis! Thank you for this. I agree that more EA influence on policy decisions would be a good outcome. As I tried to set out in this piece, ‘insiders’ currently advising governments on AI policy would benefit from greater salience of AI as an issue, which protests could help bring about.
As for how we can get more EA-aligned protestors … a really interesting question, and I’m looking forward to seeing what you produce!
My initial thoughts: rational arguments about AI activism probably aren’t necessary or sufficient for broader EA engagement. EAs aren’t typically very ideological/political, and I think psychological factors (“even though I want to protest, is this what serious EAs do?”) are strong motivators. I doubt many people seriously consider the efficacy/desirability of protests before going on one. (I didn’t, really.) Once protests become more mainstream, I suspect more people will join. A rough-and-ready survey of EAs and their reasons not to protest would be interesting. @Gideon Futerman mentioned this in passing.
Another constraint on getting more EAs to protests is a lack of funding. This is endemic to protest groups more generally, and I think it is also true for groups like PauseAI. I don’t believe there are any full-time organisers in the UK, for example.
Charlie—I appreciate your point about the lack of funding for AI-related protests.
There seems to be a big double standard here.
Many EA organizations are happy to spend tens of millions of dollars on ‘technical AI alignment work’, or on AI policy/governance work, in hopes that these will reduce AI extinction risk. Although, IMHO, both have a very low chance of actually slowing AGI development or of resulting in safe alignment, since ‘alignment with human values in general’ seems impossible in principle, given the diversity and heterogeneity of human values—as starkly illustrated in recent news from the Middle East.
But the same EA organizations aren’t yet willing to spend even a few tens of thousands of dollars on ‘Pause AI’ protests that, IMO, would have a much higher chance of sparking public discourse, interest, and concern about AI risks.
Funding protests is a tried-and-true method for raising public awareness. Technical AI alignment work is not a tried-and-true method for making AI safe. If our goal is to reduce extinction risk, we may be misallocating resources in directions that might seem intellectually prestigious, but that aren’t actually very effective in the real world of public opinion, social media, mainstream media, and democratic politics.
Lightspeed Grants and my smaller individual donors should get credit for funding me to work full-time on advocacy, which includes protests! Sadly, AFAIK that is the only EA/adjacent funding that has gone toward public advocacy for AI Safety.