Charlie—thanks very much for this informative and valuable analysis.
Some EAs might react by thinking 'Well, in many of these cases the protesters were wrong, misguided, or irrational, so it feels weird to learn from protests that were successful but that addressed the wrong cause areas.' That was my first reaction, especially regarding protests against nuclear power, GMOs, and geo-engineering (all of which I support, more or less).
So I think it’s important to separate goals from strategies, and to learn effective strategies even from protest movements that may have had wrong-headed goals.
Indeed, taking that point seriously, it may be worth broadening our historical consideration of successful protest movements from anti-new-technology protests to other situations in which protesters succeeded in challenging entrenched corporate power and escaping from geopolitical arms races. There again, I think we should feel free to learn effective tactics even from movements with misguided goals.
Hi Geoffrey, I appreciate that: thank you!
I agree with you that taking lessons from groups whose goals you might object to seems counter-intuitive. (I might also add that protests against nuclear weapons programs, fossil fuels, and CFCs seemed to have had creditable aims.) Nonetheless, I also agree that we can learn effective strategies from groups with wrong-headed goals. Restricting the data to only the groups we agree with would lose lessons about efficacy, messaging, allyship, etc.
(There's also a broader question about whether this mixed reference class should make us worry about bad epistemics in the AI activism community. @Oscar Delaney made a related comment on my other piece. However, I am comparing groups on the circumstances they faced (similar geopolitical/corporate incentives), not on their epistemics.)
I also agree that widening the scope beyond anti-technology protests would be interesting!
I would caution against taking strong stances on whether we support something without having rigorously assessed it according to EA epistemic standards. This feels particularly acute in the cases of GMOs and geo-engineering. For GMOs, the OP highlighted the justice dimension of the protests: powerful corporations on one side and, sometimes, poor farmers on the other. For geo-engineering in Sweden, it should be noted that Sweden has not signed the Declaration on the Rights of Indigenous Peoples. Not heeding the needs and perspectives of such marginalized groups has historically been linked to horrific outcomes, and since EA has its origins in helping these very groups, I would strongly caution against taking any strong stance here without very careful deliberation.
This also relates to another comment on this post about how "the world would be better if EAs influence more policy decisions," which makes me cautious about hubris from EAs believing we can "fix everything." One thing I like about EA is how we focus on very small, targeted interventions, understanding them thoroughly before trying to make a change. History seems seldom to judge harshly those who simply sought to help people clearly in need, with limited scope, whereas people or small groups with grand ideas of revolutionizing the world often seem to end up as history's antagonists.
Apologies for the rant; I actually came to this post to raise another point but felt the need to react to some of the sentiment in the other comments.
Ulrik—I agree with you that 'History seems seldom to judge harshly those who simply sought to help people clearly in need, with limited scope, whereas people or small groups with grand ideas of revolutionizing the world often seem to end up as history's antagonists.'
It's important to note that the AI industry promoters who advocate rushing full speed ahead towards AGI are a typical example of 'small groups with grand ideas of revolutionizing the world', e.g. by promising that AGI will solve climate change, solve aging, create mass prosperity, eliminate the need to work, etc. They are the dreamy utopians who are willing to impose huge risks and dangers on everybody else in pursuit of their vision of an ideal society.
The people advocating an AI Pause (like me) are focused on a 'very small targeted intervention' of the sort that you support: shutting down the handful of companies that are rushing towards AGI, which involve just a few thousand researchers in just a few cities.
Yes, I agree. Apologies for responding to the other comment on this post in my reply to your comment—I think that created unnecessary confusion.
Ulrik—thanks; understood!