Hi Jamie. Extremely interesting post! I’m giving my initial thoughts as an animal welfare researcher who has also participated in animal activism and AI protest.
I agree with Tyler that PauseAI, etc., are probably best characterised as moderate in tactics (if not in demands).
I don’t think more radical tactics would be helpful right now, while AI labs and governments are expressing intentions to regulate.
I agree it’s important to think about how knowledge flows from/between, e.g., GovAI and PauseAI. Should PauseAI be demanding that AI labs meet some of the governance proposals from Schuett et al. (2023)? Does the AI moratorium ask look less realistic when AI public intellectuals like Geoffrey Hinton say it’s not doable? If so, does that matter? Or does radical flank theory apply?
There’s room in this social change ecology for more actors focused on direct corporate engagement and shareholder activism. I know a couple of people are thinking about this.
Easier said than done: AI corporate campaigners should strive for robust demands that aren’t easily Goodharted or ‘alignment-washed’. Campaigners should also be wary of alienating potential allies by dismissing any corporate commitments relating to neartermist AI ethics as alignment-washing; corporate advocacy will demand political coalition-building.
George’s work on why companies are motivated to make CSR commitments might be worth reviewing, although he notes its methodological limitations.
Some AI campaigners have experience with animal campaigning, at least in the UK. I think the moral trade idea is interesting!
I think I agree with all of these points, with the tentative exception of the second.
I think adding more ‘bad cop’ advocacy groups into the mix could help motivate (or enforce?) companies to actually act on their stated intentions. After all, the intention–behaviour gap is real, and it’s hard to know companies’ true intentions.
Besides, advocacy groups could start by targeting companies that are less frontier but lagging behind on safety commitments or actions. This could help diffuse safety norms faster and reduce race dynamics, in which leading labs feel pushed to stay ahead of less safety-conscious organisations.