Regarding allies:
• I agree that working with other groups is great when we have a common interest. Take, for example, the FLI letter. This was a highly successful example of a collaboration with some AI ethics people.
• At the same time, I’m less optimistic about any plans that involve developing our strategy in broad-tent groups, which could dilute our focus. This doesn’t just apply to the AI ethics community, with whom we have an unfortunately fractious relationship, but also to artists. Of course, I think it makes sense to collaborate with them when our interests align.
• I’m less a fan of disruptive tactics, especially since we have allies within these firms. Disruption is a cheap way to get attention, and I suspect that if we’re strategic we can find other ways to draw attention to our concerns without risking turning the public against us. For example, persuading a large number of people to wear the same t-shirt at a conference might actually be more effective.
Hi Chris, thank you for this.
1) Nice! Agreed
2) It really depends on what form the alliance takes. It could be implicit: fundraising for artists’ lawsuits, for example, without any major change to public messaging. I don’t think this would dilute the focus on existential risk. When Baptists allied with Bootleggers in the Prohibition era, it did not dilute their focus away from Christianity! I also think there are genuine common interests here: restrictions on GAI models. (https://forum.effectivealtruism.org/posts/q8jxedwSKBdWA3nH7/we-are-not-alone-many-communities-want-to-stop-big-tech-from)
That being said, if PauseAI did try to become a broad ‘AI protest group’, including via its messaging, this would dilute the focus on x-risk. Though a mixture of near-term and long-term messaging may be more effective in reaching a broader audience. As mentioned in another comment, identifying concrete examples of harms to specific people/groups is an important part of ‘injustice frames’. (I am more unsure about this, though.)
3) I am also hesitant about more disruptive protest tactics, in particular because of allies within firms. But I don’t think that disruptive protests necessarily have to turn the public against us… no more than blocking ships made GMO protestors unpopular. The efficacy of disruptive tactics is quite issue-dependent… I think it would be useful if someone did a thorough lit review of disruptive protests.