There’s such a wide-open field here that we can make a lot of headway with nice tactics. No question from me that should be the first approach, and I would be thrilled and relieved if that just kept working. There’s no reason to rush to hostility, and I don’t know if I would be able to run a group like that if I thought it was coming to that, but there may one day be a place for (nonviolent) hostile advocacy.
I sometimes see people make a similar point to yours in an illogical way, basically asserting that hostility never works, and I don’t agree with that. People think they hate PETA while updating in their direction about whatever issues they are advocating for and promptly forgetting they ever thought anything different. It’s a difficult role to play, but I think PETA absolutely knows what they are doing and how to influence people. It’s common in social change for moderate groups to get the credit, and for people to remember disliking the radical groups, but the direction of society’s update was determined by the radical flank pushing the Overton window.
The answer is not “hostility is bad/doesn’t work” or “hostility is good/works”. It depends on the context. It’s an inconvenient truth that hostility sometimes works, and works where nothing else does. I don’t think we should take it off the table forever.
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth. If showing hostility works to convey the situation, then hostility could be merited.
Again, though, one amazing thing about not having explored outside game much in AI Safety is that we have the luxury of pushing the Overton window with even the most bland advocacy. I think we should advance that frontier slowly. And I really hope it’s not necessary to advance into hostility.
EDIT: Just to be absolutely clear—the hard line that advocacy should not cross is violence. I am never using the word “hostility” to refer to violence.
I agree that confrontational/hostile tactics have their place and can be effective (under certain circumstances they are even necessary). I also agree that there are several plausible positive radical flank effects. Overall, I’d still guess that, say, PETA’s efforts are net negative—though it’s definitely not clear to me and I’m by no means an expert on this topic. It would be great to have more research on this topic.[1]
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.
Yeah, I’m sympathetic to such concerns. I sometimes worry about being biased against the more “dirty and tedious” work of trying to slow down AI or public AI safety advocacy. For example, the fact that it took us more than ten years to seriously consider the option of “slowing down AI” seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy. To be clear, there were of course also many good reasons not to consider such options earlier, such as a complete lack of public support. (And AI alignment research, generally speaking, is great, of course!)
It still seems possible to me that one can convey strong messages like “(some) AI companies are doing something reckless and unreasonable” while being nice and considerate, similarly to how Martin Luther King very clearly condemned racism without being (overly) hostile.
Again, though, one amazing thing about not having explored outside game much in AI Safety is that we have the luxury of pushing the Overton window with even the most bland advocacy.
For example, present participants with (hypothetical) i) confrontational and ii) considerate AI pause protest scenarios/messages and measure resulting changes in beliefs and attitudes. I think Rethink Priorities has already done some work in this vein.
For example, the fact that it took us more than ten years to seriously consider the option of “slowing down AI” seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.
I’d guess it’s also that advocacy and regulation just seemed less useful at the margin in most worlds, given the AI timelines suspected even three years ago?
Hmmm, your reply makes me more worried than before that you’ll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :’)
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.
I’m not completely sure what you mean by “legitimate jobs”, but I generally have the impression that EAs working on AI risks have very mixed feelings about AI companies advancing cutting-edge capabilities? Or sharing models openly? And I think reconceptualizing “the behavior of AI companies” (I would suggest trying to be more concrete in public, even here) as aggressive and hostile will itself be perceived as hostile, which you said you wouldn’t do? I think that’s definitely not “the most bland advocacy” anymore?
Also, the way you frame your pushback makes me worry that you’ll lose patience with considerate advocacy way too quickly:
“There’s no reason to rush to hostility”
“If showing hostility works to convey the situation, then hostility could be merited.”
“And I really hope it’s not necessary to advance into hostility.”
It would be convenient for me to say that hostility is counterproductive but I just don’t believe that’s always true. This issue is too important to fall back on platitudes or wishful thinking.
Also, the way you frame your pushback makes me worry that you’ll lose patience with considerate advocacy way too quickly
I don’t know what to say if my statements led you to that conclusion. I felt like I was saying the opposite. Are you just concerned that I think hostility can be an effective tactic at all?