I agree that confrontational/hostile tactics have their place and can be effective (under certain circumstances they are even necessary). I also agree that there are several plausible positive radical flank effects. Overall, I’d still guess that, say, PETA’s efforts are net negative, though that’s definitely not clear to me and I’m by no means an expert on this topic. It would be great to have more research on it.[1]
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones asking them for the onerous favor of making sure their work doesn’t kill everyone on Earth.
Yeah, I’m sympathetic to such concerns. I sometimes worry about being biased against the more “dirty and tedious” work of trying to slow down AI or of public AI safety advocacy. For example, the fact that it took us more than ten years to seriously consider the option of “slowing down AI” seems a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy. To be clear, there were also many good reasons not to consider such options earlier, such as a complete lack of public support. (And AI alignment research itself is, generally speaking, great, of course!)
It still seems possible to me that one can convey strong messages like “(some) AI companies are doing something reckless and unreasonable” while being nice and considerate, much as Martin Luther King Jr. very clearly condemned racism without being (overly) hostile.
Again, though, one amazing thing about not having explored the “outside game” much in AI safety is that we have the luxury of pushing the Overton window with even the blandest advocacy.
For example, one could present participants with hypothetical (i) confrontational and (ii) considerate AI-pause protest scenarios or messages and measure the resulting changes in beliefs and attitudes. I think Rethink Priorities has already done some work in this vein.
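To make that design concrete, here’s a minimal sketch, in Python with simulated data, of how one might analyze such an experiment. The sample size, attitude scale, and effect sizes are all made-up assumptions for illustration, and this is not a claim about how Rethink Priorities actually ran their studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # hypothetical participants per condition

# Pre-treatment attitude scores on a 1-7 scale; row 0 = confrontational
# message condition, row 1 = considerate message condition.
pre = rng.normal(4.0, 1.0, size=(2, n))

# Post-treatment scores; the per-condition shifts (+0.1, +0.4) are
# assumptions chosen purely for illustration.
post = pre + rng.normal([[0.1], [0.4]], 0.8, size=(2, n))

change = post - pre
for label, delta in zip(["confrontational", "considerate"], change):
    print(f"{label}: mean attitude change = {delta.mean():+.2f}")

# Two-sample t-test on the change scores: did the considerate message
# shift attitudes more than the confrontational one?
t, p = stats.ttest_ind(change[1], change[0])
print(f"t = {t:.2f}, p = {p:.3f}")
```

A real study would of course need proper randomization checks, validated attitude measures, and preregistered analyses, but the core comparison would look something like this.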
> For example, the fact that it took us more than ten years to seriously consider the option of “slowing down AI” seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.
I’d guess it’s also that advocacy and regulation just seemed less marginally useful in most worlds, given the AI timelines that were widely expected even three years ago?
Thanks, makes sense!