Some potential reasons why not were posted here:
https://www.lesswrong.com/posts/o8nPDZmmiLQhi6fwt/support-me-in-a-week-long-picketing-campaign-near-openai-s
I think this is potentially a unilateralist-curse-type situation, so people should be careful before they engage in these types of disruption.
Thanks for the link – very helpful. I’m surprised by how unpopular the suggestion of an OpenAI picket is on LW.
To be clear, is your suggestion that engaging in AI-focused direct action could lead to a unilateralist’s curse-type situation in which one government (presumably a goodish actor) pauses AI development, leaving others (presumably worse actors) to develop AI more easily?
If we could create a global AI-focused movement that would simultaneously pressure governments into coordinating a multilateral moratorium on development, would you support that?
No, I mean the unilateralist’s curse in a different way:

Let’s say lots of LessWrong people are thinking about whether to go blockade the OpenAI offices while holding signs and yelling at people that AI will kill everyone. Most of them conclude that this is a bad idea because there could be large negative consequences: OpenAI engineers becoming hostile to safety efforts, polarization, spreading an inaccurate picture of the problem, the wrong memes (Terminator, killer robots) going mainstream, making AI alignment look crazy, etc.

However, a small minority of LessWrong people could conclude otherwise – that the effects would be positive – and go ahead and blockade the OpenAI offices.

What one should do in such a situation: coordinate with a larger group of people, think more carefully about whether this is a good idea, listen to other people’s arguments, etc.
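To make the dynamic concrete, here is a minimal sketch in Python – a toy model with made-up numbers (the true value, the noise level, and the group sizes are all my assumptions, not anything from the thread). Each of N actors independently estimates the value of a net-negative action, and the action happens if any single actor’s estimate comes out positive.

```python
import random

random.seed(0)

TRUE_VALUE = -1.0  # assumption: the action (e.g. the blockade) is actually net-negative
NOISE_SD = 2.0     # assumption: each actor's judgment is a noisy read of the true value
TRIALS = 20_000

def p_action_taken(n_actors: int) -> float:
    """Fraction of trials in which at least one actor misjudges the
    action as net-positive and goes ahead unilaterally."""
    hits = 0
    for _ in range(TRIALS):
        if any(random.gauss(TRUE_VALUE, NOISE_SD) > 0 for _ in range(n_actors)):
            hits += 1
    return hits / TRIALS

for n in (1, 5, 20, 100):
    print(f"{n:>3} independent actors -> action taken {p_action_taken(n):.0%} of the time")
```

With one actor, the mistake rate is just the chance of a single misjudgment (about 31% under these numbers); with a hundred independent actors it approaches certainty. That is the curse, and it is why the remedy above is to coordinate and defer to the group’s aggregate judgment rather than act alone.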
I see – and I presume you would agree with the majority of LessWrong people in this scenario (i.e. that direct action is a bad idea)?
Would you say the same thing about direct action taken against fossil fuel companies?
I think protesting/blocking fossil fuel companies is different and less of a unilateralist’s-curse situation. For example, there is wide elite/expert agreement that more CO2 in the atmosphere is bad; we do not have that for the extinction of humanity due to AI. There have also been many protests against fossil fuels already, so additional protest is less likely to cause serious downsides or to set the tone for future attempts to solve the problem. The nature of the problem is also different: incompetent political solutions to global warming often still help reduce CO2 somewhat, but the same might not be true for AI Notkilleveryoneism.
I am not sure whether “direct action” (imo a terrible name, btw, if the theory of change is indirect) against AI would be a good idea, but I currently lean against it.
“For example, there is wide elite/expert agreement that more CO2 in the atmosphere is bad; we do not have that for the extinction of humanity due to AI.”

We don’t need to believe that AI will lead to human extinction to advocate for a moratorium on AI development. Karnofsky outlines a number of ways in which TAI could lead to global catastrophe here, and this 2021 survey of 44 AI risk researchers found a median existential-risk estimate of 32.5%. The risk from AI is a huge problem.

“There have also been many protests against fossil fuels already, so additional protest is less likely to cause serious downsides or to set the tone for future attempts to solve the problem.”

Do you think that climate protest is more harmful than helpful when it comes to solving the climate crisis?

“The nature of the problem is also different: incompetent political solutions to global warming often still help reduce CO2 somewhat, but the same might not be true for AI Notkilleveryoneism.”

This is a good point – but that’s an argument for competent political solutions, not for no political solutions (which is roughly what we have at the moment, I think?).