I think this is tantamount to saying that we shouldn’t engage with the political system, compromise, or meet people where they are coming from in our advocacy. I don’t think other social movements would have got anywhere with that kind of attitude, and it seems especially tricky with something as detail-orientated as AI safety.
Inside-game approaches (versus outside-game approaches like the one this post describes) are going to require engaging in things this post says no one should do. Boldly stating exactly the ideal outcome you are after could have its role, but I’d need to see a much more detailed argument for why that should be the only game in town when it comes to AI.
I think that as AI safety turns more into an advocacy project, it needs to engage more with the existing literature on the subject, including what has worked for past social movements.
Also, importantly, this isn’t lying (as Daniel’s comment explains).
This is fundamentally different imo, because we aren’t asking people to right injustices, stick up for marginalised groups, care about future generations, or do good of any kind; we’re asking people not to kill literally everyone, including ourselves, and for those who would do so (however unintentionally) to be stopped by governments. It’s a matter of survival above all else.
I don’t think the scale or expected value affects this strategy question directly. You still just use whichever strategy is most likely to achieve the goal.
If the goal is something with really widespread agreement, that probably leans you towards an uncompromising, radical-ask approach. Things seem to be going pretty well for AI safety in that respect, though I don’t know that it’s been established that people are buying into the high-probability-of-doom arguments that much. I suspect we are much less far along than the climate change movement in that respect, for example. And even if support were much greater, I still wouldn’t agree with a lot of this post.
Oh, my expertise is in animal advocacy, not AI safety, FYI.
I think there is something to be said for the radical flank effect, and Connor and Gabe are providing a somewhat radical flank (even though I actually think the “fucking stop!” position is the most reasonable, moderate one, given the urgency and the stakes!).