Strongly downvoted. I agree with the other comments. I think this post is bad as is, especially in the current context of AI Safety discourse, and should be posted as part of a broader post about violent methods being ineffective (at least, assuming you’re writing such a post). I personally strongly want AI Safety discourse to condemn and disavow violent methods, which I think are both immoral and ineffective. I don’t think you believe that violence is a good idea here, but this post in isolation just feels like “hey, violent approaches exist, maybe worth thinking about, you wouldn’t be super weird for doing them”
To be very blunt, I’m very concerned about the optics of this post, especially in the wake of the Yudkowsky Time article and backlash. “What is the point of this post,” someone will naturally ask. Even I don’t understand its purpose. I personally don’t assume you have any dubious intent, but I am concerned about whether you’ve really considered the optics of this, especially given that there’s no “Disclaimer: I am not endorsing any of the violent actions listed herein, and am simply listing them in order to _______ [e.g., demonstrate what not to do]” posted up front.
I’ve considered the optics of this. It’s more of a precursor or reference post I’ll cite later in forthcoming posts I’m drafting now that will make the case that violent methods to reduce x-risks have historically and empirically proven to be far less effective than non-violent methods.
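Then I would strongly recommend putting the blurb you’ve written here at the beginning of the article.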
I’m drafting that up right now, but since I’m responding to your comments in real time, I just wanted to assure you right away that I’m adding a disclosure like that to my post.
Small typo in the blurb, currently reads: “case for why and how non-violent methods to reduce the risk of global civilization have been more effective than non-violent methods” I presume the second ‘non-violent’ is meant to be ‘violent’.
I’m not sure how useful it is to separate this post from your substantive argument about the inefficacy of violent methods. This post currently just points out the existence of violent acts by movements in the past, which some could interpret as a defence or justification of the recent controversy over violence in AI risk reduction. I’m aware that this is not the interpretation you intend. I think it would have been clearer to wait and include this post in your larger upcoming one.
Upvoted. Thanks for pointing out the typo. I’ve fixed that.
As to how “some” (who? Emile Torres? Some unambiguously more antagonistic, straight-up bad-faith actors who’ve got an agenda of demonizing EA and AI safety at all costs?) could interpret this as a defence of violence or a justification of the recent controversy over violence in AI risk reduction: there is no content in the post that could, in my view, be used as serious evidence that I’m defending or justifying violence in the name of mitigating the risk of any global catastrophe, let alone AI risk specifically. I’d consider any such allegation against me, my post, or even any part of the EA community at large, to be someone casting baseless aspersions to stigmatize me or my peers. If such allegations were levelled aggressively enough, I might even consider them slander.
To boot, there are parts of my post that, if anything, are meant to discourage violent action.
environmental movements became increasingly radical, or even violent, as the 1970s went on, which killed that wave of environmentalism by significantly alienating public/popular support and provoking an overwhelming backlash from the state.
I’ve added the emphasis here to make clear the implication: the bulk of available evidence indicates that non-violent methods were far more effective than violent methods in those 20th-century movements that are remotely analogous to movements aimed at reducing existential risk, from runaway climate change to unaligned machine superintelligence.
I acknowledge this isn’t a moral or ethical condemnation of violent action, and while that’s what a lot of others on the EA Forum might prefer, I’ve already clarified in my disclaimer that that isn’t my goal, as my overall goal will be to make a case that is more descriptive than prescriptive:
an empirical case for why and how non-violent methods to reduce the risk of global civilizational destruction have practically been more effective than violent methods.
I further mentioned how:
The third wave [of global radical environmentalism] is more like the second wave in how rates of violence remain relatively low, even maybe exceptionally low given the tens of millions of participants in radical environmental movements in the 2020s so far, and command more public/popular support.
One implication here is that, at least in my opinion, the evidence bears out a strong negative correlation between how successful movements to prevent global ecological catastrophe are and how violent they tend to be. I.e., the more violent they are, the less successful they are. If there are any lessons to be drawn from that for AI safety/alignment, they support the effectiveness of non-violent methods over violent methods.
For whatever my personal condemnation is worth:
I abhor terrorism. I’d still be morally disgusted by terrorism to the point of rejecting it even in the face of hypothetical, substantive arguments that it’d somehow be operationally effective for the ultimate achievement of some political goal (though I also doubt any convincing arguments like that really exist in the first place).
As to how my post might be exploited to cast effective altruism or AI alignment as an overall destructive movement, based on something like the narrative from its most prominent polemical critic and former effective altruist, Emile Torres, that “longtermism is the world’s most dangerous secular ideology”: like many others who still participate in effective altruism, I personally am not a longtermist. This is in significant part because I’m especially wary of how longtermism has evidently lent itself to rationalizing or motivating unjustifiable and extremely destructive actions, including the decisions that resulted in the ongoing catastrophe that is the FTX collapse, among other dire problems in EA.
Agreed and upvoted. Here is the blurb I’ve put at the top of my post:
Disclaimer: I’ve written this as a reference post to cite for other, forthcoming posts making an empirical case for why and how non-violent methods to reduce the risk of the destruction of global civilization have practically been more effective than violent methods. This post in no way is meant to endorse violent action to reduce any potential existential risk. It cannot and should not be used as a reference in support of promoting any such violent agenda.