Upvoted. Thanks for pointing out the typo. I’ve fixed that.
As to how “some” (who? Emile Torres? Some unambiguously more antagonistic, straight-up bad-faith actors with an agenda of demonizing EA and AI safety at all costs?) could interpret this as a defence of violence, or as a justification of the recent controversy over violence in AI risk reduction: there is no content in the post that I consider could be used as serious evidence that I’m defending or justifying violence in the name of mitigating the risk of any global catastrophe, let alone AI risk specifically. I’d consider any such allegation against me, my post, or any part of the EA community at large to be baseless aspersion cast to stigmatize me or my peers. If such allegations were levelled aggressively enough, I might even consider them defamatory.
To boot, there are contents of my post that, if anything, mean to discourage violent action.
environmental movements became increasingly radical, or even violent, as the 1970s went on, which killed that wave of environmentalism by significantly alienating public/popular support and provoking an overwhelming backlash from the state.
I’ve added the emphasis here to make clear the implication: the bulk of available evidence indicates that non-violent methods were far more effective than violent methods in those 20th-century movements that are remotely analogous to movements aimed at reducing existential risk, whether from runaway climate change or unaligned machine superintelligence.
I acknowledge this isn’t a moral or ethical condemnation of violent action. While that’s what many others on the EA Forum might prefer, I’ve already clarified in my disclaimer that it isn’t my goal, as my overall goal is to make a descriptive rather than prescriptive case:
an empirical case for why and how non-violent methods to reduce the risk of global civilizational destruction have practically been more effective than violent methods.
I further mentioned how:
The third wave [of global radical environmentalism] is more like the second wave in how rates of violence remain relatively low, even maybe exceptionally low given the tens of millions of participants in radical environmental movements in the 2020s so far, and command more public/popular support.
One implication here is that, at least in my opinion, the evidence bears out a strong negative correlation between how successful movements to prevent global ecological catastrophe are and how violent they tend to be. I.e., the more violent they are, the less successful they are. If there are any lessons to be drawn from this for AI safety/alignment, they support the effectiveness of non-violent methods over violent ones.
For what it’s worth for me to personally condemn anything:
I abhor terrorism. I’d still be morally disgusted by terrorism, to the point of rejecting it, even in the face of hypothetical, substantive arguments that it’d somehow be operationally effective for the ultimate achievement of some political goal (though I also doubt any convincing arguments like that exist in the first place).
As to how my post might be exploited to cast effective altruism or AI alignment as an overall destructive movement, based on something like the narrative from its most prominent polemical critic, and former effective altruist, Emile Torres, that “longtermism is the world’s most dangerous secular ideology”: like many others who still participate in effective altruism, I personally am not a longtermist. That’s in significant part because I’m especially wary of how longtermism has evidently lent itself to rationalizing or motivating unjustifiable and extremely destructive actions, including the decisions that resulted in the ongoing catastrophe of the FTX collapse, among other dire problems in EA.