The comments quoted in that tweet come perilously close to an incitement to violence. If you don’t think that anyone would actually commit violence due (partly) to ideas related to the rationalist community or AI alignment, I’ll point out that this has already happened, and possibly up to six people are dead because of it. (That’s not the whole explanation, but it’s part of it.)
When you are speaking publicly, I think you have a responsibility to be extra cautious not to incite violence, or to say things that could be interpreted as encouraging violence. You’re not just talking to the median person listening; you’re talking to everyone listening, including people who are impressionable, emotionally unwell, or predisposed to violence.
It’s unfortunate, but this is what it means to hold a position of responsibility in our society. You have to consider these things. I strongly disagree with dismissing these concerns as mere “social status games”: we are talking about a real, if small, risk of irreparable physical harm to innocent people.
What’s strange here is that the “warning shot” already happened: up to six murders, perpetrated by people who almost certainly had multiple risk factors at play (as almost always seems to be the case), but for whom the discourse, ideas, and subcultural social norms of the whole LessWrong/Bay Area rationalist/AI alignment world seemed to play a role.
[Edited on Nov. 22, 2025 at 6:00 PM Eastern to add: disturbingly, what I said in this comment turned out, just a few days later, to be more prescient than I could have realized.]
They don’t come remotely close to it. That’s a wildly dishonest characterization.
No, it’s an incredibly accurate characterization.
I don’t think you’re familiar with what actually constitutes incitement to violence.
You have to look beyond what was literally, directly said to what the most extreme people who are listening to those remarks might infer, or might feel encouraged to do. Saying that people should burn down AI labs and the employees should be jailed for attempted murder is not literally, directly saying someone should commit violence against the employees of AI companies, but it is easy for someone who is in an extremist mindset and who is emotionally unwell to take what was said that extra step further.
And there is no point arguing about what is true in theory or in principle when this violence is already happening, or being threatened.
No, actually; the legal standard used in courts is what was actually said, which needs to include a clear call for violence to be carried out at some point in the near future. It’s extremely frustrating to me that you’re misusing legal terms to lend your arguments weight they don’t hold; please cease to do so. https://en.wikipedia.org/wiki/Brandenburg_v._Ohio contains plenty of helpful information if you’d like to learn more about what “incitement to violence” means in America.
I didn’t say it was an incitement to violence; I said it was perilously close to one. What I mean is that the person making such statements can, indeed, completely avoid legal liability for them, and can plausibly deny any moral responsibility if violence occurs, even though the actual effect on a very small minority of listeners (those who aren’t in a headspace where they can safely process these kinds of inflammatory proclamations) may well be to encourage violence.
The important question is not what kind of speech is illegal; it is what kind of speech might be taken as encouragement (or discouragement) of the kind of violence, or threatened violence, that just happened, whether or not that is the speaker’s intention. I’m not trying to make a claim about what’s legal or illegal; I’m making a claim about what kinds of public statements are responsible or irresponsible.
It’s not “perilously close”, because it’s very different from incitement to violence. I have explained that incitement to violence requires a call for violence which is time-scoped to the near future; Sherman’s statement did not include a call for violence at all. You are correct that he bears no moral responsibility for the actions of people who heard his statements.
“Perilously close” has no legal definition, so what you are asserting is a matter of opinion, not a matter of fact. My intention in using “perilously close” was to convey that such statements have a similar kind of danger to statements that would meet the legal definition of incitement to violence, even though they are perfectly legal.
You know that I did not say people who make such statements bear no moral responsibility for how their words are interpreted, so I’m not sure what your intention is in making that false statement.
Since you have not signaled good faith, I won’t engage further.
They do not have a similar kind of danger; you are making false equivalences. Thank you for ceasing your censorious fearmongering.