Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting to others; I've seen this firsthand.
I want to distinguish between "harmful" and "upsetting". It seems to me that there is a big difference between shouting "FIRE" in a crowded theater or commanding others to do direct harm, on the one hand, and being unable to focus for hours after reading a Facebook thread or being exhausted from fielding questions, on the other.
My intuition is that the "harm" of the first category is larger than that of the second. But even if that isn't true, and the harm of reading racist stuff is as bad as literal physical torture, there are a number of important differences.
For one thing, the speech acts in the first category have physical, externally legible bad consequences. This matters, because it means we can have rules around those kinds of consequences that can be socially enforced without those rules being extremely exploitable. If we adopt a set of discourse rules that say "we will ban any speech act that produces significant emotional harm", then anyone not acting in good faith can shut down any discourse they don't like by claiming to be emotionally harmed by it. Indeed, they don't even need to be consciously malicious (though of course there will be some explicitly manipulative bad actors); this creates a subconscious incentive to be, and act, more upset than you might otherwise be by some speech acts, because if you are sufficiently upset, the people saying things you don't like will stop.
Second, I note that both of the examples in the second category are much easier to avoid than those in the first. If there are Facebook threads that drain someone's ability to focus for hours, it seems pretty reasonable for that person to avoid such threads. Most of us have some political topics that we find triggering, and a lot of us find that browsing Facebook at all saps our motivation, so we have workarounds to avoid that stuff. These workarounds aren't perfect, and occasionally you'll encounter material that triggers you. But it seems way better to have that responsibility rest on the individual. Hence the idea of safe spaces in the first place.
Furthermore, there are lots of things that are upsetting (for instance, that people are dying of preventable malaria in the third world right now, and that this could, in principle, be stopped if enough people in the first world knew and cared about it, or that the extinction of humanity is plausibly imminent) which are nevertheless pretty important to talk about.
If there are Facebook threads that drain someone's ability to focus for hours, it seems pretty reasonable for that person to avoid such threads. … [It] seems way better to have that responsibility rest on the individual.
We agree here that if something is bad for you, you can just not go into the place where that thing is. But I think this is an argument in favor of my position: that there should be EA spaces where people like that can go and discuss EA-related stuff.
For example, some people have to be in the EAA Facebook group as part of their job. They are there to talk about animal stuff. So when people come into a thread about how to be antiracist while helping animals and decide to argue vociferously that racism doesn't exist, that is just needlessly inappropriate. It's not that the issue shouldn't ever be discussed; it's that it shouldn't be discussed there, in that thread.
We should allow people to work on EA stuff without having to be around the kind of stuff that is bad for them. If they feel unable to discuss certain topics without feeling bad, let them not go into threads on the EA Forum that discuss those topics. This we agree on. But then why say that we can't have a lesser EA space (like an EA Facebook group) where they can interact without discussion of the topics that make them feel bad? Remember, some of these people are employees whose very job description may require them to be active in the EAA Facebook group. They don't have a choice here; we do.