First of all, I took this comment to be sincere and in the spirit of dialogue. Thank you and salutations.
[Everything that I say in this comment is tentative, and I may change my mind.]
Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steelmanned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I’m inclined to bite the bullet of allowing that sort of conversation.
The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn’t allow that kind of talk: namely, that “the Holocaust happened, and Holocaust denial is false”.
Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.
I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to rule out any discussion to the contrary.
If people are making strong arguments for a false conclusion, those arguments should be countered with arguments, not social censure.
This is the case even if none of the EAs talking about it actually believe it. Even if they are just steelmanning devil’s advocates...
In the situation where EAs are making such arguments not out of honest truth-seeking, but as playing edge-lord / trying to get attention / etc., then I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)
But mostly, I would say that if any people in an EA group were threatening violence, racially motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in cases where someone is politely advocating for violent action down the line, e.g., the Marxist who has never personally threatened anyone but is advocating for a violent revolution.)
...
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn’t really apply.
I think so. I expect that any rigid rule is going to have edge cases that are bad enough that you should treat them differently. But I don’t think we’re on the same page about what the relevant scalar is.
If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA?
It depends entirely on what is meant by “certain forms”, but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as “racist”, because that is a convenient and unarguable way to attack those ideas.
I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren’t actually going to do anything), Eli-University would come down on them hard.
If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group were set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)
The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don’t know enough about the world to rule out discussion of that line of thinking entirely.
...
I will report here that a large number of people I see talking in private Facebook groups, on private Slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces.
I would love to hear more about the details there. In what ways do people not feel safe?
(Is it things like this comment?)
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Yeah. I want to know more about this. What kind of harm?
My default stance is something like, “look, we’re here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are ‘harmed’ by speech-acts, I’m sorry for you, but tough nuggets. I guess you shouldn’t participate in this discourse.”
That said, if I had a better sense of what kinds of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you’ll agree that these are questions of degree, not of kind.
Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.
...
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed as suggesting that we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who would otherwise be participating in this space.
I’m tentatively suggesting that we should pay close to no attention to the possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:
I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, shouldn’t that make us question whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGM seems to me to be very important in ensuring that we arrive at true conclusions.
I believe the methods by which we arrive at true conclusions don’t need to involve Alastor Moody-levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.
I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on Holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent of physical harm in some people.

I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name-calling, no doxxing, no brigading. In the EAA FB group, we take as a given that animals are harmed and we should help them. We restrict debate on that point there because it’s inappropriate to debate it there. That doesn’t mean it can’t be debated elsewhere.

To me, restricting the denial of racism (or the denial of genocide) is just an additional rule of this type. It doesn’t mean it can’t be discussed elsewhere. It just isn’t appropriate there.
In what ways do people not feel safe? (Is it things like this comment?) … I want to know more about this. What kind of harm?
No, it’s not things like this comment. We are in a forum where discussing this kind of thing is expected and appropriate.
I don’t feel like I should say anything that might inadvertently out some of the people I have seen in private groups talking about these harms. Many of these EAs are not willing to speak out about this issue because they fear being berated for having these feelings. It’s not exactly what you’re asking for, but a few such people are already public about the effects of those harms. Maybe their words will help: https://sentientmedia.org/racism-in-animal-advocacy-and-effective-altruism-hinders-our-mission
“[T]aking action to eliminate racism is critical for improving the world, regardless of the ramifications for animal advocacy. But if the EA and animal advocacy communities fail to stand for (and not simply passively against) antiracism, we will also lose valuable perspectives that can only come from having different lived experiences—not just the perspectives of people of the global majority who are excluded, but the perspective of any talented person who wants to accomplish good for animals without supporting racist systems.
I know this is true because I have almost walked away from these communities myself, disquieted by the attitudes toward racism I found within them.”