Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel-manned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn’t have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed as suggesting that we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.
I strongly believe that representation, equity, and inclusiveness are important in the EA movement. I believe it so strongly that I try to look at what people are saying in the safe spaces where they feel comfortable talking about the EA norms that scare them away. I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces. I am not merely saying that they are “worried” about where EA is heading; I’m saying that right here, right now, they feel uncomfortable fully participating in generalized EA spaces.
You say that “If people wouldn’t like the discourse norms in the central EA spaces…I would prefer that they bounce off.” In principle, I think we agree on this. Casual claims that we are being alienating should not faze us. But there does exist a point at which I think we might agree that those claims are sufficiently strong, as in the Holocaust denial example. The question, then, is not one of kind, but of degree. The question turns on whether the harm caused by certain forms of speech outweighs the benefits accrued by discussing those things.
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn’t really apply.
Q2: You mentioned having similar standards to academia. If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA? Or do you mean only having similar standards to what academics discuss amongst each other, setting aside completely how universities deal with undergraduate students’ spaces?
I have significant cognitive dissonance here. I’m not at all certain about what I personally feel. But I do want to report that there are large numbers of people, in several disparate places, many of whom I doubt interact with one another in any significant way, who all keep saying in private that they do not feel safe here. I have seen people actively harmed by EAs casually making the case that systemic racism is not real, and I can report that it is not a minor harm.
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you’ll agree that these are questions of degree, not of kind. After seeing the level of harm that these kinds of speech acts cause, I think my position of moving that discourse away from introductory spaces is warranted. But I also strongly agree with traditional enlightenment ideals of open discussion, free speech, and that the best way to show an idea is wrong is to seriously discuss it. So I definitely don’t want to ban such speech everywhere. I just want there to be some way for us to have good epistemic standards and also benefit from EAs who don’t feel safe in the main EA Facebook groups.
To borrow a phrase from Nora Caplan-Bricker, they’re not demanding that EA spaces be happy places where they never have to read another word of dissent. Instead, they’re asking for a level of acceptance and ownership that other EAs already have. They just want to feel safe.
Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel-manned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
I agree with your conclusion about this instance, but for very different reasons, and I don’t think it supports your wider point of view. It would be bad if EAs spent all their time discussing the Holocaust, because the Holocaust happened in the past, and so there is nothing we can possibly do to prevent it. As such, the discussion is likely to be a purely academic exercise that does not help improve the world.
It would be very different to discuss a currently occurring genocide. If EAs were considering investing resources in fighting the Uighur genocide, for example, it would be very valuable to hear contrary evidence. If, for example, we learnt that far fewer people were being killed than we thought, or that the CCP’s explanations about terrorism were correct, this would be useful information that would help us prioritize our work. Equally, it would be valuable to hear if we had actually under-estimated the death toll, for exactly the same reasons.
Similarly, Animal Rights EAs consider our use of factory farming to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic—even debate on subjects like ‘do the victims (animals) have moral value?’
Or again, pro-life activists consider our use of abortion to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic—even debate on subjects like ‘do the victims (fetuses) have moral value?’
It might be the case that people make a dedicated ‘Effective Liberation for Xinjiang’ group, and intend to discuss only methods there, not the fundamental premise. But if they started posting about the Uighurs in other EA groups, criticism of their project, including its fundamental premises, would be entirely legitimate.
I think this is true even if it made some hypothetical Uighur diaspora members of the group feel ‘unsafe’. People have a right to actual safety—clearly no one should be beating each other up at EA events. But an unlimited right to ‘feel safe’, even when this can only be achieved by imposing strict (and contrary to EA) restrictions on others, is clearly tyrannical. If you feel literally unsafe when someone makes an argument on the internet, you have a serious problem, and it is not our responsibility (or even within our power) to accommodate this. You should feel unsafe while near cliff edges, or around strange men in dark alleys—not in a debate. Indeed, if feeling ‘unsafe’ is a trump card, then I will simply claim that I feel unsafe when people discuss BLM positively, due to the (from my perspective) implied threat of riots.
I think the analogy here is clear. It is legitimate to say we will not discuss the Uighur genocide (or animal rights, or racism) in a given group because they are off-topic. What is not at all legitimate is to say that one side, but not the other, is forbidden.
Finally, I also think your strategy is potentially a bit dishonest. We should not hide the true nature of EA, whatever that is, from newcomers in an attempt to seduce them into the movement.
If you’re correct that the harms that come from open debate are only minor harms, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, in watching how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism has caused them significant harm.
On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. I think it would not be difficult to tell new EAs that “moderation in one space has different appropriateness rules than in some other space” without hiding the true nature of EA and/or being dishonest about it. This is relevant because one of the main EA Facebook groups is deciding right now how to implement moderation rules with regard to this stuff.
Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren’t obviously wrong to do so. So signaling the former would be nice.
Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don’t have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven’t thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying “I think factory farming is terrible but XYZ” instead of just “XYZ”.
First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.
[Everything that I say in this comment is tentative, and I may change my mind.]
Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel-manned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I’m inclined to bite the bullet of allowing that sort of conversation.
The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn’t allow that kind of talk: namely, that “the Holocaust happened, and Holocaust denial is false”.
Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.
I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to sweep away any discussion to the contrary.
If they are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.
This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates...
In the situation where EAs are making such arguments not out of honest truth-seeking, but as playing edge-lord / trying to get attention / etc., then I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)
But mostly, I would say that if any people in an EA group were threatening violence, racially motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in the cases where someone is politely advocating for violent action down the line, e.g. the Marxist who has never personally threatened anyone, but is advocating for a violent revolution.)
...
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn’t really apply.
I think so. I expect that any rigid rule is going to have edge cases that are bad enough that you should treat them differently. But I don’t think we’re on the same page about what the relevant scalar is.
If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA?
It depends entirely on what is meant by “certain forms”, but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as “racist”, because that is a convenient and unarguable way to attack those ideas.
I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren’t actually going to do anything), Eli-University would come down on them hard.
If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group are set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)
The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don’t know enough about the world to rule out discussion of that line of thinking entirely.
...
I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces.
I would love to hear more about the details there. In what ways do people not feel safe?
(Is it things like this comment?)
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Yeah. I want to know more about this. What kind of harm?
My default stance is something like, “look, we’re here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are ‘harmed’ by speech-acts, I’m sorry for you, but tough nuggets. I guess you shouldn’t participate in this discourse. ”
That said, if I had a better sense of what kind of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you’ll agree that these are questions of degree, not of kind.
Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.
...
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed as suggesting that we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.
I’m tentatively suggesting that we should pay close to no attention to the possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:
I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, doesn’t that make us suspect whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGMs seems to me to be very important in ensuring that we arrive at true conclusions.
I believe the methods by which we arrive at true conclusions don’t need to be Alastor Moody-levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.
I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on Holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent of physical harm for some people. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost.

We already restrict debate in other, similar ways: no name calling, no doxxing, no brigading. In the EAA FB group, we take as a given that animals are harmed and we should help them. We restrict debate on that point there because it’s inappropriate to debate it there. That doesn’t mean it can’t be debated elsewhere. To me, restricting the denial of racism (or the denial of genocide) is just an additional rule of this type. It doesn’t mean it can’t be discussed elsewhere. It just isn’t appropriate there.
In what ways do people not feel safe? (Is it things like this comment?) … I want to know more about this. What kind of harm?
No, it’s not things like this comment. We are in a forum where discussing this kind of thing is expected and appropriate.
I don’t feel like I should say anything that might inadvertently out some of the people that I have seen in private groups talking about these harms. Many of these EAs are not willing to speak out about this issue because they fear being berated for having these feelings. It’s not exactly what you’re asking for, but a few such people are already public about the effects from those harms. Maybe their words will help: https://sentientmedia.org/racism-in-animal-advocacy-and-effective-altruism-hinders-our-mission
“[T]aking action to eliminate racism is critical for improving the world, regardless of the ramifications for animal advocacy. But if the EA and animal advocacy communities fail to stand for (and not simply passively against) antiracism, we will also lose valuable perspectives that can only come from having different lived experiences—not just the perspectives of people of the global majority who are excluded, but the perspective of any talented person who wants to accomplish good for animals without supporting racist systems.
I know this is true because I have almost walked away from these communities myself, disquieted by the attitudes toward racism I found within them.”