Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steelmanned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
I agree with your conclusion about this instance, but for very different reasons, and I don’t think it supports your wider point of view. It would be bad if EAs spent all their time discussing the Holocaust, because the Holocaust happened in the past, and so there is nothing we can possibly do to prevent it. As such, the discussion would likely be a purely academic exercise that does not help improve the world.
It would be very different to discuss a currently occurring genocide. If EAs were considering investing resources in fighting the Uighur genocide, for example, it would be very valuable to hear contrary evidence. If, for example, we learnt that far fewer people were being killed than we thought, or that the CCP’s explanations about terrorism were correct, this would be useful information that would help us prioritize our work. Equally, it would be valuable to hear if we had actually underestimated the death toll, for exactly the same reasons.
Similarly, Animal Rights EAs consider factory farming to be a modern holocaust, far larger than any prior one. But debate about this is a perfectly acceptable EA topic—even debate on subjects like ‘but do the victims (animals) have moral value?’
Or again, pro-life activists consider abortion to be a modern holocaust, far larger than any prior one. But debate about this is a perfectly acceptable EA topic—even debate on subjects like ‘but do the victims (fetuses) have moral value?’
People might create a dedicated ‘Effective Liberation for Xinjiang’ group and intend to discuss only methods there, not the fundamental premise. But if they started posting about the Uighurs in other EA groups, criticism of their project, including its fundamental premises, would be entirely legitimate.
I think this is true even if it made some hypothetical Uighur diaspora members of the group feel ‘unsafe’. People have a right to actual safety—clearly no one should be beating each other up at EA events. But an unlimited right to ‘feel safe’, even when this can only be achieved by imposing strict (and contrary-to-EA) restrictions on others, is clearly tyrannical. If you feel literally unsafe when someone makes an argument on the internet, you have a serious problem, and it is not our responsibility (or even within our power) to accommodate this. You should feel unsafe near cliff edges, or around strange men in dark alleys—not in a debate. Indeed, if feeling ‘unsafe’ is a trump card, then I will simply claim that I feel unsafe when people discuss BLM positively, due to the (from my perspective) implied threat of riots.
The analogy here, I think, is clear. It is legitimate to say that we will not discuss the Uighur genocide (or animal rights, or racism) in a given group because those topics are off-topic. What is not at all legitimate is to say that one side, but not the other, is forbidden.
Finally, I also think your strategy is potentially a bit dishonest. We should not hide the true nature of EA, whatever that is, from newcomers in an attempt to seduce them into the movement.
If you’re correct that the harms that come from open debate are only minor, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of the BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, in watching how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism caused them significant harm.
On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. It would not be difficult to tell new EAs that “moderation in one space follows different appropriateness rules than in another” without hiding the true nature of EA or being dishonest about it. This is relevant because one of the main EA Facebook groups is deciding right now how to implement moderation rules on exactly this question.
Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent both with caring a lot about the truth and with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth, they aren’t obviously wrong to do so. So signaling the former would be nice.
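To make the base-rate point concrete, here is a quick Bayesian sketch in which every number is an illustrative assumption, not an estimate: suppose only 10% of communities genuinely care about truth, that truth-caring communities keep debate open 90% of the time, and that the rest keep it open 40% of the time.

$$
% all numbers are illustrative assumptions, not estimates
P(\text{cares about truth} \mid \text{open debate}) = \frac{0.9 \times 0.1}{0.9 \times 0.1 + 0.4 \times 0.9} = \frac{0.09}{0.45} = 0.2
$$

Under those made-up numbers, an outsider who observes open debate should still assign only 20% to “this community cares about truth”, which is why an additional, harder-to-fake signal would do real work.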
Note: you talked about systemic racism, but a similar phenomenon seems to happen anywhere laymen profess expertise they don’t have. E.g., if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them, because most people who say that haven’t thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying “I think factory farming is terrible, but XYZ” instead of just “XYZ”.
I think this comment says what I was getting at in my own reply, though more strongly.