I think we should slightly narrow the Overton Window of what ideas and behaviours are acceptable to express in EA spaces, to help exclude more harassment, assault and discrimination.
I also think EA at its best would primarily be more of a professional and intellectual community and less of a social circle, which would help limit harmful power dynamics, help limit groupthink and help promote intellectual diversity.
Does ‘narrowing the Overton Window of acceptable behaviours in EA, to help exclude more harassment, assault and discrimination’ just mean making harassment, assault, and discrimination socially unacceptable within EA? Because that seems like a no-brainer to me.
But then I don’t really know what “ideas” you have in mind. Is the idea just to generically reduce the size of the Overton window / the diversity and weirdness of ideas in EA, in the hope that this will by some indirect path reduce rates of bad behavior?
Mostly my response here will echo Zack Davis's in a LW thread in 2019:

So, sorry in advance if I’m reading way too much into a casual choice of words, but—this is an incredibly ominous metaphor, right? (I’m definitely not blaming you for anything, because I’ve also used it in just this context, and it took me a while to notice how incredibly ominous it is.)
Maybe my rationality realism is showing, but I thought the premise and promise of the website is that there are laws of systematically correct reasoning as objective as mathematics—different mathematicians from different cultures might have different interests (like analysis or algebra or combinatorics) or be accustomed to different notations, but ultimately, they’re all on the same cooperative quest for Truth—even if that cooperative process may occasionally involve some amount of yelling and crying.
The Overton window concept describes a process of social-pressure mind control, not rational deliberation: an idea is said to be “outside the Overton window” not on account of its being wrong, but on account of its being unacceptably unpopular. If a mathematician were to describe a debate with their colleagues about mathematics (as opposed to some dumb non-math thing like tenure or teaching duties) as an “Overton-window fight”, I would be pretty worried about the culture of that mathematics department, wouldn’t you?!
(So much for keeping this comment section normal. 😛 I apologize for nothing.)
Rather than trying to have an “Overton window” in EA, I’d bid that we just try to figure out what’s true, and ban topics from the EA Forum outright if we don’t want to talk about them.
Banning a topic (especially in a transparent and explicit way, that tries to avoid distorting any EA discussion of consequence) is the sort of thing you can do without lying, manipulating people, or spinning what you say.
On the other hand, allowing a topic, but saying “you’re allowed to give supporting arguments for X, but not allowed to give supporting arguments for not-X, even if the arguments themselves are good-quality and have no particular epistemic defect”, seems bad to me. (And potentially worse if it’s done via vague social pressure rather than via an explicit forum rule.)
I think you’re assuming we can disentangle the professional community from the social circles. I also strongly disagree with the claim that professional communities “help limit groupthink and help promote intellectual diversity.” In fact, the opposite is true: professional communities tend to enforce conformity and narrow the range of views people feel safe expressing.