Some of the people who have spent the most time doing the above came to the conclusion that EA should be more cautious and attentive to diversity.
Edited from earlier comment: I think I am mostly confused about what diversity has to do with this decision. It seems to me that there are many pro-diversity reasons to not deplatform Hanson. Indeed, the primary one cited, one of intellectual diversity and tolerance of weird ideas, is primarily an argument in favor of diversity. So while diversity plays some role, I think I am actually confused about why you bring it up here.
I am saying this because I wanted to argue against things in the last section, but realized that you just use really high-level language like “diversity and inclusion” which is very hard to say anything about. Of course everyone is in favor of some types of diversity, but it feels to me like the last section is trying to say something like “people who talked to a lot of people in the community tend to be more concerned about the kind of diversity that having Robin as a speaker might harm”, but I don’t actually know whether that’s what you mean. But if you do mean it, I think that’s mostly backwards, based on the evidence I have seen.
I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.
You’d expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn’t lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section worry that some people might feel alienated and unwelcome if Robin were a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.
A simple example of the kind of thing I’m thinking of (which I’m aware is too simplistic to represent reality in full, but does draw from the experiences of people I’ve met):
A German survivor of sexual abuse is interested in EA Munich’s events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on “gentle silent rape” and find it viscerally unpleasant. They’ve seen other discussion spaces where ideas like Hanson’s were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
There are many reasons someone might want to engage more or less with EA based on the types of views discussed within the community. Different types of deplatforming probably lead to different types of diversity being more or less present, and send different kinds of signals about effective altruism into the wider world.
For example, I’ve heard one person (who was generally anti-deplatforming) argue against ever promoting events that blend EA and religious thought, because (as best I understood it) they saw religious views as antithetical to things they thought were important about EA. This brings up questions like:
Will the promotion of religion-aligned EA events increase or decrease the positive impact of EA, on net?
There are lots of trade-offs that make this hard to figure out.
Is it okay for an individual EA group to decline to host a speaker if they discover that the speaker is an evangelical Christian who wrote some grisly thought experiments about how Hell might work, if it were real? Even if the event was on an unrelated topic?
This seems okay to me. Again, there are trade-offs, but I leave it to the group to navigate them. I might advise them one way or another if they asked me, but whatever their decision was, I’d assume they did what made the most sense to them, based partly on private information I couldn’t access.
As a movement, EA aims to have a lot of influence across a variety of fields, institutions, geographic regions, etc. This will probably work out better if we have a movement that is diverse in many ways. Entertaining many different ideas probably lets us make more intellectual progress. Being welcoming to people from many backgrounds gives us access to a wider range of ideas, while also making it easier for us to recruit more people*, spread our ideas to more places, etc. On the other hand, if we try to be welcoming by restricting discussion, we might lead the new people we reach to share their ideas less freely, slowing our intellectual progress. Getting the right balance seems difficult.
I could write much more along these themes, but I’ll end here, because I already feel like I’m starting to lose coherence.
*And of course, recruiting more people overall means you get an even wider range of ideas. Even if there aren’t ideas that only people from group X will have, every individual is a new mind with new thoughts.
I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.
I find it interesting that you thought “diversity” is a good shorthand for “social justice”, whereas other EAs naturally interpreted it as “intellectual diversity” or at least thought there’s significant ambiguity in that direction. Seems to say a lot about the current moment in EA...
Getting the right balance seems difficult.
Well, maybe not, if some of the apparent options aren’t real options. For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope. (Or to quickly run back to level ground while you still have some chance, as I’m starting to suspect that EA has taken quite a few steps down the slope already.)
It may be that in the end EA can’t fight (i.e., can’t win against) SJ-like dynamics, and therefore EA joining cancel culture is more “effective” than it getting canceled as a whole. If EA leaders have made an informed and well-considered decision about this, then fine, tell me and I’ll defer to them. (If that’s the case, I can appreciate that it would be politically impossible to publicly lay out all of their reasoning.) It scares me though that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about “getting the right balance” without even mentioning the obvious possibility of a slippery slope.
I find it interesting that you thought “diversity” is a good shorthand for “social justice”, whereas other EAs naturally interpreted it as “intellectual diversity” or at least thought there’s significant ambiguity in that direction. Seems to say a lot about the current moment in EA...
I don’t think it says much about the current moment in EA. It says a few things about me:
That I generated the initial draft for this post in the middle of the night with no intention of publishing
That I decided to post it in a knowingly imperfect state rather than fiddling around with the language at the risk of never publishing, or publishing well after anyone stopped caring (hence the epistemic status)
That I spend too much time on Twitter, which has more discussion of demographic diversity than other kinds, as does much of the English-speaking world.
For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope.
Is there such a slope? It seems to me as though cultures and institutions can swing back and forth on this point; Donald Trump’s electoral success is a notable example. Throughout American history, different views have been cancel-worthy; is the Overton Window really narrower now than it was in the 1950s? (I’d be happy to read any arguments for this being a uniquely bad time; I don’t think it’s impossible that a slippery slope does exist, or that this is as bad as cancel culture has been in the modern era.)
It scares me though that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about “getting the right balance” without even mentioning the obvious possibility of a slippery slope.
If you have any concerns about specific moderation decisions or other elements of the way the Forum is managed, please let me know! I’d like to think that we’ve hosted a variety of threads on related topics while managing to maintain a better combination of civility and free speech than almost any other online space, but I’d be surprised if there weren’t ways for us to improve.
As for not mentioning the possibility: had I written for a few more hours, there might have been 50 or 60 bullet points in this piece, and I might have bounced between perspectives a dozen more times, with the phrase “slippery slope” appearing somewhere. As I said above, I chose a relatively arbitrary time to stop, share what I had with others, and then publish.
I’m open to the possibility that a slippery slope is almost universal when institutions and communities tackle these issues, but I also think that attention tends to be drawn to anecdotes that feed the “slippery slope” narrative, so I remain uncertain.
(For people curious about the topic, Ezra Klein’s podcast with Yascha Mounk includes an interesting argument that the Overton Window may have widened since 2000 or so.)
There were extensive discussions around this at https://www.greaterwrong.com/posts/PjfsbKrK5MnJDDoFr/have-epistemic-conditions-always-been-this-bad, including one about the 1950s. (Note that those discussions were from before the recent cluster of even more extreme cancellations, like David Shor and the utility worker who supposedly made a white power sign.)

ETA: See also this Atlantic article that just came out today, and John McWhorter’s tweet:

Whew! Because of the Atlantic article today, I am now getting another flood of missives from academics deeply afraid. Folks, I hear you but the volume outstrips my ability to write back. Please know I am reading all of them eventually, and they all make me think.
If you’re not sure whether EA can avoid sharing this fate, shouldn’t figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of “50 or 60 bullet points”? (Unless you know that others are already working on the problem, and it sure doesn’t sound like it.)
Thanks for linking to those discussions.

Having read through them, I’m still not convinced that today’s conditions are worse than those of other eras. It is very easy to find horrible stories of bad epistemics now, but is that because there are more such stories per capita, or because more information is being shared per capita than ever before?
(I should say, before I continue, that many of these stories horrify me — for example, the Yale Halloween incident, which happened the year after I graduated. I’m fighting against my own inclination to assume that things are worse than ever.)
Take John McWhorter’s article. Had a professor in the 1950s written a similar piece, what fraction of the academic population (which is, I assume, much larger today than it was then) might have sent messages to them about e.g. being forced to hide their views on one of that era’s many taboo subjects? What would answers to the survey in the article have looked like?
Or take the “Postcard from Pre-Totalitarian America” you referenced. It’s a chilling anecdote… but also seems wildly exaggerated in many places. Do those young academics actually all believe that America is the most evil country, or that the hijab is liberating? Is he certain that none of his students are cynically repeating mantras the same way he did? Do other professors from a similar background also think the U.S. is worse than the USSR was? Because this is one letter from one person, it’s impossible to tell.
Of course, it could be that things really were better then, but the lack of data from that period bothers me, given the natural human inclination to assume that one’s own time period is worse than prior time periods in various ways. (You can see this on Left Twitter all the time when today’s economic conditions are weighed against those of earlier eras.)
But whether this is the worst time in general isn’t as relevant as:
If you’re not sure whether EA can avoid sharing this fate, shouldn’t figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of “50 or 60 bullet points”?
Taking this question literally, there are a huge number of fates I’m not sure EA can avoid sharing, because nothing is certain. Among these fates, “devolving into cancel culture” seems less prominent than other failure conditions that I have also not made my top priority.
This is because my top priority at work is to write and edit things on behalf of other people. I sometimes think about EA cultural/community issues, but mostly because doing so might help me improve the projects I work on, as those are my primary responsibility. This Forum post happened in my free time and isn’t connected to my job, save that my job led me to read that Twitter thread in the first place and has informed some of my beliefs.
(For what it’s worth, if I had to choose a top issue that might lead EA to “fail”, I’d cite “low or stagnant growth,” which is something I think about a lot, inside and outside of work.)
There are people whose job descriptions include “looking for threats to EA and trying to plan against them.” Some of them are working on problems like the ones that concern you. For example, many aspects of 80K’s anonymous interview series get into questions about diversity and groupthink (among other relevant topics).
Of course, the interviews are scattered across many subjects, and many potentially great projects in this area haven’t been done. I’d be interested to see someone take on the “cancel culture” question in a more dedicated way, but I’d also like to see someone do this for movement growth, and that seems even more underworked to me.
I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the “cancel culture” direction. (The EA Munich thing seems like one of a few isolated incidents, and I don’t see a cancel-y trend in EA right now.)
I think the biggest reason I’m worried is that seemingly every non-conservative intellectual or cultural center has fallen prey to cancel culture, e.g., academia, journalism, publishing, museums/arts, tech companies, local governments in left-leaning areas, etc. There are stories about it happening in a crochet group, and I’ve personally seen it in action in my local parent groups. Doesn’t that give you a high enough base rate that you should think “I better assume EA is in serious danger too, unless I can understand why it happened to those places, and why the same mechanisms/dynamics don’t apply to EA”?
Your reasoning (from another comment) is “I’ve seen various incidents that seem worrying, but they don’t seem to form a pattern.” Well, if you only get seriously worried once there’s a clear pattern, that may well be too late to do anything about it! Remember that many of those intellectual/cultural centers were once filled with liberals who visibly supported free speech, free inquiry, etc., and many of them would have cared enough to try to do something about cancel culture once they saw a clear pattern of movement in that direction, but by then it must already have been too late.
For what it’s worth, if I had to choose a top issue that might lead EA to “fail”, I’d cite “low or stagnant growth,” which is something I think about a lot, inside and outside of work.
“Low or stagnant growth” is less worrying to me because that’s something you can always experiment or change course on, if you find yourself facing that problem. In other words you can keep trying until you get it right. With cancel culture though, if you don’t get it right the first time (i.e., you allow cancel culture to take over) then it seems very hard to recover.
I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the “cancel culture” direction.
Thanks for this information. It does make it more understandable why you’re personally not focusing on this problem. I still think it should be on or near the top of your mind too, though, especially as you think about and discuss related issues like this particular cancellation of Robin Hanson.
You’d expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn’t lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section worry that some people might feel alienated and unwelcome if Robin were a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.
But isn’t it basically impossible to build an intellectually diverse community out of people who are unwilling to be associated with people they find offensive or substantially disagree with? It seems really clear that if Speakers B and C avoid talking to you only because you associated with Speaker A, then they are following a strategy of generally not engaging with parties that espouse ideas they find offensive, and it is really hard to create any high level of diversity out of people who follow that strategy (since they will either conform or splinter).
That is why it’s so important to not give in to those people’s demands, because building a space where lots of interesting ideas are considered is incompatible with having lots of people who stop engaging with you whenever you believe anything they don’t like. I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.
Obviously this is oversimplified, but I think the general gist of the argument carries a lot of weight.
I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.
I’d be curious how many people you think are not willing to “tolerate real intellectual diversity”. I’m not sure if you are saying
“Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it’s worth the cost,” or
“Anyone who is upset by intellectual diversity isn’t someone we want to attract anyway, so losing them isn’t a real cost.”
(Presumably you are saying something between these two points, but I’m not sure where.)
No, what I am saying is that unless you want to also enforce conformity, you cannot have a large community of people with different viewpoints who also all believe that you shouldn’t associate with people they think are wrong. So the real choice is not between “having all the people who think you shouldn’t associate with people who think they are wrong” and “having all the weird intellectually independent people”, it is instead between “having an intellectually uniform and conformist slice of the people who don’t want to be associated with others they disagree with” and “having a quite intellectually diverse crowd of people who are tolerating dissenting opinions”, with the second possibly actually being substantially larger, though generally I don’t think size is the relevant constraint to look at here.
I think you’re unintentionally dodging both Aaron’s and Ben’s points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn’t the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. It really doesn’t sound like what you’re talking about.
A German survivor of sexual abuse is interested in EA Munich’s events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on “gentle silent rape” and find it viscerally unpleasant. They’ve seen other discussion spaces where ideas like Hanson’s were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don’t care about someone who would distance themselves from EA for this reason anyway (it’s not an actual ‘loss’). And I’m not sure which it is or (if the first) what percentages you would give.
The thing that I am saying is that in order to make space for someone who tries to enforce such norms, we would have to kick many other people out of the community and stop many others from joining. It is totally fine for people not to attend events that happen to hit on a topic they are sensitive to, but for someone to completely disengage from a community and avoid talking to anyone in it because a speaker at some event held opinions they were sensitive to, on a subject that wasn’t even the topic of the announced talk, is obviously going to exert substantial pressure on what kind of discourse is possible with them.
This doesn’t seem to fit nicely into the dichotomy you and Ben are proposing here, which just has two options:
1. They are uncommon
2. They are not valuable
I am proposing a third option which is:
3. They are common and potentially valuable on their own, but also they impose costs on others that outweigh the benefits of their participation, and that make it hard to build an intellectually diverse community out of people like that. And it’s really hard to integrate them into a discourse that might come to unintuitive conclusions if they systematically avoid engaging with any individuals that have expressed any ideas at some point in their public history that they are particularly sensitive to.
It seems to me that the right strategy to run if you are triggered by specific topics, is to simply avoid engaging with those topics (if you really have no way of overcoming your triggeredness, or if doing so is expensive), but it seems very rarely the right choice to avoid anyone who ever has said anything public about the topic that is triggering you! It seems obvious how that makes it hard for you to be part of an intellectually diverse community.
[EDIT: As Oli’s next response notes, I’m misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]
***
Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under Ben’s and my first option, since it’s just a non-numeric write-up of what Ben said:
Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it’s worth the cost
I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.
***
At that point my disagreement with you on this specific case becomes pretty factual; the number of sexual abuse survivors is large, my expected percentage of them that don’t want to engage with Robin Hanson is high, and the number of people in the community with on-the-record statements or behaviour that those people would find comparably or more unpleasant is small, so I’m generally willing to distance from the latter in order to be open to the former. That’s from a purely cold-blooded ‘maximise community output’ perspective, never mind the human element.
Other than that, I have a number of disagreements with things you wrote, and for brevity I’m not going to go through them all; you may assume by default that everything you think is obvious I do not think is obvious. But I think the crux of the disagreement is here:
it seems very rarely the right choice to avoid anyone who ever has said anything public about the topic that is triggering you
I disagree with the non-hyperbolic version of this, and think it significantly underestimates the extent to which someone repeatedly saying or doing public things that you find odious is a predictor of them saying or doing unpleasant things to you in person, in a fairly straightforward ‘leopards don’t change their spots’ way.
I can’t speak to the sexual abuse case directly, but if someone has a long history of making overtly racist statements I’m not likely to attend a small-group event that I know they will attend, because I put high probability that they will act in an overtly racist way towards me and I really can’t be bothered to deal with that. I’m definitely not bringing my children to that event. It’s not a matter of being ‘triggered’ per se, I just have better things to do with my evening than cutting some obnoxious racist down to size. But even then, I’m very privileged in a number of ways and so very comfortable defending my corner and arguing back if attacked; not everybody has (or should have) the ability and/or patience to do that.
There’s also a large second-order effect that communities which tolerate such behaviour are much more likely to contain other individuals who hold those views and merely haven’t put them in writing on the internet, which increases the probability of such an experience considerably. Avoidance of such places is the right default policy here, at an individual level at least.
No. How does my (3) match up to that option? The thing I am saying is not that we will lose 95% of the people; it is that we are going to lose a large fraction of people either way, and that a world with tons of people who follow the strategy of distancing themselves from anyone who says things they don’t like is a world where you both won’t have a lot of people and will have tons of polarization and internal conflict.
How is your summary at all compatible with what I said, given that I explicitly said:
with the second (the one where we select on tolerance) possibly actually being substantially larger
That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying.
I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic of “if you have lots of people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart”. You might doubt that such a general tendency exists, or you might doubt that the inference here is valid and that there are ways to keep such a community of people together either way, but in either case, please don’t claim that I am saying something I am pretty clearly not saying.
Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run, I had missed that. Your actual self-quote is an extremely weak version of this, since ‘this might possibly actually happen’ is not the same as explicitly saying ‘I think this will happen’. The latter certainly does not follow from the former ‘by necessity’.
Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.
That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object level question.
Your actual self-quote is an extremely weak version of this, since ‘this might possibly actually happen’ is not the same as explicitly saying ‘I think this will happen’. The latter certainly does not follow from the former ‘by necessity’.
Yeah, sorry, I do think the “by necessity” was too strong.
You’d expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn’t lead Speakers B and C to avoid talking to you
As an aside, if hosting Speaker A is a substantial personal risk to the people who need to decide whether to host Speaker A, I expect the decision process to be biased against hosting Speaker A (relative to an ideal EA-aligned decision process).
I agree with this.

Had EA Munich hosted Hanson and then been attacked by people using language similar to that of the critics in Hanson’s Twitter thread, I may well have written a post excoriating those people for being uncharitable. I would prefer if we maintained a strong norm of not creating personal risks for people who have to handle difficult questions about speech norms (though I acknowledge that views on which questions are “difficult” will vary, since different people find different things obviously acceptable/unacceptable).