I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.
You’d expect having a wider range of speakers to increase intellectual diversity – but only as long as hosting Speaker A doesn’t lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section worry that some people might feel alienated and unwelcome due to Robin’s presence as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.
A simple example of the kind of thing I’m thinking of (which I’m aware is too simplistic to represent reality in full, but does draw from the experiences of people I’ve met):
A German survivor of sexual abuse is interested in EA Munich’s events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on “gentle silent rape” and find it viscerally unpleasant. They’ve seen other discussion spaces where ideas like Hanson’s were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
There are many reasons someone might want to engage more or less with EA based on the types of views discussed within the community. Different types of deplatforming probably lead to different types of diversity being more or less present, and send different kinds of signals about effective altruism into the wider world.
For example, I’ve heard one person (who was generally anti-deplatforming) argue against ever promoting events that blend EA and religious thought, because (as best I understood it) they saw religious views as antithetical to things they thought were important about EA. This brings up questions like:
Will the promotion of religion-aligned EA events increase or decrease the positive impact of EA, on net?
There are lots of trade-offs that make this hard to figure out.
Is it okay for an individual EA group to decline to host a speaker if they discover that the speaker is an evangelical Christian who wrote some grisly thought experiments about how Hell might work, if it were real? Even if the event was on an unrelated topic?
This seems okay to me. Again, there are trade-offs, but I leave it to the group to navigate them. I might advise them one way or another if they asked me, but whatever their decision was, I’d assume they did what made the most sense to them, based partly on private information I couldn’t access.
As a movement, EA aims to have a lot of influence across a variety of fields, institutions, geographic regions, etc. This will probably work out better if we have a movement that is diverse in many ways. Entertaining many different ideas probably lets us make more intellectual progress. Being welcoming to people from many backgrounds gives us access to a wider range of ideas, while also making it easier for us to recruit more people*, spread our ideas to more places, etc. On the other hand, if we try to be welcoming by restricting discussion, we might lead the new people we reach to share their ideas less freely, slowing our intellectual progress. Getting the right balance seems difficult.
I could write much more along these themes, but I’ll end here, because I already feel like I’m starting to lose coherence.
*And of course, recruiting more people overall means you get an even wider range of ideas. Even if there aren’t ideas that only people from group X will have, every individual is a new mind with new thoughts.
> I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.
I find it interesting that you thought “diversity” was a good shorthand for “social justice”, whereas other EAs naturally interpreted it as “intellectual diversity” or at least thought there was significant ambiguity in that direction. Seems to say a lot about the current moment in EA...
> Getting the right balance seems difficult.
Well, maybe not, if some of the apparent options aren’t real options. For example, if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope. (Or to quickly run back to level ground while you still have some chance, as I’m starting to suspect that EA has already taken quite a few steps down the slope.)
It may be that in the end EA can’t fight (i.e., can’t win against) SJ-like dynamics, and that EA joining cancel culture is therefore more “effective” than EA getting canceled as a whole. If EA leaders have made an informed and well-considered decision about this, then fine – tell me and I’ll defer to them. (If that’s the case, I can appreciate that it would be politically impossible to publicly lay out all of their reasoning.) It scares me, though, that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about “getting the right balance” without even mentioning the obvious possibility of a slippery slope.
> I find it interesting that you thought “diversity” was a good shorthand for “social justice”, whereas other EAs naturally interpreted it as “intellectual diversity” or at least thought there was significant ambiguity in that direction. Seems to say a lot about the current moment in EA...
I don’t think it says much about the current moment in EA. It says a few things about me:
That I generated the initial draft for this post in the middle of the night, with no intention of publishing
That I decided to post it in a knowingly imperfect state rather than fiddling around with the language at the risk of never publishing, or publishing well after anyone stopped caring (hence the epistemic status)
That I spend too much time on Twitter, which has more discussion of demographic diversity than other kinds. Much of the English-speaking world also seems to be this way.
> For example, if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope.
Is there such a slope? It seems to me as though cultures and institutions can swing back and forth on this point; Donald Trump’s electoral success is a notable example. Throughout American history, different views have been cancel-worthy; is the Overton Window really narrower now than it was in the 1950s? (I’d be happy to read any arguments for this being a uniquely bad time; I don’t think it’s impossible that a slippery slope does exist, or that this is as bad as cancel culture has been in the modern era.)
> It scares me, though, that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about “getting the right balance” without even mentioning the obvious possibility of a slippery slope.
If you have any concerns about specific moderation decisions or other elements of the way the Forum is managed, please let me know! I’d like to think that we’ve hosted a variety of threads on related topics while managing to maintain a better combination of civility and free speech than almost any other online space, but I’d be surprised if there weren’t ways for us to improve.
As for not mentioning the possibility: had I written for a few more hours, there might have been 50 or 60 bullet points in this piece, and I might have bounced between perspectives a dozen more times, with the phrase “slippery slope” appearing somewhere. As I said above, I chose a relatively arbitrary time to stop, share what I had with others, and then publish.
I’m open to the possibility that a slippery slope is almost universal when institutions and communities tackle these issues, but I also think that attention tends to be drawn to anecdotes that feed the “slippery slope” narrative, so I remain uncertain.
(For people curious about the topic, Ezra Klein’s podcast with Yascha Mounk includes an interesting argument that the Overton Window may have widened since 2000 or so.)
There were extensive discussions around this at https://www.greaterwrong.com/posts/PjfsbKrK5MnJDDoFr/have-epistemic-conditions-always-been-this-bad, including one about the 1950s. (Note that those discussions were from before the recent cluster of even more extreme cancellations, like David Shor and the utility worker who supposedly made a white power sign.)
ETA: See also this Atlantic article that just came out today, and John McWhorter’s tweet:
> Whew! Because of the Atlantic article today, I am now getting another flood of missives from academics deeply afraid. Folks, I hear you but the volume outstrips my ability to write back. Please know I am reading all of them eventually, and they all make me think.
If you’re not sure whether EA can avoid sharing this fate, shouldn’t figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of “50 or 60 bullet points”? (Unless you know that others are already working on the problem, and it sure doesn’t sound like it.)
Thanks for linking to those discussions.
Having read through them, I’m still not convinced that today’s conditions are worse than those of other eras. It is very easy to find horrible stories of bad epistemics now, but is that because there are more such stories per capita, or because more information is being shared per capita than ever before?
(I should say, before I continue, that many of these stories horrify me – for example, the Yale Halloween incident, which happened the year after I graduated. I’m fighting against my own inclination to assume that things are worse than ever.)
Take John McWhorter’s article. Had a professor in the 1950s written a similar piece, what fraction of the academic population (which is, I assume, much larger today than it was then) might have sent messages to them about e.g. being forced to hide their views on one of that era’s many taboo subjects? What would answers to the survey in the article have looked like?
Or take the “Postcard from Pre-Totalitarian America” you referenced. It’s a chilling anecdote… but also seems wildly exaggerated in many places. Do those young academics actually all believe that America is the most evil country, or that the hijab is liberating? Is he certain that none of his students are cynically repeating mantras the same way he did? Do other professors from a similar background also think the U.S. is worse than the USSR was? Because this is one letter from one person, it’s impossible to tell.
Of course, it could be that things really were better then, but the lack of data from that period bothers me, given the natural human inclination to assume that one’s own time period is worse than prior time periods in various ways. (You can see this on Left Twitter all the time when today’s economic conditions are weighed against those of earlier eras.)
But whether this is the worst time in general isn’t as relevant as:
> If you’re not sure whether EA can avoid sharing this fate, shouldn’t figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of “50 or 60 bullet points”?
Taking this question literally, there are a huge number of fates I’m not sure EA can avoid sharing, because nothing is certain. Among these fates, “devolving into cancel culture” seems less prominent than other failure conditions that I have also not made my top priority.
This is because my top priority at work is to write and edit things on behalf of other people. I sometimes think about EA cultural/community issues, but mostly because doing so might help me improve the projects I work on, as those are my primary responsibility. This Forum post happened in my free time and isn’t connected to my job, save that my job led me to read that Twitter thread in the first place and has informed some of my beliefs.
(For what it’s worth, if I had to choose a top issue that might lead EA to “fail”, I’d cite “low or stagnant growth,” which is something I think about a lot, inside and outside of work.)
There are people whose job descriptions include “looking for threats to EA and trying to plan against them.” Some of them are working on problems like the ones that concern you. For example, many aspects of 80K’s anonymous interview series get into questions about diversity and groupthink (among other relevant topics).
Of course, the interviews are scattered across many subjects, and many potentially great projects in this area haven’t been done. I’d be interested to see someone take on the “cancel culture” question in a more dedicated way, but I’d also like to see someone do this for movement growth, and that seems even more underworked to me.
I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the “cancel culture” direction. (The EA Munich thing seems like one of a few isolated incidents, and I don’t see a cancel-y trend in EA right now.)
I think the biggest reason I’m worried is that seemingly every non-conservative intellectual or cultural center has fallen prey to cancel culture, e.g., academia, journalism, publishing, museums/arts, tech companies, local governments in left-leaning areas, etc. There are stories about it happening in a crochet group, and I’ve personally seen it in action in my local parent groups. Doesn’t that give you a high enough base rate that you should think “I better assume EA is in serious danger too, unless I can understand why it happened to those places, and why the same mechanisms/dynamics don’t apply to EA”?
Your reasoning (from another comment) is “I’ve seen various incidents that seem worrying, but they don’t seem to form a pattern.” Well, if you only get seriously worried once there’s a clear pattern, that may well be too late to do anything about it! Remember that many of those intellectual/cultural centers were once filled with liberals who visibly supported free speech, free inquiry, etc., and many of them would have cared enough to try to do something about cancel culture once they saw a clear pattern of movement in that direction, but by then it must have been too late already.
> For what it’s worth, if I had to choose a top issue that might lead EA to “fail”, I’d cite “low or stagnant growth,” which is something I think about a lot, inside and outside of work.
“Low or stagnant growth” is less worrying to me because that’s something you can always experiment or change course on, if you find yourself facing that problem. In other words you can keep trying until you get it right. With cancel culture though, if you don’t get it right the first time (i.e., you allow cancel culture to take over) then it seems very hard to recover.
> I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the “cancel culture” direction.
Thanks for this information. It does make it more understandable why you’re personally not focusing on this problem. I still think it should be on or near the top of your mind, though, especially as you think about and discuss related issues like this particular cancellation of Robin Hanson.
> You’d expect having a wider range of speakers to increase intellectual diversity – but only as long as hosting Speaker A doesn’t lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section worry that some people might feel alienated and unwelcome due to Robin’s presence as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.
But isn’t it basically impossible to build an intellectually diverse community out of people who are unwilling to be associated with people they find offensive or substantially disagree with? It seems really clear that if Speakers B and C avoid talking to you only because you associated with Speaker A, then they are following a strategy of generally not being willing to engage with parties that espouse ideas they find offensive, which makes it really hard to create any high level of diversity out of people who follow that strategy (since they will either conform or splinter).
That is why it’s so important not to give in to those people’s demands: building a space where lots of interesting ideas are considered is incompatible with having lots of people who stop engaging with you whenever you believe anything they don’t like. I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.
Obviously this is oversimplified, but I think the general gist of the argument carries a lot of weight.
> I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with, than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.
I’d be curious how many people you think are not willing to “tolerate real intellectual diversity”. I’m not sure if you are saying
“Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it’s worth the cost,” or
“Anyone who is upset by intellectual diversity isn’t someone we want to attract anyway, so losing them isn’t a real cost.”
(Presumably you are saying something between these two points, but I’m not sure where.)
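A small aside on the arithmetic implicit in the first option (my gloss, not something Ben stated, with $p$ and $m$ as labels I am introducing): if $p$ is the fraction of prospective members retained and $m$ is the multiplier on the value of the resulting discussion, the trade is worth it roughly when
$$ p \cdot m > 1. $$
The quoted numbers sit exactly at break-even, since $0.05 \times 20 = 1$ – presumably why the threshold is written as “>20x” rather than “20x”.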
No, what I am saying is that unless you want to also enforce conformity, you cannot have a large community of people with different viewpoints who also all believe that you shouldn’t associate with people they think are wrong. So the real choice is not between “having all the people who think you shouldn’t associate with people they think are wrong” and “having all the weird intellectually independent people”. It is instead between “having an intellectually uniform and conformist slice of the people who don’t want to be associated with others they disagree with” and “having a quite intellectually diverse crowd of people who are tolerating dissenting opinions”, with the second possibly actually being substantially larger, though generally I don’t think size is the relevant constraint to look at here.
I think you’re unintentionally dodging both Aaron’s and Ben’s points here, by focusing on the generic idea of intellectual diversity and ignoring the specifics of this case. It simply isn’t the case that disagreeing about *anything* can get you no-platformed/cancelled/whatever. Nobody seeks 100% agreement with every speaker at an event; for one thing that sounds like a very dull event to attend! But there are specific areas people are particularly sensitive to, this is one of them, and Aaron gave a stylised example of the kind of person we can lose here immediately after the section you quoted. It really doesn’t sound like what you’re talking about.
> A German survivor of sexual abuse is interested in EA Munich’s events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on “gentle silent rape” and find it viscerally unpleasant. They’ve seen other discussion spaces where ideas like Hanson’s were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
Like Ben, I understand you as either saying that this person is sufficiently uncommon that their loss is worth the more-valuable conversation, or that we don’t care about someone who would distance themselves from EA for this reason anyway (it’s not an actual “loss”). And I’m not sure which it is or (if the first) what percentages you would give.
The thing that I am saying is that in order to make space for someone who tries to enforce such norms, we would have to kick many other people out of the community, and stop many others from joining. It is totally fine for people to not attend events that just happen to hit on a topic they are sensitive to, but for someone to completely disengage from a community and avoid talking to anyone in that community because a speaker at some event held opinions they were sensitive to (opinions that weren’t even the topic of the announced talk) is obviously going to exert substantial pressure on what kind of discourse is possible with them.
This doesn’t seem to fit nicely into the dichotomy you and Ben are proposing here, which just has two options:
1. They are uncommon
2. They are not valuable
I am proposing a third option which is:
3. They are common and potentially valuable on their own, but they also impose costs on others that outweigh the benefits of their participation, and those costs make it hard to build an intellectually diverse community out of people like that. And it’s really hard to integrate them into a discourse that might come to unintuitive conclusions if they systematically avoid engaging with any individuals who have, at some point in their public history, expressed ideas they are particularly sensitive to.
It seems to me that the right strategy to run if you are triggered by specific topics is to simply avoid engaging with those topics (if you really have no way of overcoming your triggeredness, or if doing so is expensive), but it seems very rarely the right choice to avoid anyone who has ever said anything public about the topic that is triggering you! It seems obvious how that makes it hard for you to be part of an intellectually diverse community.
[EDIT: As Oli’s next response notes, I’m misinterpreting him here. His claim is that the movement would be overall larger in a world where we lose this group but correspondingly pick up others (like Robin, I assume), or at least that the direction of the effect on movement size is not obvious.]
***
Thanks for the response. Contrary to your claim that you are proposing a third option, I think your (3) cleanly falls under Ben’s and my first option, since it’s just a non-numeric write-up of what Ben said:
> Sure, we will lose 95% of the people we want to attract, but the resulting discussion will be >20x more valuable so it’s worth the cost
I assume you would give different percentages, like 30% and 2x or something, but the structure of your (3) appears identical.
***
At that point my disagreement with you on this specific case becomes pretty factual: the number of sexual abuse survivors is large; my expected percentage of them who don’t want to engage with Robin Hanson is high; the number of people in the community with on-the-record statements or behaviour comparably or more unpleasant to those people is small; and so I’m generally willing to distance from the latter in order to be open to the former. That’s from a purely cold-blooded “maximise community output” perspective, never mind the human element.
Other than that, I have a number of disagreements with things you wrote, and for brevity I’m not going to go through them all; you may assume by default that everything you think is obvious I do not think is obvious. But I think the crux of the disagreement is here:
> it seems very rarely the right choice to avoid anyone who has ever said anything public about the topic that is triggering you
I disagree with the non-hyperbolic version of this, and think it significantly underestimates the extent to which someone repeatedly saying or doing public things that you find odious is a predictor of them saying or doing unpleasant things to you in person, in a fairly straightforward “leopards don’t change their spots” way.
I can’t speak to the sexual abuse case directly, but if someone has a long history of making overtly racist statements, I’m not likely to attend a small-group event that I know they will attend, because I put high probability on their acting in an overtly racist way towards me, and I really can’t be bothered to deal with that. I’m definitely not bringing my children to that event. It’s not a matter of being “triggered” per se; I just have better things to do with my evening than cutting some obnoxious racist down to size. But even then, I’m very privileged in a number of ways and so very comfortable defending my corner and arguing back if attacked; not everybody has (or should have) the ability and/or patience to do that.
There’s also a large second-order effect: communities which tolerate such behaviour are much more likely to contain other individuals who hold those views and merely haven’t put them in writing on the internet, which increases the probability of such an experience considerably. Avoidance of such places is the right default policy here, at an individual level at least.
No. How does my (3) match up to that option? I am not saying that we will lose 95% of the people; I am saying that we are going to lose a large fraction of people either way, and that a world with tons of people who follow the strategy of distancing themselves from anyone who says things they don’t like is a world where you both won’t have a lot of people and will have tons of polarization and internal conflict.
How is your summary at all compatible with what I said, given that I explicitly said:
> with the second (the one where we select on tolerance) possibly actually being substantially larger
That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying.
I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic of “if you have lots of people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart”. You might doubt that such a general tendency exists, or you might doubt that the inference here is valid and that there are ways to keep such a community of people together either way, but in either case, please don’t claim that I am saying something I am pretty clearly not saying.
Thank you for explicitly saying that you think your proposed approach would lead to a larger movement size in the long run; I had missed that. Your actual self-quote is an extremely weak version of this, since “this might possibly actually happen” is not the same as explicitly saying “I think this will happen”. The latter certainly does not follow from the former “by necessity”.
Still, I could have reasonably inferred that you think the latter based on the rest of your commentary, and should at least have asked if that is in fact what you think, so I apologise for that and will edit my previous post to reflect the same.
That all said, I believe my previous post remains an adequate summary of why I disagree with you on the object-level question.
> Your actual self-quote is an extremely weak version of this, since “this might possibly actually happen” is not the same as explicitly saying “I think this will happen”. The latter certainly does not follow from the former “by necessity”.
Yeah, sorry, I do think the “by necessity” was too strong.
> You’d expect having a wider range of speakers to increase intellectual diversity – but only as long as hosting Speaker A doesn’t lead Speakers B and C to avoid talking to you
As an aside, if hosting Speaker A is a substantial personal risk to the people who need to decide whether to host Speaker A, I expect the decision process to be biased against hosting Speaker A (relative to an ideal EA-aligned decision process).
I agree with this.
Had EA Munich hosted Hanson and then been attacked by people using language similar to that of the critics in Hanson’s Twitter thread, I may well have written a post excoriating those people for being uncharitable. I would prefer it if we maintained a strong norm of not creating personal risks for people who have to handle difficult questions about speech norms (though I acknowledge that views on which questions are “difficult” will vary, since different people find different things obviously acceptable/unacceptable).