Do you have any thoughts on this earlier comment of mine? In short, are you worried about EA developing a full-scale cancel culture similar to other places where SJ values currently predominate, like academia or MSM / (formerly) liberal journalism? (By that I mean a culture where many important policy-relevant issues either cannot be discussed, or the discussions must follow the prevailing “party line” in order for the speakers to not face serious negative consequences like career termination.) If you are worried, are you aware of any efforts to prevent this from happening? Or at least discussions around this among EA leaders?
I realize that EA Munich and other EA organizations face difficult trade-offs and believe that they are making the best choices possible given their values and the information they have access to, but people in places like academia must have thought the same when they started what would later turn out to be their first steps towards cancel culture. Do you think EA can avoid sharing the same eventual fate?
[Tangent:] Based on developments since we last engaged on the topic, Wei, I am significantly more worried about this than I was at the time. (I.e., I have updated in your direction.)
Of the scenarios you outline, (2) seems like a much more likely pattern than (1), but based on my knowledge of various leaders in EA and what they care about, I think it’s very unlikely that “full-scale cancel culture” (I’ll use “CC” from here) evolves within EA.
Some elements of my doubt:
Much of the EA population started out being involved in online rationalist culture, and those norms continue to hold strong influence within the community.
EA has at least some history of not taking opportunities to adopt popular opinions for the sake of growth:
Rather than leaning into political advocacy or media-friendly global development work, the movement has gone deeper into longtermism over the years.
80,000 Hours has mostly passed on opportunities to create career advice that would be more applicable to larger numbers of people.
Obviously, none of these are perfect analogies, but I think there’s a noteworthy pattern here.
The most prominent EA leaders whose opinions I have any personal knowledge of tend to be quite anti-CC.
EA has a strong British influence (rather than being wholly rooted in the United States) and solid bases in other cultures; this makes us a bit less vulnerable to shifts in one nation’s culture. Of course, the entire Western world is moving in a “cancel culture” direction to some degree, so this isn’t complete protection, but it still seems like a protective factor.
I’ve also been impressed by recent EA work I’ve seen come out of Brazil, Singapore, and China, which seem much less likely to be swept by parallel movements than Germany or Britain.
Your comments on this issue include the most upvoted comments on my post, on Cullen’s post, and on “Racial Demographics at Longtermist Organizations”. It seems like the balance of opinion is very firmly anti-CC. If I began to see downvoting brigades on those types of comments, I would become much more concerned.
Compared to all of the above, a single local group’s decision seems minor.
But I’m sure there are other reasons to worry. If anyone sees this and wants to create a counter-list (“elements of concern”?), I’d be very interested to read it.
(I’m occupied with some things so I’ll just address this point and maybe come back to others later.)
It seems like the balance of opinion is very firmly anti-CC.
That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:
I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of “EA politics”. If there’s a danger or trend of EA going in a CC direction, I should be among the last to know.
Until recently I have had very little interest in politics or even socializing. (I once wrote “And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.”) Again it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
I’m probably well within the top percentile of all EAs in terms of “cancel proofness”, because I have both an independent source of income and a non-zero amount of “intersectional currency” (e.g., I’m a POC and first-generation immigrant). I also have no official EA affiliations (which I deliberately maintained in part to be a more unbiased voice, but I had no idea that it would come in handy for this) and I don’t like to do talks/presentations, so there’s pretty much nothing about me that can be canceled.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (cf. “preference falsification”). That seems to already be the situation today.
Indeed, I also have direct evidence in the form of EAs contacting me privately (after seeing my earlier comments) to say that they’re worried about EA developing/joining CC, and telling me what they’ve seen to make them worried, and saying that they can’t talk publicly about it.
I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly
I agree with this. This seems like an opportune time for me to say in a public, easy-to-google place that I think cancel culture is a real thing, and very harmful.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (cf. “preference falsification”). That seems to already be the situation today.
It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.
That does seem important. I mostly don’t think about this issue because it’s not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue than I do, but it may be difficult for them to do so productively.
If someone convinced me to get more pessimistic about “cancel culture” then I’d definitely think about it more. I’d be interested in concrete forecasts if you have any. For example, what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?
Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of people who I’d want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like “Censure those who don’t censure others for not censuring others for problematic speech” and take that to its extreme, but the rest of the world will get along fine without them and it’s not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g. it doesn’t look like they will win elections).
in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.)
I don’t feel that way. I think that “exclude people who talk openly about the conditions under which we exclude people” is a deeply pernicious norm and I’m happy to keep blithely violating it. If a group excludes me for doing so, then I think it’s a good sign that the time had come to jump ship anyway. (Similarly if there was pressure for me to enforce a norm I disagreed with strongly.)
I’m generally supportive of pro-speech arguments and efforts and I was glad to see the Harper’s letter. If this is eventually considered cause for exclusion from some communities and institutions then I think enough people will be on the pro-speech side that it will be fine for all of us.
I generally try to state my mind if I believe it’s important, don’t talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can’t discuss in public then my intention is to discuss them.
I would only intend to join an internet discussion about “cancellation” in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).
To follow up on this: Paul and I had an offline conversation about it, but it kind of petered out before reaching a conclusion. I don’t recall all that was said, but I think a large part of my argument was that “jumping ship” or being forced off for ideological reasons was not “fine” when it happened historically (for example, to communists in Hollywood and conservatives in academia), but represented disasters (i.e., very large losses of influence and resources) for those causes. I’m not sure if this changed Paul’s mind.
I’m not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn’t currently seem like thinking or working on this issue should be a priority for me (even within EA other people seem to have clear comparative advantage over me). I would feel differently if this was an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.
It does feel like your estimates for the expected harms are higher than mine, which I’m happy enough to discuss, but I’m not sure there’s a big disagreement (and it would have to be quite big to change my bottom line).
I was trying to get at possible quantitative disagreements by asking things like “what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future?” I think I have a probability of perhaps 2-5% on “meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence.”
I’m always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that “think about it more” is cost-effective for me, but I’m more skeptical of that (I expect they’d instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.
For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and I think remain very popular) and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities that have similar liberal norms). Being happy to talk openly about “cancel culture” is part of that easy approach, and if that led to serious negative consequences then it would be a sign that the issue is much more severe than I currently believe and it’s more likely I should do something. In that case I do think it’s clear there is going to be a lot of damage, though again I think we differ a bit in that I’m more scared about the health of our institutions than people like me losing influence.
I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.
I think this is the crux of the issue. We have a recurring pattern: I interpret your comments (here, and on various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people’s minds and thereby make them less likely to work on the problem, so I push back on that. But maybe you were just trying to explain why you don’t want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.
I think from my perspective the ideal solution would be if in a similar future situation, you could make it clearer from the start that you do think it’s an important problem that more people should work on. So instead of “and lots of people talk about it already” which seems to suggest that enough people are working on it already, something like “I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere.”
Curious how things look from your perspective, or from a third-party perspective.
Why did it take someone like me to make the concern public?
I don’t think it did.
On this thread and others, many people expressed similar concerns, before and after you left your own comments. It’s not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and an infrequent Forum viewer before that) was “The Importance of Truth-Oriented Discussions in EA”.
While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being cancelled, but I don’t know either way.
I’ve also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they’d feel comfortable commenting about their views, though I can’t be sure.
I also have direct evidence in the form of EAs contacting me privately to say that they’re worried about EA developing/joining CC, and telling me what they’ve seen to make them worried, and saying that they can’t talk publicly about it.
I’ve gotten similar messages from people with a range of views. Some were concerned about CC, others about anti-SJ views. Most of them, whatever their views, claimed that people with views opposed to theirs dominated online discussion in a way that made it hard to publicly disagree.
My conclusion: people on both sides are afraid to discuss their views because taking any side exposes you to angry people on the other side...
...and because writing for an EA audience about any topic can be intimidating. I’ve had people ask me whether writing about climate change as a serious risk might damage their reputations within EA. Same goes for career choice. And for criticism of EA orgs. And other topics, even if they were completely nonpolitical and people were just worried about looking foolish. Will MacAskill had “literal anxiety dreams” when he wrote a post about longtermism.
As far as I can tell, comments around this issue on the Forum fall all over the spectrum and get upvoted in rough proportion to the fraction of people who make similar comments. I’m not sure whether similar dynamics hold on Facebook/Twitter/Discord, though.
*****
I have seen incidents in the community that worried me. But I haven’t seen a pattern of such incidents; they’ve been scattered over the past few years, and they all seem like poor decisions from individuals or orgs that didn’t cause major damage to the community. But I could have missed things, or been wrong about consequences; please take this as N=1.
Also: I’d be glad to post something in the EA Polls group I created on Facebook.
Because answers are linked to Facebook accounts, some people might hide their views, but at least it’s a decent barometer of what people are willing to say in public. I predict that if we ask people how concerned they are about cancel culture, a majority of respondents will express at least some concern. But I don’t know what wording you’d want around such a question.
the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public?
My guess is that your points explain a significant share of the effect, but I’d guess the following is also significant:
Expressing worries about how some external dynamic might affect the EA community isn’t often done on this Forum, perhaps because it’s less naturally “on topic” than discussion of e.g. EA cause areas. I think this applies to worries about so-called cancel culture, but also to e.g.:
How does US immigration policy affect the ability of US-based EA orgs to hire talent?
How do financial crises or booms affect the total amount of EA-aligned funds? (E.g. I think a significant share of Good Ventures’s capital might be in Facebook stocks?)
Both of these questions seem quite important and relevant, but I recall less discussion of them than I’d have expected at first glance, given their importance.
(I do think there was some post on how COVID affects fundraising prospects for nonprofits, which I couldn’t immediately find. But I think it’s somewhat telling that here the external event was from a standard EA cause area, and there generally was a lot of COVID content on the Forum.)
On the positive side, a recent attempt to bring cancel culture to EA was very resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.
That cancellation attempt was clearly a bridge too far. EA Forum is comparatively a bastion of free speech (relative to some EA Facebook groups I’ve observed and, as we’ve now seen, local EA events), and Scott Alexander clearly does not make a good initial target. I’m worried, however, that each “victory” by CC has a ratcheting effect on EA culture, whereas failed cancellations don’t really matter in the long run, as CC can always find softer targets to attack instead, until the formerly hard targets have been isolated and weakened.
Honestly I’m not sure what the solution is in the long run. I mean, academia is full of smart people, many of whom surely dislike CC as much as most of us and would push back against it if they could, yet academia is now the top example of cancel culture. What is something that we can do that they couldn’t, or didn’t think of?
I agree that that was definitely a step too far. But there are legitimate middle grounds that don’t have slippery slopes.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and there needs to be some sort of way that we can encourage more people to join the movement without them feeling like they are not in a safe space.
I think there is a lot of detail and complexity here and I don’t think that this comment is going to do it justice, but I want to signal that I’m open to dialog about these things.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
On the face of it, this seems like a bad idea to me. I don’t want “introductory” EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have very high epistemic standards. If people wouldn’t like the discourse norms in the central EA spaces, I don’t want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.
To say it another way, I think it is a mistake to have “advanced” and “introductory” EA spaces, at all.
I am intending to make a pretty strong claim here.
[One operationalization I generated, but want to think more about before I fully endorse it: “I would turn away billions of dollars of funding to EA causes, if that was purchased at the cost of ‘EA’s discourse norms are as good as those in academia.’”]
Some cruxes:
I think what is valuable about the EA movement is the quality of the epistemic discourse in the EA movement, and almost nothing else matters (and to the extent that other factors matter, the indifference curve heavily favors better epistemology). If I changed my mind about that, it would change my view about a lot of things, including the answer to this question.
I think a model by which people gradually “warm up” to “more advanced” discourse norms is false. I predict that people will mostly stay in their comfort zone, and people who like discussion at the “less advanced” level will prefer to stay at that level. If I were wrong about that, I would substantially reconsider my view.
Large numbers of people at the fringes of a movement tend to influence the direction of the movement, and significantly shape the flow of talent to the core of the movement. If I thought that you could have 90% of the people identifying as EAs have somewhat worse discourse norms than we have on this forum without meaningfully impacting the discourse or action of the people at the core of the movement, I think I might change my mind about this.
Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn’t have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.
I strongly believe that representation, equity, and inclusiveness is important in the EA movement. I believe it so strongly that I try to look at what people are saying in the safe spaces where they feel comfortable talking about EA norms that scare them away. I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces. I am not merely saying that they are “worried” about where EA is heading; I’m saying that right here, right now, they feel uncomfortable fully participating in generalized EA spaces.
You say that “If people wouldn’t like the discourse norms in the central EA spaces…I would prefer that they bounce off.” In principle, I think we agree on this. Casual demands that we are being alienating should not faze us. But there does exist a point at which I think we might agree that those demands are sufficiently strong, like the holocaust denial example. The question, then, is not one of kind, but of degree. The question turns on whether the harm that is caused by certain forms of speech outweighs the benefits accrued by discussing those things.
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn’t really apply.
Q2: You mentioned having similar standards to academia. If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA? Or do you mean only having similar standards to what academics discuss amongst each other, setting aside completely how universities deal with undergraduate students’ spaces.
I have significant cognitive dissonance here. I’m not at all certain about what I personally feel. But I do want to report that there are large numbers of people, in several disparate places, many of which I doubt interact between themselves in any significant way, who all keep saying in private that they do not feel safe here. I have seen people actively go through harm from EAs casually making the case for systemic racism not being real and I can report that it is not a minor harm.
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you’ll agree that these are questions of degree, not of kind. After seeing the level of harm that these kinds of speech acts cause, I think my position of moving that discourse away from introductory spaces is warranted. But I also strongly agree with traditional enlightenment ideals of open discussion, free speech, and that the best way to show an idea is wrong is to seriously discuss it. So I definitely don’t want to ban such speech everywhere. I just want there to be some way for us to have good epistemic standards and also benefit from EAs who don’t feel safe in the main EA Facebook groups.
To borrow a phrase from Nora Caplan-Bricker, they’re not demanding that EA spaces be happy places where they never have to read another word of dissent. Instead, they’re asking for a level of acceptance and ownership that other EAs already have. They just want to feel safe.
Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
I agree with your conclusion about this instance, but for very different reasons, and I don’t think it supports your wider point of view. It would be bad if EAs spent all their time discussing the Holocaust, because the Holocaust happened in the past, and so there is nothing we can possibly do to prevent it. As such, the discussion is likely to be a purely academic exercise that does not help improve the world.
It would be very different to discuss a currently occurring genocide. If EAs were considering investing resources in fighting the Uighur genocide, for example, it would be very valuable to hear contrary evidence. If, for example, we learnt that far fewer people were being killed than we thought, or that the CCP’s explanations about terrorism were correct, this would be useful information that would help us prioritize our work. Equally, it would be valuable to hear if we had actually under-estimated the death toll, for exactly the same reasons.
Similarly, Animal Rights EAs consider our use of factory farming to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic—even debate around subjects like ‘but do the victims (animals) have moral value?’
Or again, pro-life activists consider our use of abortion to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic—even debate around subjects like ‘but do the victims (fetuses) have moral value?’
It might be the case that people make a dedicated ‘Effective Liberation for Xinjiang’ group, and intend to discuss only methods there, not the fundamental premise. But if they started posting about the Uighurs in other EA groups, criticism of their project, including its fundamental premises, would be entirely legitimate.
I think this is true even if it made some hypothetical Uighur diaspora members of the group feel ‘unsafe’. People have a right to actual safety—clearly no-one should be beating each other up at EA events. But an unlimited right to ‘feel safe’, even when this can only be achieved by imposing strict (and contrary to EA) restrictions on others, is clearly tyrannical. If you feel literally unsafe when someone makes an argument on the internet, you have a serious problem, and it is not our responsibility (or even within our power) to accommodate this. You should feel unsafe while near cliff edges, or around strange men in dark alleys—not in a debate. Indeed, if feeling ‘unsafe’ is a trump card then I will simply claim that I feel unsafe when people discuss BLM positively, due to the (from my perspective) implied threat of riots.
The analogy here I think is clear. I think it is legitimate to say we will not discuss the Uighur genocide (or animal rights, or racism) in a given group because they are off-topic. What is not at all legitimate is to say that one side, but not the other, is forbidden.
Finally, I also think your strategy is potentially a bit dishonest. We should not hide the true nature of EA, whatever that is, from newcomers in an attempt to seduce them into the movement.
If you’re correct that the harms that come from open debate are only minor harms, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, in watching how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism caused them significant harm.
On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. I think that it would not be difficult to say to new EAs that “moderation in one space has different appropriateness rules than in some other space” without hiding the true nature of EA and/or being dishonest about it. This is relevant because one of the main EA Facebook groups is currently deciding how to implement moderation rules with regard to this stuff right now.
Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren’t obviously wrong to do so. So signaling the former would be nice.
Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don’t have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven’t thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying “I think factory farming is terrible but XYZ” instead of just “XYZ”.
First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.
[Everything that I say in this comment is tentative, and I may change my mind.]
Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.
If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I’m inclined to bite the bullet of allowing that sort of conversation.
The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn’t allow that kind of talk. Namely, that “the Holocaust happened, and Holocaust denial is false”.
Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.
I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to sweep out any discussion to the contrary.
If they are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.
This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates...
In the situation where EAs are making such arguments not out of honest truth-seeking, but as playing edge-lord / trying to get attention / etc., then I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)
But mostly, I would say if any people in an EA group were threatening violence, racially-motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in the cases where someone is politely advocating for violent-action down the line, eg the Marxist who has never personally threatened anyone, but is advocating for a violent revolution.)
...
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn’t really apply.
I think so. I expect that any rigid rule is going to have edge cases, that are bad enough that you should treat them differently. But I don’t think we’re on the same page about what the relevant scalar is.
If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA?
It depends entirely on what is meant by “certain forms”, but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as “racist”, because that is a convenient and unarguable way to attack those ideas.
I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren’t actually going to do anything), Eli-University would come down on them hard.
If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group are set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)
The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don’t know enough about the world to rule out discussion of that line of thinking entirely.
...
I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces.
I would love to hear more about the details there. In what ways do people not feel safe?
(Is it things like this comment?)
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Yeah. I want to know more about this. What kind of harm?
My default stance is something like, “look, we’re here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are ‘harmed’ by speech-acts, I’m sorry for you, but tough nuggets. I guess you shouldn’t participate in this discourse. ”
That said, if I had a better sense of what kind of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you’ll agree that these are questions of degree, not of kind.
Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.
...
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.
I’m tentatively suggesting that we should pay close to no attention to the possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:
I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, doesn’t that make us suspect whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGM seems to me to be very important in ensuring that we arrive at true conclusions.
I believe the methods of how we arrive at true conclusions don’t need to be Alastor Moody levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.
I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent to physical harm in some people. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name calling, no doxxing, no brigading. In the EAA FB group, we take as a given that animals are harmed and we should help them. We restrict debate on that there because it’s inappropriate to debate that point there. That doesn’t mean it can’t be debated elsewhere. To me, restricting the denial of racism (or the denial of genocide) is just an additional rule of this type. It doesn’t mean it can’t be discussed elsewhere. It just isn’t appropriate there.
In what ways do people not feel safe? (Is it things like this comment?) … I want to know more about this. What kind of harm?
No, it’s not things like this comment. We are in a forum where discussing this kind of thing is expected and appropriate.
I don’t feel like I should say anything that might inadvertently out some of the people that I have seen in private groups talking about these harms. Many of these EAs are not willing to speak out about this issue because they fear being berated for having these feelings. It’s not exactly what you’re asking for, but a few such people are already public about the effects from those harms. Maybe their words will help: https://sentientmedia.org/racism-in-animal-advocacy-and-effective-altruism-hinders-our-mission
“[T]aking action to eliminate racism is critical for improving the world, regardless of the ramifications for animal advocacy. But if the EA and animal advocacy communities fail to stand for (and not simply passively against) antiracism, we will also lose valuable perspectives that can only come from having different lived experiences—not just the perspectives of people of the global majority who are excluded, but the perspective of any talented person who wants to accomplish good for animals without supporting racist systems.
I know this is true because I have almost walked away from these communities myself, disquieted by the attitudes toward racism I found within them.”
“I think a model by which people gradually “warm up” to “more advanced” discourse norms is false.”
I don’t think that’s the main benefit of disallowing certain forms of speech at certain events. I’d imagine it’d be to avoid making EA events attractive and easily accessible for, say, white supremacists. I’d like to make it pretty costly for a white supremacist to be able to share their ideas at an EA event.
We’ve already seen white nationalists congregate in some EA-adjacent spaces. My impression is that (especially online) spaces that don’t moderate away or at least discourage such views will tend to attract them—it’s not the pattern of activity you’d see if white nationalists randomly bounced around places or if people organically arrived at those views. I think this is quite dangerous for epistemic norms: white nationalist/supremacist views are very incorrect, they deter large swaths of potential participants, and people with those views routinely argue in bad faith by hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It’s also, in my view, a fairly clear and present danger to EA, given that there are other communities with some white nationalist presence that are quite socially close to EA.
I don’t know anything about Leverage but I can think of another situation where someone involved in the rationalist community was exposed as having misogynistic and white supremacist anonymous online accounts. (They only had loose ties to the rationalist community, it came up another way, but it concerned me.)
I just upvoted this comment as I strongly agree with it, but also, it had −1 karma with 2 votes on it when I did so. I think it would be extremely helpful for folks who disagree with this, or otherwise want to downvote it, to talk about why they disagree or downvoted it.
I didn’t downvote it, though probably I should have. But it seems a stretch to say ‘one guy who works for a weird organization that is supposedly EA’ implies ‘congregation’. I think that would have to imply a large number of people. I would be very disappointed if I had a congregation of less than ten people.
JoshYou also ignores important hedging in the linked comment:
Bennett denies this connection; he says he was trying to make friends with these white nationalists in order to get information on them and white nationalism. I think it’s plausible that this is somewhat true.
So instead of saying
We’ve already seen white nationalists congregate in some EA-adjacent spaces.
It would be more fair to say
We’ve already seen one guy with some evidence he is a white nationalist (though he somewhat plausibly denies it) work for a weird organization that has some EA links.
Which is clearly much less worrying. There are lots of weird ideologies and a lot of weird people in California, who believe a lot of very incorrect things. I would be surprised if ‘white nationalists’ were really high up on the list of threats to EA, especially given how extremely left wing EA is and how low status they are. We probably have a lot more communists! Rather, I think the highlighting of ‘White Nationalists’ is being done for ideological reasons—i.e. to cast shade on more moderate right wing people by using a term that is practically a slur. I think the grandparent would not have made such a sloppy comment had it not been about the hated outgroup.
I also agree that it’s ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I’m not talking about conservatives, or the “IDW”, or people who don’t like the BLM movement or think racism is no big deal. I’d be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (on par with the views in Jonah Bennett’s leaked emails; I don’t think his defense is credible at all, by the way).
I don’t think it’s accurate to see white nationalists in online communities as just the right tail that develops organically from a wide distribution of political views. White nationalists are more organized than that and have their own social networks (precisely because they’re not just really conservative conservatives). Regular conservatives outnumber white nationalists by orders of magnitude in the general public, but I don’t think that implies that white nationalists will be virtually non-existent in a space just because the majority are left of center.
Describing members of Leverage as “white nationalists” strikes me as pretty extreme, to the level of dishonesty, and is not even backed up by the comment that was linked. I thought Buck’s initial comment was also pretty bad, and he did indeed correct his comment, which is a correction that I appreciate, and I feel like any comment that links to it should obviously also take into account the correction.
I have interfaced a lot with people at Leverage, and while I have many issues with the organization, saying that many white nationalists congregate there, and have congregated in the past, just strikes me as really unlikely.
Buck’s comment also says at the bottom:
Edited to add (Oct 08 2019): I wrote “which makes me think that it’s likely that Leverage at least for a while had a whole lot of really racist employees.” I think this was mistaken and I’m confused by why I wrote it. I endorse the claim “I think it’s plausible Leverage had like five really racist employees”. I feel pretty bad about this mistake and apologize to anyone harmed by it.
I also want us to separate “really racist” from “white nationalist” which are just really not the same term, and which appear to me to be conflated via the link above.
I also have other issues with the rest of the comment (namely that being constantly worried about communists or Nazis hiding everywhere, and generally bringing up Nazi comparisons in these discussions, tends to reliably derail things and make it harder to discuss these things well, since there are few conversational moves as mindkilling as accusing the other side of being Nazis or communists. It’s not that there are never Nazis or communists, but if you want to have a good conversation, it’s better to avoid Nazi or communist comparisons until you really have no other choice, or you can really really commit to handling the topic in an open-minded way.)
My description was based on Buck’s correction (I don’t have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don’t believe. I don’t mean to imply anything stronger than what Buck claimed about Leverage.
I invoked white nationalists not as a hypothetical representative of ideologies I don’t like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and they could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage) I think the neo-reactionary community and their adjacency to rationalist networks are a clear example.
Just to be clear, I don’t think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like “advocating for nations based on values historically associated with whiteness”, which would obviously include neoreaction, but would also presumably be a much more tenable position in discourse. So for now I am going to assume you mean something much more straightforwardly based on racial superiority, which also appears to be the Wikipedia definition.
I’ve debated with a number of neoreactionaries, and I’ve never seen them bring up much stuff about racial superiority. Usually just arguing against democracy and in favor of centralized control and various arguments derived from that, though I also don’t have a ton of datapoints. There is definitely a focus on the superiority of western culture in their writing and rhetoric, much of which is flawed and I am deeply opposed to many of the things I’ve seen at least some neoreactionaries propose, but my sense is that I wouldn’t characterize the philosophy fundamentally as white nationalist in the racist sense of the term. Though of course the few neoreactionaries that I have debated are probably selected in various ways that reduces the likelihood of having extreme opinions on these dimensions (though they are also the ones that are most likely to engage with EA, so I do think the sample should carry substantial weight).
Of course, some neoreactionaries are also going to be white nationalists, and being a neoreactionary will probably correlate with white nationalism at least a bit, but my guess is that at least the people adjacent to EA and Rationality that I’ve seen engage with that philosophy haven’t been very focused on white nationalism, and I’ve frequently seen them actively argue against it.
It seems to me that accusations of EA associations with white supremacy of various sorts come up often enough to be pretty concerning.
I also think the claims would be equally concerning if JoshYou had said “white supremacists” or “really racist people” instead of “white nationalists” in the original post, so I feel uncertain that Buck walking back his original comment actually lessens the degree to which we ought to be concerned.
I also have other issues with the rest of the comment (namely that being constantly worried about communists or Nazis hiding everywhere, and generally bringing up Nazi comparisons in these discussions, tends to reliably derail things and make it harder to discuss these things well, since there are few conversational moves as mindkilling as accusing the other side of being Nazis or communists. It’s not that there are never Nazis or communists, but if you want to have a good conversation, it’s better to avoid Nazi or communist comparisons until you really have no other choice, or you can really really commit to handling the topic in an open-minded way.)
I didn’t really see the Nazi comparisons (I guess saying white nationalist is sort of one, but I personally associate white nationalism as a phrase much more with individuals in the US than Nazis, though that may be biased by being American).
I guess a broad trend I feel like I’ve seen lately is people occasionally writing about witnessing racism in the EA community, with what seem like really genuine concerns, and then those concerns basically not being discussed (at least on the EA Forum) or being framed as shutting down conversation.
I don’t follow how what you’re saying is a response to what I was saying.
I think a model by which people gradually “warm up” to “more advanced” discourse norms is false.
I wasn’t saying “the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms.” I was saying that if I was mistaken about that “warming up effect”, it would cause me to reconsider my view here.
In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed “introductory spaces” like undergrad classes to become “safe spaces”, thinking they could continue serious open discussion in seminar rooms and journals, then those undergrads became graduate students and professors and demanded “safe spaces” everywhere they went. And how is anyone supposed to argue against “safety”, especially once its importance has been institutionalized (i.e., departments were built in part to enforce “safe spaces”, which can then easily extend their power beyond “introductory spaces”).
ETA: Jonathan Haidt has a book and an Atlantic article titled The Coddling of the American Mind detailing problems caused by the introduction of “safe spaces” in universities.
I don’t think this is pivotal to anyone, but just because I’m curious:
If we knew for a fact that a slippery slope wouldn’t occur, and the “safe space” was limited just to the EA Facebook group, and there was no risk of this EA forum ever becoming a “safe space”, would you then be okay with this demarcation of disallowing some types of discussion on the EA Facebook group, but allowing that discussion on the EA forum? Or do you strongly feel that EA should not ever disallow these types of discussion, even on the EA Facebook group?
(by “disallowing discussion”, I mean Hansonian level stuff, not obviously improper things like direct threats or doxxing)
yet academia is now the top example of cancel culture
I’m a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don’t think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?
I have lots of friends in academia and follow academic blogs etc., and basically don’t hear any of them talking about cancel culture within that context. I did recently see a philosopher recently post a controversial paper and get backlash for it on Twitter, but then he seemed to basically shrug it off since people complaining on Twitter didn’t really affect him. This fits my general model that most of the cancel culture influence on academia comes from people outside academia trying to affect it, with varying success.
I don’t doubt that there are individual pockets with academia that are more cancely, but the rest of academia seems to me mostly unaffected by them.
I’m a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don’t think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?
Professors are already overwhelmingly leftists or left-leaning (almost all conservatives have been driven away or self-selected away), and now even left-leaning professors are being canceled or fearful of being canceled. See:
and this comment in the comments section of a NYT story about cancel culture among the students:
Having just graduated from the University of Minnesota last year, a very liberal college, I believe these examples don’t adequately show how far cancel culture has gone and what it truly is. The examples used of disassociating from obvious homophobes, or more classic bullying that teenage girls have always done to each other since the dawn of time is not new and not really cancel culture. The cancel culture that is truly new to my generation is the full blocking or shutting out of someone who simply has a different opinion than you. My experience in college was it morphed into a culture of fear for most. The fear of cancellation or punishment for voicing an opinion that the “group” disagreed with created a culture where most of us sat silent. My campus was not one of fruitful debate, but silent adherence to whatever the most “woke” person in the classroom decided was the correct thing to believe or think. This is not how things worked in the past, people used to be able to disagree, debate and sometimes feel offended because we are all looking to get closer to the truth on whatever topic it may be. Our problem with cancel culture is it snuffs out any debate, there is no longer room for dissent or nuance, the group can decide that your opinion isn’t worth hearing and—poof you’ve been canceled into oblivion. Whatever it’s worth I’d like to note I’m a liberal, voted for Obama and Hillary, those who participate in cancel culture aren’t liberals to me, they’ve hijacked the name.
About “I have lots of friends in academia and follow academic blogs etc., and basically don’t hear any of them talking about cancel culture within that context.” there could be a number of explanations aside from cancel culture not being that bad in academia. Maybe you could ask them directly about it?
Thanks. It looks to me that much of what’s being described at these links is about the atmosphere among the students at American universities, which then also starts affecting the professors there. That would explain my confusion, since a large fraction of my academic friends are European, so largely unaffected by these developments.
there could be a number of explanations aside from cancel culture not being that bad in academia.
I do hear them complain about various other things though, and I also have friends privately complaining about cancel culture in non-academic contexts, so I’d generally expect this to come up if it were an issue. But I could still ask, of course.
Do you think EA can avoid sharing the same eventual fate?
No, not even a chance. It is obviously so far gone now that there’s no point in objecting and we should work on building a new movement that avoids this failure mode, from scratch.
What are some specific things that make you believe this, outside the single decision by EA Munich referenced in this post? Regarding the end of my reply to Wei Dai, I’d be interested to see your list of “elements of concern” on this point.
There’s a number of things. Some are things that cannot be mentioned, others are just part of the “new normal” of social justice infiltrating everything.
Do you have any thoughts on this earlier comment of mine? In short, are you worried about EA developing a full-scale cancel culture similar to other places where SJ values currently predominate, like academia or MSM / (formerly) liberal journalism? (By that I mean a culture where many important policy-relevant issues either cannot be discussed, or the discussions must follow the prevailing “party line” in order for the speakers to not face serious negative consequences like career termination.) If you are worried, are you aware of any efforts to prevent this from happening? Or at least discussions around this among EA leaders?
I realize that EA Munich and other EA organizations face difficult trade-offs and believe that they are making the best choices possible given their values and the information they have access to, but people in places like academia must have thought the same when they started what would later turn out to be their first steps towards cancel culture. Do you think EA can avoid sharing the same eventual fate?
[Tangent:] Based on developments since we last engaged on the topic, Wei, I am significantly more worried about this than I was at the time. (I.e., I have updated in your direction.)
What made you update?
Of the scenarios you outline, (2) seems like a much more likely pattern than (1), but based on my knowledge of various leaders in EA and what they care about, I think it’s very unlikely that “full-scale cancel culture” (I’ll use “CC” from here) evolves within EA.
Some elements of my doubt:
Much of the EA population started out being involved in online rationalist culture, and those norms continue to hold strong influence within the community.
EA has at least some history of not taking opportunities to adopt popular opinions for the sake of growth:
Rather than leaning into political advocacy or media-friendly global development work, the movement has gone deeper into longtermism over the years.
CEA actively shrank the size of EA Global because they thought it would improve the quality of the event.
80,000 Hours has mostly passed on opportunities to create career advice that would be more applicable to larger numbers of people.
Obviously, none of these are perfect analogies, but I think there’s a noteworthy pattern here.
The most prominent EA leaders whose opinions I have any personal knowledge of tend to be quite anti-CC.
EA has a strong British influence (rather than being wholly rooted in the United States) and solid bases in other cultures; this makes us a bit less vulnerable to shifts in one nation’s culture. Of course, the entire Western world is moving in a “cancel culture” direction to some degree, so this isn’t complete protection, but it still seems like a protective factor.
I’ve also been impressed by recent EA work I’ve seen come out of Brazil, Singapore, and China, which seem much less likely to be swept by parallel movements than Germany or Britain.
Your comments on this issue include the most upvoted comments on my post, on Cullen’s post, and on “Racial Demographics at Longtermist Organizations”. It seems like the balance of opinion is very firmly anti-CC. If I began to see downvoting brigades on those types of comments, I would become much more concerned.
Compared to all of the above, a single local group’s decision seems minor.
But I’m sure there are other reasons to worry. If anyone sees this and wants to create a counter-list (“elements of concern”?), I’d be very interested to read it.
(I’m occupied with some things so I’ll just address this point and maybe come back to others later.)
It seems like the balance of opinion is very firmly anti-CC.
That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:
I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of “EA politics”. If there’s a danger or trend of EA going in a CC direction, I should be among the last to know.
Until recently I have had very little interest in politics or even socializing. (I once wrote “And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.”) Again it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
I’m probably well within the top percentile of all EAs in terms of “cancel proofness”, because I have both an independent source of income and a non-zero amount of “intersectional currency” (e.g., I’m a POC and first-generation immigrant). I also have no official EA affiliations (which I deliberately maintained in part to be a more unbiased voice, but I had no idea that it would come in handy for this) and I don’t like to do talks/presentations, so there’s pretty much nothing about me that can be canceled.
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn’t exist. (Maybe they won’t be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to the development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (cf. “preference falsification”). That seems to already be the situation today.
Indeed, I also have direct evidence in the form of EAs contacting me privately (after seeing my earlier comments) to say that they’re worried about EA developing/joining CC, and telling me what they’ve seen to make them worried, and saying that they can’t talk publicly about it.
I agree with this. This seems like an opportune time for me to say in a public, easy-to-google place that I think cancel culture is a real thing, and very harmful.
It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.
That does seem important. I mostly don’t think about this issue because it’s not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue than I do, but it may be difficult for them to do so productively.
If someone convinced me to get more pessimistic about “cancel culture” then I’d definitely think about it more. I’d be interested in concrete forecasts if you have any. For example, what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?
Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of people who I’d want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like “Censure those who don’t censure others for not censuring others for problematic speech” and take that to its extreme, but the rest of the world will get along fine without them and it’s not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g. it doesn’t look like they will win elections).
I don’t feel that way. I think that “exclude people who talk openly about the conditions under which we exclude people” is a deeply pernicious norm and I’m happy to keep blithely violating it. If a group excludes me for doing so, then I think it’s a good sign that the time had come to jump ship anyway. (Similarly if there was pressure for me to enforce a norm I disagreed with strongly.)
I’m generally supportive of pro-speech arguments and efforts and I was glad to see the Harper’s letter. If this is eventually considered cause for exclusion from some communities and institutions then I think enough people will be on the pro-speech side that it will be fine for all of us.
I generally try to state my mind if I believe it’s important, don’t talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can’t discuss in public then my intention is to discuss them.
I would only intend to join an internet discussion about “cancellation” in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).
To follow up on this: Paul and I had an offline conversation about it, but it kind of petered out before reaching a conclusion. I don’t recall all that was said, but a large part of my argument was that “jumping ship” or being forced off for ideological reasons was not “fine” when it happened historically (for example, to communists in Hollywood and conservatives in academia), but represented disasters (i.e., very large losses of influence and resources) for those causes. I’m not sure if this changed Paul’s mind.
I’m not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).
It doesn’t currently seem like thinking or working on this issue should be a priority for me (even within EA other people seem to have clear comparative advantage over me). I would feel differently if this was an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.
It does feel like your estimates for the expected harms are higher than mine, which I’m happy enough to discuss, but I’m not sure there’s a big disagreement (and it would have to be quite big to change my bottom line).
I was trying to get at possible quantitative disagreements by asking things like “what’s the probability that making pro-speech comments would itself be a significant political liability at some point in the future?” I think I have a probability of perhaps 2-5% on “meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence.”
I’m always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that “think about it more” is cost-effective for me, but I’m more skeptical of that (I expect they’d instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.
For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and I think remain very popular) and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities that have similar liberal norms). Being happy to talk openly about “cancel culture” is part of that easy approach, and if that led to serious negative consequences then it would be a sign that the issue is much more severe than I currently believe and it’s more likely I should do something. In that case I do think it’s clear there is going to be a lot of damage, though again I think we differ a bit in that I’m more scared about the health of our institutions than people like me losing influence.
I think this is the crux of the issue. We have a recurring pattern where I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people’s minds and thereby make them less likely to work on the problem, so I push back on that. But maybe you were just trying to explain why you don’t want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.
I think from my perspective the ideal solution would be if in a similar future situation, you could make it clearer from the start that you do think it’s an important problem that more people should work on. So instead of “and lots of people talk about it already” which seems to suggest that enough people are working on it already, something like “I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere.”
Curious how things look from your perspective, or a third party perspective.
so why did it take someone like me to make the concern public?
I don’t think it did.
On this thread and others, many people expressed similar concerns, before and after you left your own comments. It’s not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and an infrequent Forum viewer before that) was “The Importance of Truth-Oriented Discussions in EA”.
While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being cancelled, but I don’t know either way.
I’ve also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they’d feel comfortable commenting about their views, though I can’t be sure.
I’ve gotten similar messages from people with a range of views. Some were concerned about CC, others about anti-SJ views. Most of them, whatever their views, claimed that people with views opposed to theirs dominated online discussion in a way that made it hard to publicly disagree.
My conclusion: people on both sides are afraid to discuss their views because taking any side exposes you to angry people on the other side...
...and because writing for an EA audience about any topic can be intimidating. I’ve had people ask me whether writing about climate change as a serious risk might damage their reputations within EA. Same goes for career choice. And for criticism of EA orgs. And other topics, even if they were completely nonpolitical and people were just worried about looking foolish. Will MacAskill had “literal anxiety dreams” when he wrote a post about longtermism.
As far as I can tell, comments around this issue on the Forum fall all over the spectrum and get upvoted in rough proportion to the fraction of people who make similar comments. I’m not sure whether similar dynamics hold on Facebook/Twitter/Discord, though.
*****
I have seen incidents in the community that worried me. But I haven’t seen a pattern of such incidents; they’ve been scattered over the past few years, and they all seem like poor decisions from individuals or orgs that didn’t cause major damage to the community. But I could have missed things, or been wrong about consequences; please take this as N=1.
Also: I’d be glad to post something in the EA Polls group I created on Facebook.
Because answers are linked to Facebook accounts, some people might hide their views, but at least it’s a decent barometer of what people are willing to say in public. I predict that if we ask people how concerned they are about cancel culture, a majority of respondents will express at least some concern. But I don’t know what wording you’d want around such a question.
My guess is that your points explain a significant share of the effect, but I’d guess the following is also significant:
Expressing worries about how some external dynamic might affect the EA community isn’t often done on this Forum, perhaps because it’s less naturally “on topic” than discussion of e.g. EA cause areas. I think this applies to worries about so-called cancel culture, but also to e.g.:
How does US immigration policy affect the ability of US-based EA orgs to hire talent?
How do financial crises or booms affect the total amount of EA-aligned funds? (E.g. I think a significant share of Good Ventures’s capital might be in Facebook stocks?)
Both of these questions seem quite important and relevant, but I recall less discussion of them than I’d have expected at first glance, given their importance.
(I do think there was some post on how COVID affects fundraising prospects for nonprofits, which I couldn’t immediately find. But I think it’s somewhat telling that here the external event was from a standard EA cause area, and there generally was a lot of COVID content on the Forum.)
On the positive side, a recent attempt to bring cancel culture to EA was very resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.
That cancellation attempt was clearly a bridge too far. The EA Forum is comparatively a bastion of free speech (relative to some EA Facebook groups I’ve observed and, as we’ve now seen, local EA events), and Scott Alexander clearly does not make a good initial target. I’m worried, however, that each “victory” by CC has a ratcheting effect on EA culture, whereas failed cancellations don’t really matter in the long run, as CC can always find softer targets to attack instead, until the formerly hard targets have been isolated and weakened.
Honestly I’m not sure what the solution is in the long run. I mean academia is full of smart people many of whom surely dislike CC as much as most of us and would push back against it if they could, yet academia is now the top example of cancel culture. What is something that we can do that they couldn’t, or didn’t think of?
I agree that that was definitely a step too far. But there are legitimate middle grounds that don’t have slippery slopes.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and there needs to be some sort of way that we can encourage more people to join the movement without them feeling like they are not in a safe space.
I think there is a lot of detail and complexity here and I don’t think that this comment is going to do it justice, but I want to signal that I’m open to dialog about these things.
On the face of it, this seems like a bad idea to me. I don’t want “introductory” EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have very high epistemic standards. If people wouldn’t like the discourse norms in the central EA spaces, I don’t want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.
To say it another way, I think it is a mistake to have “advanced” and “introductory” EA spaces, at all.
I am intending to make a pretty strong claim here.
[One operationalization I generated, but want to think more about before I fully endorse it: “I would turn away billions of dollars of funding to EA causes, if that was purchased at the cost of ‘EA’s discourse norms are as good as those in academia.’”]
Some cruxes:
I think what is valuable about the EA movement is the quality of the epistemic discourse in the EA movement, and almost nothing else matters (and to the extent that other factors matter, the indifference curve heavily favors better epistemology). If I changed my mind about that, it would change my view about a lot of things, including the answer to this question.
I think a model by which people gradually “warm up” to “more advanced” discourse norms is false. I predict that people will mostly stay in their comfort zone, and people who like discussion at the “less advanced” level will prefer to stay at that level. If I were wrong about that, I would substantially reconsider my view.
Large numbers of people at the fringes of a movement tend to influence the direction of the movement, and significantly shape the flow of talent to the core of the movement. If I thought that you could have 90% of the people identifying as EAs have somewhat worse discourse norms than we have on this forum without meaningfully impacting the discourse or action of the people at the core of the movement, I think I might change my mind about this.
Surely there exists a line at which we agree in principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel-manned Holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the Holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn’t have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.
I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed as saying that we should lower it unthinkingly. But I do think that a counterbalancing force exists: being so open to discussion of any kind that we completely alienate a section of people who would otherwise be participating in this space.
I strongly believe that representation, equity, and inclusiveness is important in the EA movement. I believe it so strongly that I try to look at what people are saying in the safe spaces where they feel comfortable talking about EA norms that scare them away. I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces. I am not merely saying that they are “worried” about where EA is heading; I’m saying that right here, right now, they feel uncomfortable fully participating in generalized EA spaces.
You say that “If people wouldn’t like the discourse norms in the central EA spaces…I would prefer that they bounce off.” In principle, I think we agree on this. Casual demands that we are being alienating should not faze us. But there does exist a point at which I think we might agree that those demands are sufficiently strong, like the holocaust denial example. The question, then, is not one of kind, but of degree. The question turns on whether the harm that is caused by certain forms of speech outweighs the benefits accrued by discussing those things.
Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn’t really apply.
Q2: You mentioned having similar standards to academia. If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA? Or do you mean only having similar standards to what academics discuss amongst each other, setting aside completely how universities deal with undergraduate students’ spaces.
I have significant cognitive dissonance here. I’m not at all certain about what I personally feel. But I do want to report that there are large numbers of people, in several disparate places, many of which I doubt interact between themselves in any significant way, who all keep saying in private that they do not feel safe here. I have seen people actively go through harm from EAs casually making the case for systemic racism not being real and I can report that it is not a minor harm.
I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.
Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you’ll agree that these are questions of degree, not of kind. After seeing the level of harm that these kinds of speech acts cause, I think my position of moving that discourse away from introductory spaces is warranted. But I also strongly agree with traditional enlightenment ideals of open discussion, free speech, and that the best way to show an idea is wrong is to seriously discuss it. So I definitely don’t want to ban such speech everywhere. I just want there to be some way for us to have good epistemic standards and also benefit from EAs who don’t feel safe in the main EA Facebook groups.
To borrow a phrase from Nora Caplan-Bricker, they’re not demanding that EA spaces be happy places where they never have to read another word of dissent. Instead, they’re asking for a level of acceptance and ownership that other EAs already have. They just want to feel safe.
I agree with your conclusion about this instance, but for very different reasons, and I don’t think it supports your wider point of view. It would be bad if EAs spent all their time discussing the Holocaust, because the Holocaust happened in the past, and so there is nothing we can possibly do to prevent it. As such, the discussion is likely to be a purely academic exercise that does not help improve the world.
It would be very different to discuss a currently occurring genocide. If EAs were considering investing resources in fighting the Uighur genocide, for example, it would be very valuable to hear contrary evidence. If, for example, we learnt that far fewer people were being killed than we thought, or that the CCP’s explanations about terrorism were correct, this would be useful information that would help us prioritize our work. Equally, it would be valuable to hear if we had actually under-estimated the death toll, for exactly the same reasons.
Similarly, animal rights EAs consider our use of factory farming to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic—even debate around subjects like ‘but do the victims (animals) have moral value?’
Or again, pro-life activists consider our use of abortion to be a modern holocaust, far larger than any prior. But debate about this is a perfectly acceptable EA topic—even debate around subjects like ‘but do the victims (fetuses) have moral value?’
It might be the case that people make a dedicated ‘Effective Liberation for Xinjiang’ group, and intend to discuss only methods there, not the fundamental premise. But if they started posting about the Uighurs in other EA groups, criticism of their project, including its fundamental premises, would be entirely legitimate.
I think this is true even if it made some hypothetical Uighur diaspora members of the group feel ‘unsafe’. People have a right to actual safety—clearly no-one should be beating each other up at EA events. But an unlimited right to ‘feel safe’, even when this can only be achieved by imposing strict (and contrary to EA) restrictions on others, is clearly tyrannical. If you feel literally unsafe when someone makes an argument on the internet, you have a serious problem, and it is not our responsibility (or even within our power) to accommodate this. You should feel unsafe while near cliff edges, or around strange men in dark alleys—not in a debate. Indeed, if feeling ‘unsafe’ is a trump card then I will simply claim that I feel unsafe when people discuss BLM positively, due to the (from my perspective) implied threat of riots.
The analogy here I think is clear. I think it is legitimate to say we will not discuss the Uighur genocide (or animal rights, or racism) in a given group because they are off-topic. What is not at all legitimate is to say that one side, but not the other, is forbidden.
Finally, I also think your strategy is potentially a bit dishonest. We should not hide the true nature of EA, whatever that is, from newcomers in an attempt to seduce them into the movement.
I think this comment says what I was getting at in my own reply, though more strongly.
If you’re correct that the harms that come from open debate are only minor harms, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, in watching how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism has caused them significant harm.
On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. I think that it would not be difficult to say to new EAs that “moderation in one space has different appropriateness rules than in some other space” without hiding the true nature of EA and/or being dishonest about it. This is relevant because one of the main EA Facebook groups is currently deciding how to implement moderation rules with regard to this stuff.
Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren’t obviously wrong to do so. So signaling the former would be nice.
Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don’t have. E.g. if someone tells you that they think eating animals is morally acceptable, you should probably just ignore them because most people who say that haven’t thought about the issue very much. But there are a small number of people who do make that statement and are still worth listening to, and they often intentionally signal it by saying “I think factory farming is terrible but XYZ” instead of just “XYZ”.
First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.
[Everything that I say in this comment is tentative, and I may change my mind.]
If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I’m inclined to bite the bullet of allowing that sort of conversation.
The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn’t allow that kind of talk: namely, that “the Holocaust happened, and Holocaust denial is false”.
Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.
I am not so confident that the Holocaust happened, and especially that it happened the way it is said to have happened, that I am willing to sweep out any discussion to the contrary.
If they are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.
If EAs are making such arguments not out of honest truth-seeking, but to play edge-lord / get attention / etc., then I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)
But mostly, I would say if any people in an EA group were threatening violence, racially-motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in the cases where someone is politely advocating for violent-action down the line, eg the Marxist who has never personally threatened anyone, but is advocating for a violent revolution.)
...
I think so. I expect that any rigid rule is going to have edge cases, that are bad enough that you should treat them differently. But I don’t think we’re on the same page about what the relevant scalar is.
It depends entirely on what is meant by “certain forms”, but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as “racist”, because that is a convenient and unarguable way to attack those ideas.
I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren’t actually going to do anything), Eli-University would come down on them hard.
If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group are set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)
The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don’t know enough about the world to rule out discussion of that line of thinking entirely.
...
I would love to hear more about the details there. In what ways do people not feel safe?
(Is it things like this comment?)
Yeah. I want to know more about this. What kind of harm?
My default stance is something like, “look, we’re here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are ‘harmed’ by speech-acts, I’m sorry for you, but tough nuggets. I guess you shouldn’t participate in this discourse. ”
That said, if I had a better sense of what kind of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.
Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.
...
I’m tentatively suggesting that we should pay close to no attention to possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:
I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, doesn’t that make us suspect whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGMs seems to me to be very important in ensuring that we arrive at true conclusions.
I believe the methods by which we arrive at true conclusions don’t need to involve Alastor Moody levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.
I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on Holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that may rise to the equivalent of physical harm in some people. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name calling, no doxxing, no brigading. In the EAA FB group, we take as a given that animals are harmed and we should help them. We restrict debate on that there because it’s inappropriate to debate that point there. That doesn’t mean it can’t be debated elsewhere. To me, restricting the denial of racism (or the denial of genocide) is just an additional rule of this type. It doesn’t mean it can’t be discussed elsewhere. It just isn’t appropriate there.
No, it’s not things like this comment. We are in a forum where discussing this kind of thing is expected and appropriate.
I don’t feel like I should say anything that might inadvertently out some of the people that I have seen in private groups talking about these harms. Many of these EAs are not willing to speak out about this issue because they fear being berated for having these feelings. It’s not exactly what you’re asking for, but a few such people are already public about the effects from those harms. Maybe their words will help: https://sentientmedia.org/racism-in-animal-advocacy-and-effective-altruism-hinders-our-mission
“I think a model by which people gradually ‘warm up’ to ‘more advanced’ discourse norms is false.”
I don’t think that’s the main benefit of disallowing certain forms of speech at certain events. I’d imagine it’d be to avoid making EA events attractive and easily accessible for, say, white supremacists. I’d like to make it pretty costly for a white supremacist to be able to share their ideas at an EA event.
We’ve already seen white nationalists congregate in some EA-adjacent spaces. My impression is that (especially online) spaces that don’t moderate away or at least discourage such views will tend to attract them—it’s not the pattern of activity you’d see if white nationalists randomly bounce around places or people organically arrive at those views. I think this is quite dangerous for epistemic norms, because white nationalist/supremacist views are very incorrect and deter large swaths of potential participants, and because people with those views routinely argue in bad faith, hiding how extreme their actual opinions are while surreptitiously promoting the extreme version. It’s also in my view a fairly clear and present danger to EA given that there are other communities with some white nationalist presence that are quite socially close to EA.
I don’t know anything about Leverage but I can think of another situation where someone involved in the rationalist community was exposed as having misogynistic and white supremacist anonymous online accounts. (They only had loose ties to the rationalist community, it came up another way, but it concerned me.)
I just upvoted this comment as I strongly agree with it, but also, it had −1 karma with 2 votes on it when I did so. I think it would be extremely helpful for folks who disagree with this, or otherwise want to downvote it, to talk about why they disagree or downvoted it.
I didn’t downvote it, though probably I should have. But it seems a stretch to say ‘one guy who works for a weird organization that is supposedly EA’ implies ‘congregation’. I think that would have to imply a large number of people. I would be very disappointed if I had a congregation of less than ten people.
JoshYou also ignores important hedging in the linked comment:
So instead of saying
It would be more fair to say
Which is clearly much less worrying. There are lots of weird ideologies and a lot of weird people in California, who believe a lot of very incorrect things. I would be surprised if ‘white nationalists’ were really high up on the list of threats to EA, especially given how extremely left wing EA is and how low status they are. We probably have a lot more communists! Rather, I think the highlighting of ‘White Nationalists’ is being done for ideological reasons—i.e. to cast shade on more moderate right wing people by using a term that is practically a slur. I think the grandparent would not have made such a sloppy comment had it not been about the hated outgroup.
I also agree that it’s ridiculous when left-wingers smear everyone on the right as Nazis, white nationalists, whatever. I’m not talking about conservatives, or the “IDW”, or people who don’t like the BLM movement or think racism is no big deal. I’d be quite happy for more right-of-center folks to join EA. I do mean literal white nationalists (like on par with the views in Jonah Bennett’s leaked emails. I don’t think his defense is credible at all, by the way).
I don’t think it’s accurate to see white nationalists in online communities as just the right tail that develops organically from a wide distribution of political views. White nationalists are more organized than that and have their own social networks (precisely because they’re not just really conservative conservatives). Regular conservatives outnumber white nationalists by orders of magnitude in the general public, but I don’t think that implies that white nationalists will be virtually non-existent in a space just because the majority are left of center.
Describing members of Leverage as “white nationalists” strikes me as pretty extreme, to the level of dishonesty, and is not even backed up by the comment that was linked. I thought Buck’s initial comment was also pretty bad, and he did indeed correct his comment, which is a correction that I appreciate, and I feel like any comment that links to it should obviously also take into account the correction.
I have interfaced a lot with people at Leverage, and while I have many issues with the organization, saying that many white nationalists congregate there, and have congregated in the past, just strikes me as really unlikely.
Buck’s comment also says at the bottom:
I also want us to separate “really racist” from “white nationalist” which are just really not the same term, and which appear to me to be conflated via the link above.
I also have other issues with the rest of the comment (namely, that being constantly worried about communists or Nazis hiding everywhere, and generally bringing up Nazi comparisons in these discussions, tends to reliably derail things and make it harder to discuss these things well, since there are few conversational moves as mindkilling as accusing the other side of being Nazis or communists. It’s not that there are never Nazis or communists, but if you want to have a good conversation, it’s better to avoid Nazi or communist comparisons until you really have no other choice, or can really, really commit to handling the topic in an open-minded way.)
My description was based on Buck’s correction (I don’t have any first-hand knowledge). I think a few white nationalists congregated at Leverage, not that most Leverage employees are white nationalists, which I don’t believe. I don’t mean to imply anything stronger than what Buck claimed about Leverage.
I invoked white nationalists not as a hypothetical representative of ideologies I don’t like but quite deliberately, because they literally exist in substantial numbers in EA-adjacent online spaces and they could view EA as fertile ground if the EA community had different moderation and discursive norms. (Edited to avoid potential collateral reputational damage) I think the neo-reactionary community and their adjacency to rationalist networks are a clear example.
Just to be clear, I don’t think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like “advocating for nations based on values historically associated with whiteness”, which would obviously include neoreaction, but would also presumably be a much more tenable position in discourse. So for now I am going to assume you mean something much more straightforwardly based on racial superiority, which also appears to be the Wikipedia definition.
I’ve debated with a number of neoreactionaries, and I’ve never seen them bring up much about racial superiority. Usually they are just arguing against democracy and in favor of centralized control, and making various arguments derived from that, though I also don’t have a ton of datapoints. There is definitely a focus on the superiority of Western culture in their writing and rhetoric, much of which is flawed, and I am deeply opposed to many of the things I’ve seen at least some neoreactionaries propose, but I wouldn’t characterize the philosophy as fundamentally white nationalist in the racist sense of the term. Of course, the few neoreactionaries that I have debated are probably selected in various ways that reduce the likelihood of extreme opinions on these dimensions (though they are also the ones most likely to engage with EA, so I do think the sample should carry substantial weight).
Of course, some neoreactionaries are also going to be white nationalists, and being a neoreactionary will probably correlate with white nationalism at least a bit, but my guess is that at least the people adjacent to EA and Rationality that I’ve seen engage with that philosophy haven’t been very focused on white nationalism, and I’ve frequently seen them actively argue against it.
Thanks for elaborating!
It seems to me that accusations of EA being associated with white supremacy of various sorts come up often enough to be pretty concerning.
I also think the claims would be equally concerning if JoshYou had said “white supremacists” or “really racist people” instead of “white nationalists” in the original post, so I’m uncertain that Buck walking back his original comment actually lessens how concerned we ought to be.
I didn’t really see the Nazi comparisons (I guess saying white nationalist is sort of one, but I personally associate white nationalism as a phrase much more with individuals in the US than Nazis, though that may be biased by being American).
Broadly, a trend I feel like I’ve seen lately is people occasionally writing about witnessing racism in the EA community, raising what seem like really genuine concerns, and then those concerns basically not being discussed (at least on the EA Forum), or being framed as attempts to shut down conversation.
I don’t follow how what you’re saying is a response to what I was saying.
I wasn’t saying “the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms.” I was saying that if I was mistaken about that “warming up effect”, it would cause me to reconsider my view here.
In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed “introductory spaces” like undergrad classes to become “safe spaces”, thinking they could continue serious open discussion in seminar rooms and journals, then those undergrads became graduate students and professors and demanded “safe spaces” everywhere they went. And how is anyone supposed to argue against “safety”, especially once its importance has been institutionalized (i.e., departments were built in part to enforce “safe spaces”, which can then easily extend their power beyond “introductory spaces”)?
ETA: Jonathan Haidt has a book (co-authored with Greg Lukianoff) and an Atlantic article, both titled The Coddling of the American Mind, detailing problems caused by the introduction of “safe spaces” in universities.
I don’t think this is pivotal to anyone, but just because I’m curious:
If we knew for a fact that a slippery slope wouldn’t occur, and the “safe space” was limited just to the EA Facebook group, and there was no risk of this EA forum ever becoming a “safe space”, would you then be okay with this demarcation of disallowing some types of discussion on the EA Facebook group, but allowing that discussion on the EA forum? Or do you strongly feel that EA should not ever disallow these types of discussion, even on the EA Facebook group?
(by “disallowing discussion”, I mean Hansonian level stuff, not obviously improper things like direct threats or doxxing)
yet academia is now the top example of cancel culture
I’m a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don’t think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?
I have lots of friends in academia and follow academic blogs etc., and basically don’t hear any of them talking about cancel culture within that context. I did recently see a philosopher post a controversial paper and get backlash for it on Twitter, but then he seemed to basically shrug it off since people complaining on Twitter didn’t really affect him. This fits my general model that most of the cancel culture influence on academia comes from people outside academia trying to affect it, with varying success.
I don’t doubt that there are individual pockets within academia that are more cancel-y, but the rest of academia seems to me mostly unaffected by them.
Professors are already overwhelmingly leftists or left-leaning (almost all conservatives have been driven away or self-selected away), and now even left-leaning professors are being canceled or fearful of being canceled. See:
https://www.theatlantic.com/ideas/archive/2020/09/academics-are-really-really-worried-about-their-freedom/615724/
https://www.greaterwrong.com/posts/pFAavCTW56iTsYkvR/ai-alignment-open-thread-october-2019#comment-Pbpb4JNszz3o22vv9
and this comment in the comments section of a NYT story about cancel culture among the students:
Having just graduated from the University of Minnesota last year, a very liberal college, I believe these examples don’t adequately show how far cancel culture has gone and what it truly is. The examples used of disassociating from obvious homophobes, or more classic bullying that teenage girls have always done to each other since the dawn of time is not new and not really cancel culture. The cancel culture that is truly new to my generation is the full blocking or shutting out of someone who simply has a different opinion than you. My experience in college was it morphed into a culture of fear for most. The fear of cancellation or punishment for voicing an opinion that the “group” disagreed with created a culture where most of us sat silent. My campus was not one of fruitful debate, but silent adherence to whatever the most “woke” person in the classroom decided was the correct thing to believe or think. This is not how things worked in the past, people used to be able to disagree, debate and sometimes feel offended because we are all looking to get closer to the truth on whatever topic it may be. Our problem with cancel culture is it snuffs out any debate, there is no longer room for dissent or nuance, the group can decide that your opinion isn’t worth hearing and—poof you’ve been canceled into oblivion. Whatever it’s worth I’d like to note I’m a liberal, voted for Obama and Hillary, those who participate in cancel culture aren’t liberals to me, they’ve hijacked the name.
Regarding “I have lots of friends in academia and follow academic blogs etc., and basically don’t hear any of them talking about cancel culture within that context”: there could be a number of explanations aside from cancel culture not being that bad in academia. Maybe you could ask them directly about it?
Thanks. It looks to me that much of what’s being described at these links is about the atmosphere among the students at American universities, which then also starts affecting the professors there. That would explain my confusion, since a large fraction of my academic friends are European, so largely unaffected by these developments.
there could be a number of explanations aside from cancel culture not being that bad in academia.
I do hear them complain about various other things though, and I also have friends privately complaining about cancel culture in non-academic contexts, so I’d generally expect this to come up if it were an issue. But I could still ask, of course.
Do you think EA can avoid sharing the same eventual fate?
No, not even a chance. It is obviously so far gone now that there’s no point in objecting, and we should work on building a new movement that avoids this failure mode, from scratch.
What are some specific things that make you believe this, outside the single decision by EA Munich referenced in this post? Regarding the end of my reply to Wei Dai, I’d be interested to see your list of “elements of concern” on this point.
One example:
https://www.reddit.com/r/EffectiveAltruism/comments/ijp16r/decolonizing_effective_altruism/
There are a number of things. Some are things that cannot be mentioned; others are just part of the “new normal” of social justice infiltrating everything.