Turning to the object level: I feel pretty torn here.
On the one hand, I agree the business with CARE was quite bad and share all the standard concerns about SJ discourse norms and cancel culture.
On the other hand, we’ve had quite a bit of anti-cancel-culture stuff on the Forum lately. There’s been much more of that than of pro-SJ/pro-DEI content, and it’s generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly.
I’m sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk. I think a lot of more SJ-sympathetic EAs already feel that the Forum is not a space for them – simply affirming that doesn’t seem to me to be terribly useful. Not giving ACE prior warning before publishing the post further cements an adversarial us-and-them dynamic I’m not very happy about.
I don’t really know how that cashes out as far as this post and posts like it are concerned. Biting one’s tongue about what does seem like problematic behaviour would hardly be ideal. But as I’ve said several times in the past, I do wish we could be having this discussion in a more productive and conciliatory way, which has less of a chance of ending in an acrimonious split.
I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor, but as an intuition pump imagine the following comment.
“On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem. On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I’m worried about the second-order effects of talking about this misconduct.”
I guess my concern is that it seems like our top priority should be saying true and important things, and we should err on the side of not criticising people for doing so.
More generally I am opposed to “Criticising people for doing bad-seeming thing X would put off people who are enthusiastic about thing X.”
Another take here is that if a group of people are sad that their views aren’t sufficiently represented on the EA forum, they should consider making better arguments for them. I don’t think we should try to ensure that the EA forum has proportionate amounts of pro-X and anti-X content for all X. (I think we should strive to evaluate content fairly; this involves not being more or less enthusiastic about content based on the popularity of the views it expresses (except for instrumental reasons, like “it’s more interesting to hear arguments you haven’t heard before”).)
EDIT: Also, I think your comment is much better described as meta level than object level, despite its first sentence.
Whilst I agree with you that there is some risk in the pattern of not criticising bad thing X because of concerns about second-order effects, I think you chose a really bad substitution for ‘X’ here, and as a result I can totally understand where Khorton’s response is coming from (although I think ‘campaigning against racism’ is also a mischaracterisation of X here).
Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.
Why is it important not to throw out nuance here? Because of Will’s original comment: there are downsides to being very critical, especially publicly, where we might cause more of a split or be unwelcoming. I agree with you that we shouldn’t be trying to appeal to everyone or take a balanced position on every issue, but I don’t think we should ignore the importance of creating a culture that is welcoming to all either. These things do not in principle need to be traded off against each other; we can have both (if we are skillful).
Despite you saying that you agree with the content of Will’s comment, I think you didn’t fully grok Will’s initial concern, because when you say:
”if a group of people are sad that their views aren’t sufficiently represented on the EA forum, they should consider making better arguments for them”
you are doing the thing (being unwelcoming).
More generally, I think our disagreement here probably comes down to something like this:
There’s a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome. As you say, if we’re skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.
But this comes at a cost. I personally feel much less excited about writing about certain topics because I’d have to be super careful about them. And most of the EAs I know, especially those who have some amount of authority among EAs, feel much more restricted than I do. I think that this makes EA noticeably worse, because it means that it’s much harder for these EAs to explain their thoughts on things.
And so I think it’s noticeably costly to criticise people for not being more careful and tactful. It’s worth it in some cases, but we should remember that it’s costly when we’re considering pushing people to be more careful and tactful.
I personally think that “you shouldn’t write criticisms of an org for doing X, even when the criticisms are accurate and X is bad, because criticising X has cultural connotations” is too far in the “restrict people’s ability to say true things, for the sake of making people feel welcome” direction.
(Some context here is that I wrote a Facebook post about ACE with similar content to this post last September.)
I don’t disagree with any of that. I acknowledge there is real cost in trying to make people feel welcome on top of the community service of speaking up about bad practice (leaving aside the issue of how bad what happened is exactly).
I just think there is also some cost, which you are undervaluing and not acknowledging here, on the other side of that trade-off. Maybe we disagree on the exchange rate between these (welcomingness and unfiltered/candid communication)?
I think that becoming more skillful at doing both well is an important skill for a community like ours to have more of. That’s ok if it’s not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will’s comment was doing just that, and I upvoted it as a result. (Not saying you disagree with the content of his comment, you said you agreed with it in fact, but in my view, demonstrated you didn’t fully grok it nevertheless).
I am not sure whether I think it’s a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren’t obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying. For example, when I was fairly new to the EA community in 2014, I felt really mad about the many EAs who dismissed the moral patienthood of animals for reasons I thought were bad, but EAs were so obviously my people that I stuck around nevertheless. If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.)
But I think that there are some analogous topics where it is indeed costly to alienate people. For example, I think it’s pretty worthwhile for me as a longtermist to be nice to people who prioritize animal welfare and global poverty, because I think that many people who prioritize those causes make EA much stronger. For different reasons, I think it’s worth putting some effort into not mocking religions or political views.
In cases like these, I mostly agree with “you need to figure out the exchange rate between welcomingness and unfiltered conversations”.
I think that becoming more skillful at doing both well is an important skill for a community like ours to have more of. That’s ok if it’s not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will’s comment was doing just that, and I upvoted it as a result.
I guess I expect the net result of Will’s comment was more to punish Hypatia than to push community norms in a healthy direction. If he wanted to just push norms without trying to harm someone who was basically just saying true and important things, I think he should have made a different top level post, and he also shouldn’t have made his other top level comment.
(Not saying you disagree with the content of his comment, you said you agreed with it in fact, but in my view, demonstrated you didn’t fully grok it nevertheless).
There’s a difference between understanding a consideration and thinking that it’s the dominant consideration in a particular situation :)
I bounce off posts like this. Not sure if you’d consider me net positive or not. :)
I do too, FWIW. I read this post and its comments because I’m considering donating to/through ACE, and I wanted to understand exactly what ACE did and what the context was. Reading through a sprawling, nearly 15k-word discussion mostly about social justice and discourse norms was not conducive to that goal.
Presumably knowing the basis of ACE’s evaluations is one of the most important things to know about ACE? And knowing to what degree social justice principles are part of that evaluation (and to what degree those principles conflict with evaluating cost-effectiveness) seems like a pretty important part of that.
Knowing the basis of ACE’s evaluations is of course essential to deciding whether to donate to/through them, and I’d be surprised if esantorella disagreed. It’s just that this post and discussion is not only or even mostly about that. In my view, it would have been a far more valuable post if it had focused more tightly on that serious issue and the evidence for and against it, left out altogether small issues like publishing and taking down bad blog posts, and put the general discourse-norms discussion in a separate post labelled appropriately.
Makes sense. I think the issues currently discussed feel like the best evidence we have, and do feel like pretty substantial evidence on this topic, but it doesn’t seem necessary to discuss that fully here.
I am glad to have you around, of course.
My claim is just that I doubt you thought that if the rate of posts like this was 50% lower, you would have been substantially more likely to get involved with EA; I’d be very interested to hear I was wrong about that.
I think that isn’t the right counterfactual since I got into EA circles despite having only minimal (and net negative) impressions of EA-related forums. So your claim is narrowly true, but if instead the counterfactual was if my first exposure to EA was the EA forum, then I think yes the prominence of this kind of post would have made me substantially less likely to engage.
But fundamentally if we’re running either of these counterfactuals I think we’re already leaving a bunch of value on the table, as expressed by EricHerboso’s post about false dilemmas.
I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying.
[...]
If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.
I would guess it depends quite a bit on these people’s total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off / getting a vibe that this community probably is “not for them”).
If we’re imagining people who’ve already had 10 or even 100 hours of total EA exposure, then I’m inclined to agree with your claim and sentiment. (Though I think there would still be exceptions, and I suspect I’m at least a bit more into “try hard to avoid people bouncing for reasons unrelated to actual goal misalignment” than you.)
I’m less sure for people who are super new to EA as a school of thought or community.
We don’t need to look at hypothetical cases to establish this. My memory of events 10 years ago is obviously hazy but I’m fairly sure that I had encountered both GiveWell’s website and Overcoming Bias years before I actually got into EA. At that time I didn’t understand what they were really about, and from skimming they didn’t clear my bar of “this seems worth engaging with”. I think Overcoming Bias seemed like some generic libertarian blog to me, and at the time I thought libertarians were deluded and callous; and for GiveWell I had landed on some in-the-weeds page on some specific intervention and I was like “whatever I’m not that interested in malaria [or whatever the page was about]”. Just two of the many links you open, glance at for a few seconds, and then never (well, in this case luckily not quite) come back to.
This case is obviously very different from what we’re discussing here. But I think it serves to reframe the discussion by illustrating that there are many different reasons why someone might bounce off EA, depending on that person’s properties, with the amount of prior exposure being a key one. I’m skeptical that any blanket statement of the type “it’s OK if people bounce for reason X” will do a good job of describing a good strategy for dealing with this issue.
I agree it’s good for a community to have an immune system that deters people who would hurt its main goals, EA included. But, and I hear you do care about calibrating on this too, we want to avoid false positives. Irving below seems like an example, and he said it better than I could: we’re already leaving lots of value on the table. I expect our disagreement is just empirical and about that, so happy to leave it here as it’s only tangentially relevant to the OP.
Aside: I don’t know about Will’s intentions; I just read his comment and your reply, and don’t think ‘he could have made a different comment’ is good evidence of his intentions. I’m going to assume you know much more about the situation/background than I do, but if not, I do think it’s important to give people the benefit of the doubt on the question of intentions.
[Meta: in case not obvious, I want to round off this thread, happy to chat in private sometime]
I appreciate you trying to find our true disagreement here.
I think you and Khorton are misinterpreting the analogy. Buck focused on a practice that is unequivocally bad precisely so that he can establish, to the satisfaction of everyone involved in this discussion, that Will’s reasoning applies only up to a point: if a practice is judged to be sufficiently harmful, it seems appropriate to have lots of posts condemning it, even if this has some undesirable side effects. Then the question becomes: how should those who regard “cancel culture” as very harmful indeed respond, given that others do not at all share this assessment, and that continuing to write about this topic risks causing a split in the community to which both groups of people belong?
(I enclose ‘cancel culture’ in scare quotes because I am hesitant to use a term that some object to as having inappropriate connotations. It would be nice to find an expression for the phenomenon in question which we are all happy to use.)
Sure, I do appreciate the point that Buck is making. I agree with it in fact (as the first part of my first sentence said). I just additionally found the particular X he substituted not a good one, for reasons separate from the main point he was making. I also think Buck and I are getting closer to our real disagreement on a sister branch.
I do think your question is good here, and decomposes the discussion into two disagreements:
1) was this an instance of ‘cancel culture’, and if so, how bad is it?
2) what is the risk of writing about this kind of thing (causing splits) vs. the risk of not doing so?
On 1) I feel, like Neel below, that for an evaluator, moving charities’ ratings is a serious thing which requires a high bar of scrutiny, whereas the other two concerns outlined (blog post and conference) seem far more minor. I think the OP would be far better if it focused only on that and the evidence for/against.
On 2) I think this is a discussion worth having, and that the answer is not 0 risk for any side.
EDIT to add: Sorry, I don’t think I responded properly/clearly enough to your main point. I get that Buck is conditioning on 1) above, and saying: if we agree it’s really bad, then what? I just think he was not very explicit about that. If Buck had said something like ‘I want to pick up on a minor point, and to do this I will need to condition on the world where we conclude that ACE did something unequivocally bad here...’ at the beginning, the first part of my objections wouldn’t have applied so much. Although I still think he should have chosen a different bad thing X.
(I’m writing these comments kind of quickly, sorry for sloppiness.)
With regard to
Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.
In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.
I would have no meta-level objection to a comment saying “I disagree that X is bad, I think it’s actually fine”.
I think the meta-level objection you raised (which I understood as: there may be costs of not criticising bad things because of worry about second-order effects) is totally fair and there is indeed some risk in this pattern (said this in the first line of my comment). This is not what I took issue with in your comment. I see you’ve responded to our main disagreement though, so I’ll respond on that branch.
No one is enthusiastic about sexual harassment, and actively campaigning against racism has nothing in common with sexual harassment.
Universal statements like this strike me as almost always wrong. Of course there are many similarities that seem relevant here, and a simple assertion that they are not doesn’t seem to help the discussion.
I would really quite strongly prefer to not have comments like this on the forum, so I downvoted it. I would have usually just left it at the downvote, but I think Khorton has in the past expressed a preference for having downvotes explained, so I erred on the side of transparency.
I appreciate the self-consistency of this sentence :)
Look who’s never heard of intersectionality
While I didn’t like Khorton’s original comment, this comment comes across as spiteful and mean, while contributing little or nothing of value. I strong-downvoted it.
Seems like others agreed with you. I meant it mostly seriously.
“On the other hand, we’ve had quite a bit of anti-cancel-culture stuff on the Forum lately. There’s been much more of that than of pro-SJ/pro-DEI content, and it’s generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly”
Perhaps. However, this post makes specific claims about ACE. And even though these claims have been discussed somewhat informally on Facebook, this post provides a far more solid writeup. So it does seem to be making a significantly new contribution to the discussion and not just rewarming leftovers.
It would have been better if Hypatia had emailed the organisation ahead of time. However, I believe ACE staff members might have already commented on some of these issues (correct me if I’m wrong). And it’s more of a good practice than a strict requirement – I totally understand the urge to just get something out there.
“I’m sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk”
On the contrary, now that this has been written up on the forum it gives people something to link to. So forum posts aren’t just read by people who regularly read the forum. In any case, this kind of high-quality write-up is unlikely to have a significant effect on alienating people compared to some of the lower-quality discussions on these topics that occur in person or on Facebook. So, from my perspective it doesn’t really make any sense to be focusing on this post. If you want to avoid a split in the movement, I’d like to encourage you to join the Effective Altruists for Political Tolerance Facebook group and contribute there.
I would also suggest worrying less about PR risks. People who want to attack EA can already go around shouting about ‘techno-capitalists’, ‘overwhelmingly white straight males’, ‘AI alarmists’, etc. If someone wants to find something negative, they’ll find something negative.
Perhaps. However, this post makes specific claims about ACE. And even though these claims have been discussed somewhat informally on Facebook, this post provides a far more solid writeup. So it does seem to be making a significantly new contribution to the discussion and not just rewarming leftovers.
My claim was not that this post didn’t contain new information, or that it was badly written – merely that it is part of a pattern that concerns me, and that more effort could be being made to mitigate the bad effects of this pattern.
One could imagine, for example, a post that contains similar content but is written with far more sympathy for what ACE and co. are trying to do here, even if the author disagrees (strongly) with its implementation. I think this post actually does better on this than many past posts on this topic, but taken as a whole we are still a long way from where I would like to be.
On the contrary, now that this has been written up on the forum it gives people something to link to. So forum posts aren’t just read by people who regularly read the forum.
I wasn’t saying they wouldn’t see it, I was saying they wouldn’t engage with it – that they will disagree with it silently, feel more alienated from the Forum, and move a little further away from the other side of EA than they were before. I think the anonymous comment below is quite strong evidence that I’m on the right track here.
If you want to avoid a split in the movement, I’d like to encourage you to join the Effective Altruists for Political Tolerance Facebook group and contribute there.
I’m honestly a bit flummoxed here. Why would contributing to a Facebook group explicitly aligned with one side of this dispute help avoid a split?
I’m honestly a bit flummoxed here. Why would contributing to a Facebook group explicitly aligned with one side of this dispute help avoid a split?
I set up the group, and while I have my own views on which groups are less tolerant/tolerated, I’m very keen for the group to do what it suggests in the title: bring people together, encourage cooperation/tolerance in all directions, etc. It is absolutely not ‘explicitly aligned with one side’.
(I have strong-downvoted your comment for making this claim without giving any basis for it. I’ll retract the downvote if you edit/moderate this remark; otherwise I’m fairly agnostic about the comment content.)
(I’m not sure how much the group admins want the group description waved around on the Forum, given that nobody has linked to it so far. I’ve tried to strike the right balance here but am open to cutting stuff if a group admin tells me they’d prefer something different.)
The group describes itself as a “group for EAs into getting on with conservatives and liberals alike, and who want EA itself to be more welcoming to people of all different political stripes”, and links to resources that are clearly in support of open discussion and against censoring true beliefs for the sake of avoiding offence. It even explicitly says controversial topics “are welcome”, as long as you “use stricter epistemic standards in proportion to how offensive [your claim] is”.
Even though it does not make any angry claims about cancel culture, I defend my claim that this group is clearly oriented towards the free-speech end of EA and away from the censor-opposing-views-to-protect-vulnerable-groups end.
I’m not saying the group is bad! Merely that I think, based on evidence, that my claim is reasonable. I also still don’t understand why joining this group would address these problems; I think explaining the model for the last thing might be a more effective way to change my mind, but it also might be too much of a tangent for this comment thread.
Maybe your sense of what you’re claiming and my sense of what you’re claiming are using different meanings of ‘cancel culture’. In your previous comment, you wrote
‘On the other hand, we’ve had quite a bit of anti-cancel-culture stuff on the Forum lately. There’s been much more of that than of pro-SJ/pro-DEI content, and it’s generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly’
So I’ve been assuming that you were referring to ‘pro-SJ/DEI’ and ‘anti-cancel-culture’ more or less antonymously. Yes, the group is against deplatforming (at least, without extreme epistemic/moral caution); no, it’s not against SJ/DEI.
Inasmuch as they’re different concepts, I don’t see why you’d think opposing cancel culture – which is basically a ‘pro-segregation’ culture – wouldn’t help prevent a split! The point is then not to exclude any cultural group, but to discourage segregation, hostility, and poor epistemics when discussing this stuff.
I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.
NB: I didn’t downvote this comment and would be interested to know why people did.
I’m confused: the bit you’re quoting is asking a question, not making a claim.
The embedded claim being objected to is that the group is “explicitly aligned with one side” (of this dispute).
Thanks! I missed that was disputed.
I checked in with the other two admins about our approx political positions, and the answers were:
radical centrist
centre left-ish
centre left-ish
We’re trying to find both a social justice and conservative admin to add some balance, but so far no-one’s come forward for either.
I’m honestly a bit flummoxed here. Why would contributing to a Facebook group explicitly aligned with one side of this dispute help avoid a split?
The group is still new, so it’s still unclear exactly how it’ll turn out. But I don’t think that’s a completely accurate way of characterising the group. I expect that there are two main strands of thought within the group – some see themselves as fighting against woke tendencies, whilst others are more focused on peace-making and want to avoid taking a side.
I do wish we could be having this discussion in a more productive and conciliatory way, which has less of a chance of ending in an acrimonious split.
At the risk of stating the obvious: emailing organizations (anonymously, if you want) is a pretty good way of raising concerns with them.
I’ve emailed a number of EA organizations (including ACE) with question/concerns, and generally find they are responsive.
And I’ve been on the receiving side of such emails as well, and am usually appreciative; I often hadn’t even considered that there could be some confusion or misinterpretation of what I said, and I’m grateful to people who point it out.
I think that this totally misses the point. The point of this post isn’t to inform ACE that some of the things they’ve done seem bad—they are totally aware that some people think this. It’s to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.
I guess I don’t know OP’s goals but yeah if their goal is to publicly shame ACE then publicly shaming ACE is a good way to accomplish that goal.
My point was a) sending a quick email to someone about concerns you have with their work often has a very high benefit-to-cost ratio, and b) despite this, I still regularly talk to people who have concerns about some organization but have not sent them an email.
I think those claims are relatively uncontroversial, but I can say more if you disagree.
The key part of running feedback by an org isn’t to inform the org of the criticism; it’s to hear their point of view, and see whether any events have been misrepresented (from their perspective). And, ideally, to give them a heads-up so they can post a response shortly after the criticism goes up.
I think private discussions are very important, but I don’t feel good about a world where they entirely substitute for this kind of public disagreement. I think past Forum controversies of this kind have often been quite valuable.
Yep, definitely don’t want people to swing too far in the opposite direction. Just commenting that “talk to people about your concerns with them” is a surprisingly underutilized approach, in my experience.
I talked to ACE (Jacy Reese/Anthis in particular) in 2015 about ACE dramatically overstating the effectiveness of leaflets. Jacy was extremely responsive in the call, and nothing changed until two years later when a dramatically more inflammatory article got wide distribution.
I’m sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk.
I’ve refrained from making certain posts/comments on EAF in part for these reasons. I think in the long run these outcomes will be very hard to avoid, given the vastly different epistemic approaches between the two sides, and e.g., “silence is violence”, but it could be that in the short/medium term it’s really important for EA to not become a major “public enemy” of the dominant ideology of our times.
ETA: If anyone disagrees with my long-run prediction (and it’s not because something happens that makes the issue moot, like AIs take over), I’d be interested to read a story/scenario in which these outcomes are avoided.
Is “social justice” ideology really the dominant ideology in our society now? My impression is that it’s only taken seriously among young, highly-educated people.
Agreed that it’s not dominant in society at-large, though I think it is dominant in a number of important institutions (esp. higher ed, the liberal nonprofit sphere, and journalism)
Those are the circles many of us exist in. So a more precise rephrasing might be “we want to stay in touch with the political culture of our peers beyond EA.”
This could be important for epistemic reasons. Antagonistic relationships make it hard to gather information when things are wrong internally.
Of course, PR-based deference is also a form of antagonistic relationship. What would a healthy yet independent relationship between EA and the social justice movement look like?
Maybe we’re just using the word “dominant” in different ways? I meant it in the sense of “most important, powerful, or influential”, and not something like “sincerely believed by the majority of people” which may be what you have in mind? (I don’t believe the latter is true yet.)
It makes sense that what is most important, powerful, or influential in national politics is still highly correlated with what most people in our society sincerely believe, due to secret ballot voting and the national scope, but I think in many other arenas, some arguably more important than current national politics (e.g. because they play an outsized role in the economy or in determining what future generations will believe), local concentrating of true believers and preference falsification have caused a divergence between the two senses of “dominant”.
I think this post is pretty damning of ACE. Are you saying OP shouldn’t have posted important information about how ACE is evaluating animal charities because there has been too much anti-SJ/DEI stuff on the forum lately?
I feel I have explained myself fairly well on this thread already, see for example here:
One could imagine, for example, a post that contains similar content but is written with far more sympathy for what ACE and co. are trying to do here, even if the author disagrees (strongly) with its implementation. I think this post actually does better on this than many past posts on this topic, but taken as a whole we are still a long way from where I would like to be.
Whatever information you want to convey, there are always a very wide range of ways to convey that information, which will vary substantially in their effects. With very controversial stuff like this, it is especially worth putting thought into how to convey that information in the manner that is best for the world.
I’ve actually been quite impressed with Hypatia’s behaviour on this point since the post went up, in terms of updating the post based on feedback and moderating its tone. I think my version of this post would try even harder to be nice and sympathetic to pro-SJ EAs than this, but I’m not very unhappy with the current version of the OP.
(The ensuing discussion has also brought to light several things that made me update in the direction of ACE’s behaviour being even worse than I thought, which makes me more sympathetic to the OP in its original form, though I stand by my original comments.)
The more substantial point that I’m trying to make is that the political balance of the EA Forum shouldn’t be a big factor in someone’s decision to publicize important information about a major charity evaluator, or probably even in how they frame the criticism. Many people read posts linked from the EA Forum who never read the comments or don’t visit the Forum often for other posts, i.e. they are not aware of the overall balance of political sympathies on the Forum. The tenor of the Forum as a whole is something that should be managed (though I wouldn’t advocate doing that through self-censorship) to make EA welcoming or for the health of the community, but it’s not that important compared to the quality of information accessible through the Forum, imo.
I’m a little offended at the suggestion that expressing ideas or important critiques of charities should in any way come second to diplomatic concerns about the entire Forum.
I found this post to be quite refreshing compared to the previous one criticizing Effective Altruism Munich for uninviting Robin Hanson to speak. I’m not against “cancel culture” when it’s cancelling speakers for particularly offensive statements they’ve made in the past (e.g., Robin Hanson in my opinion, but let’s not discuss Robin Hanson much further since that’s not the topic of this post). Sometimes though, cancelling happens in response to fairly innocuous statements, and it looks like that’s what ACE has done with the CARE incident.
Turning to the object level: I feel pretty torn here.
On the one hand, I agree the business with CARE was quite bad and share all the standard concerns about SJ discourse norms and cancel culture.
On the other hand, we’ve had quite a bit of anti-cancel-culture stuff on the Forum lately. There’s been much more of that than of pro-SJ/pro-DEI content, and it’s generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly.
I’m sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk. I think a lot of more SJ-sympathetic EAs already feel that the Forum is not a space for them – simply affirming that doesn’t seem to me to be terribly useful. Not giving ACE prior warning before publishing the post further cements an adversarial us-and-them dynamic I’m not very happy about.
I don’t really know how that cashes out as far as this post and posts like it are concerned. Biting one’s tongue about what does seem like problematic behaviour would hardly be ideal. But as I’ve said several times in the past, I do wish we could be having this discussion in a more productive and conciliatory way, which has less of a chance of ending in an acrimonious split.
I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor, but as an intuition pump imagine the following comment.
“On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem. On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I’m worried about the second-order effects of talking about this misconduct.”
I guess my concern is that it seems like our top priority should be saying true and important things, and we should err on the side of not criticising people for doing so.
More generally I am opposed to “Criticising people for doing bad-seeming thing X would put off people who are enthusiastic about thing X.”
Another take here is that if a group of people are sad that their views aren’t sufficiently represented on the EA forum, they should consider making better arguments for them. I don’t think we should try to ensure that the EA forum has proportionate amounts of pro-X and anti-X content for all X. (I think we should strive to evaluate content fairly; this involves not being more or less enthusiastic about content about views based on its popularity (except for instrumental reasons like “it’s more interesting to hear arguments you haven’t heard before).)
EDIT: Also, I think your comment is much better described as meta level than object level, despite its first sentence.
Whilst I agree with you that there is some risk in the pattern of not criticising bad thing X because of concerns about second-order effects, I think you chose a really bad substitution for ‘X’ here, and as a result can totally understand where Khorton’s response is coming from (although I think ‘campaigning against racism’ is also a mischaracterisation of X here).
Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.
Why is it important to not throw out nuance here? Because of Will’s original comment: there are downsides to being very critical, especially publicly, where we might cause more split or be unwelcoming. I agree with you that we shouldn’t be trying to appeal to everyone or take a balanced position on every issue, but I don’t think we should ignore the importance of creating a culture that is welcoming to all either. These things do not in principle need to be traded-off against each other, we can have both (if we are skillful).
Despite you saying that you agree with the content of Will’s comment, I think you didn’t fully grok Will’s initial concern, because when you say:
”if a group of people are sad that their views aren’t sufficiently represented on the EA forum, they should consider making better arguments for them”
you are doing the thing (being unwelcoming)
More generally, I think our disagreement here probably comes down to something like this:
There’s a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome. As you say, if we’re skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.
But this comes at a cost. I personally feel much less excited about writing about certain topics because I’d have to be super careful about them. And most of the EAs I know, especially those who have some amount of authority among EAs, feel much more restricted than I do. I think that this makes EA noticeably worse, because it means that it’s much harder for these EAs to explain their thoughts on things.
And so I think it’s noticeably costly to criticise people for not being more careful and tactful. It’s worth it in some cases, but we should remember that it’s costly when we’re considering pushing people to be more careful and tactful.
I personally think that “you shouldn’t write criticisms of an org for doing X, even when the criticisms are accurate and X is bad, because of criticising X has cultural connotations” is too far in the “restrict people’s ability to say true things, for the sake of making people feel welcome”.
(Some context here is that I wrote a Facebook post about ACE with similar content to this post last September.)
I don’t disagree with any of that. I acknowledge there is real cost in trying to make people feel welcome on top of the community service of speaking up about bad practice (leaving aside the issue of how bad what happened is exactly).
I just think there is also some cost, that you are undervaluing and not acknowledging here, in the other side of that trade-off. Maybe we disagree on the exchange rate between these (welcomingness and unfiltered/candid communication)?
I think that becoming more skillful at doing both well is an important skill for a community like ours to have more of. That’s ok if it’s not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will’s comment was doing just that, and I upvoted it as a result. (Not saying you disagree with the content of his comment, you said you agreed with it in fact, but in my view, demonstrated you didn’t fully grok it nevertheless).
I am not sure whether I think it’s a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren’t obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying. For example, when I was fairly new to the EA community in 2014, I felt really mad about the many EAs who dismissed the moral patienthood of animals for reasons I thought were bad, but EAs were so obviously my people that I stuck around nevertheless. If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.)
But I think that there are some analogous topics where it is indeed costly to alienate people. For example, I think it’s pretty worthwhile for me as a longtermist to be nice to people who prioritize animal welfare and global poverty, because I think that many people who prioritize those causes make EA much stronger. For different reasons, I think it’s worth putting some effort into not mocking religions or political views.
In cases like these, I mostly agree with “you need to figure out the exchange rate between welcomingness and unfiltered conversations”.
I guess I expect the net result of Will’s comment was more to punish Hypatia than to push community norms in a healthy direction. If he wanted to just push norms without trying to harm someone who was basically just saying true and important things, I think he should have made a different top level post, and he also shouldn’t have made his other top level comment.
There’s a difference between understanding a consideration and thinking that it’s the dominant consideration in a particular situation :)
I bounce off posts like this. Not sure if you’d consider me net positive or not. :)
I do too, FWIW. I read this post and its comments because I’m considering donating to/through ACE, and I wanted to understand exactly what ACE did and what the context was. Reading through a sprawling, nearly 15k-word discussion mostly about social justice and discourse norms was not conducive to that goal.
Presumably knowing the basis of ACE’s evaluations is one of the most important thing to know about ACE? And knowing to what degree social justice principles are part of that evaluation (and to what degree those principles conflict with evaluating cost-effectiveness) seems like a pretty important part of that.
Knowing the basis of ACE’s evaluations is of course essential to deciding whether to donate to/through them and I’d be surprised if esantorella disagreed. It’s just that this post and discussion is not only or even mostly about that. In my view, it would have been a far more valuable/better post if it were focused more tightly on that serious issue and the evidence for and against it, and left out altogether small issues like publishing and taking down bad blog posts, and the general discourse norms discussion was in a separate post labelled appropriately.
Makes sense. I think the current issues discussed feel like the best evidence we have, and do we feel like pretty substantial evidence on this topic, but it doesn’t seem necessary to discuss that fully here.
I am glad to have you around, of course.
My claim is just that I doubt you thought that if the rate of posts like this was 50% lower, you would have been substantially more likely to get involved with EA; I’d be very interested to hear I was wrong about that.
I think that isn’t the right counterfactual since I got into EA circles despite having only minimal (and net negative) impressions of EA-related forums. So your claim is narrowly true, but if instead the counterfactual was if my first exposure to EA was the EA forum, then I think yes the prominence of this kind of post would have made me substantially less likely to engage.
But fundamentally if we’re running either of these counterfactuals I think we’re already leaving a bunch of value on the table, as expressed by EricHerboso’s post about false dilemmas.
I would guess it depends quite a bit on these people’s total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off / getting a vibe that this community probably is “not for them”).
If we’re imagining people who’ve already had 10 or even 100 hours of total EA exposure, then I’m inclined to agree with your claim and sentiment. (Though I think there would still be exceptions, and I suspect I’m at least a bit more into “try hard to avoid people bouncing for reasons unrelated to actual goal misalignment” than you.)
I’m less sure for people who are super new to EA as a school of thought or community.
We don’t need to look at hypothetical cases to establish this. My memory of events 10 years ago is obviously hazy but I’m fairly sure that I had encountered both GiveWell’s website and Overcoming Bias years before I actually got into EA. At that time I didn’t understand what they were really about, and from skimming they didn’t clear my bar of “this seems worth engaging with”. I think Overcoming Bias seemed like some generic libertarian blog to me, and at the time I thought libertarians were deluded and callous; and for GiveWell I had landed on some in-the-weeds page on some specific intervention and I was like “whatever I’m not that interested in malaria [or whatever the page was about]”. Just two of the many links you open, glance at for a few seconds, and then never (well, in this case luckily not quite) come back to.
This case is obviously very different from what we’re discussing here. But I think it serves to reframe the discussion by illustrating that there are a number of different reasons for why someone might bounce from EA depending on a number of that person’s properties, with the amount of prior exposure being a key one. I’m skeptical that any blanket statement of type “it’s OK if people bounce for reason X” will do a good job at describing a good strategy for dealing with this issue.
I agree it’s good for a community to have an immune system that deters people who would hurt its main goals, EA included. But, and I hear you do care about calibrating on this too, we want to avoid false positives. Irving below seems like an example, and he said it better than I could: we’re already leaving lots of value on the table. I expect our disagreement is just empirical and about that, so happy to leave it here as it’s only tangentially relevant to the OP.
Aside: I don’t know about Will’s intentions, I just read his comment and your reply, and don’t think ‘he could have made a different comment’ is good evidence of his intentions. I’m going to assume you know much more about the situation/background than I do, but if not I do think it’s important to give people benefit of the doubt on the question of intentions.
[Meta: in case not obvious, I want to round off this thread, happy to chat in private sometime]
I appreciate you trying to find our true disagreement here.
I think you and Khorton are misinterpreting the analogy. Buck focused on a practice that is unequivocally bad precisely so that he can establish, to the satisfaction of everyone involved in this discussion, that Will’s reasoning applies only up to a point: if a practice is judged to be sufficiently harmful, it seems appropriate to have lots of posts condemning it, even if this has some undesirable side effects. Then the question becomes: how should those who regard “cancel culture” as very harmful indeed respond, given that others do not at all share this assessment, and that continuing to write about this topic risks causing a split in the community to which both groups of people belong?
(I enclose ‘cancel culture’ in scare quotes because I am hesitant to use a term that some object to as having inappropriate connotations. It would be nice to find an expression for the phenomenon in question which we are all happy to use.)
Sure, I do appreciate the point that Buck is bringing. I agree with it in fact (as the first part of my first sentence said). I just additionally found the particular X he substituted not a good one for separate reasons to the main point he was making. I also think the real disagreement with Buck and myself is getting closer to it on a sister branch.
I do think your question is good here, and decomposes the discussion into two disagreements:
1) was this an instance of ‘cancel culture’, if so how bad is it?
2) what is the risk of writing about this kind of thing (causing splits) vs. the risk of not?
On 1) I feel, like Neel below, that moving charities ratings for an evaluator is a serious thing which requires a high bar of scrutiny, whereas the other two concerns outlined (blogpost and conference) seem far more minor. I think the OP would be far better if only focused on that and evidence for/against.
On 2) I think this is a discussion worth having, and that the answer is not 0 risk for any side.
EDIT to add: sorry I think I didn’t respond properly/clearly enough to your main point. I get that Buck is conditioning on 1) above, and saying if we agree it’s really bad, then what. I just think that he was not very explicit about that. If Buck had said something like, ‘I want to pick up on a minor point, and to do this will need to condition on the world where we come to the conclusion that ACE did something unequivocally bad here...’ at the beginning, I don’t think the first part of my objections would have applied so much. EDIT to add: Although I still think he should have chosen a different bad thing X.
(I’m writing these comments kind of quickly, sorry for sloppiness.)
With regard to
In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.
I would have no meta-level objection to a comment saying “I disagree that X is bad, I think it’s actually fine”.
I think the meta-level objection you raised (which I understood as: there may be costs of not criticising bad things because of worry about second-order effects) is totally fair and there is indeed some risk in this pattern (said this in the first line of my comment). This is not what I took issue with in your comment. I see you’ve responded to our main disagreement though, so I’ll respond on that branch.
No one is enthusiastic about sexual harassment, and actively campaigning against racism has nothing in common with sexual harassment.
Universal statements like this strike me as almost always wrong. Of course there are many similarities that seem relevant here, and a simple assertion that they are not doesn’t seem to help the discussion.
I would really quite strongly prefer to not have comments like this on the forum, so I downvoted it. I would have usually just left it at the downvote, but i think Khorton has in the past expressed a preference for having downvotes explained, so I opted on the side of transparency.
I appreciate the self-consistency of this sentence :)
Look who’s never heard of intersectionality
While I didn’t like Khorton’s original comment, this comment comes across as spiteful and mean, while contributing little or nothing of value. I strong-downvoted it.
Seems like others agreed with you. I meant it mostly seriously.
“On the other hand, we’ve had quite a bit of anti-cancel-culture stuff on the Forum lately. There’s been much more of that than of pro-SJ/pro-DEI content, and it’s generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly”
Perhaps. However, this post makes specific claims about ACE. And even though these claims have been discussed somewhat informally on Facebook, this post provides a far more solid writeup. So it does seem to be making a signficantly new contribution to the discussion and not just rewarming leftovers.
It would have been better if Hypatia had emailed the organisation ahead of time. However, I believe ACE staff members might have already commented on some of these issues (correct me if I’m wrong). And it’s more of a good practise than something than a strict requirement—I totally understand the urge to just get something out of there.
“I’m sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk”
On the contrary, now that this has been written up on the forum it gives people something to link to. So forum posts aren’t just read by people who regularly read the forum. In any case, this kind of high quality write-up is unlikely to have a significnat effect on alienating people compared to some of the lower quality discussions on these topics that occur in person or on Facebook. So, from my perspective it doesn’t really make any sense to be focusing on this post. If you want to avoid a split in the movement, I’d like to encourage you to join the Effective Altruists for Political Tolerance Facebook group and contribute there.
I would also suggest worrying less about PR risks. People who want to attack EA can already go around shouting about ‘techno-capitalists’, ‘overwhelmingly white straight males’, ‘AI alarmists’, ect. If someone wants to find something negative, they’ll find something negative.
My claim was not that this post didn’t contain new information, or that it was badly written – merely that it is part of a pattern that concerns me, and that more effort could be being made to mitigate the bad effects of this pattern.
One could imagine, for example, a post that contains similar content but is written with far more sympathy for what ACE and co. are trying to do here, even if the author disagrees (strongly) with its implementation. I think this post actually does better on this than many past posts on this topic, but taken as a whole we are still a long way from where I would like to be.
I wasn’t saying they wouldn’t see it, I was saying they wouldn’t engage with it – that they will disagree with it silently, feel more alienated from the Forum, and move a little further away from the other side of EA than they were before. I think the anonymous comment below is quite strong evidence that I’m on the right track here.
I’m honestly a bit flummoxed here. Why would contributing to a Facebook group explicitly aligned with one side of this dispute help avoid a split?
I set up the group, and while I have my own views on which groups are less tolerant/tolerated I’m very keen for the group to do what it suggests in the title: bring people together/encourage cooperation/tolerance in all directions etc. It is absolutely not ‘explicitly aligned with one one side’ .
(I have strong downvoted your comment for making this claim without giving any basis for it. I’ll retract the downvote if you edit/moderate this remark, since otherwise I’m fairly agnostic about the comment content)
(I’m not sure how much the group admins want the group description waved around on the Forum, given that nobody has linked to it so far. I’ve tried to strike the right balance here but am open to cutting stuff if a group admin tells me they’d prefer something different.)
The group describes itself as a “group for EAs into getting on with conservatives and liberals alike, and who want EA itself to be more welcoming to people of all different political stripes”, and links to resources that are clearly in support of open discussion and against censoring true beliefs for the sake of avoiding offence. It even explicitly says controversial topics “are welcome”, as long as you “use stricter epistemic standards in proportion to how offensive [your claim] is”.
Even though it does not make any angry claims about cancel culture, I defend my claim that this group is clearly oriented towards the free-speech end of EA and away from the censor-opposing-views-to-protect-vulnerable-groups end.
I’m not saying the group is bad! Merely that I think, based on evidence, that my claim is reasonable. I also still don’t understand why joining this group would address these problems; I think explaining the model behind that last claim might be a more effective way to change my mind, but it also might be too much of a tangent for this comment thread.
Maybe you and I are using different meanings of ‘cancel culture’. In your previous comment, you wrote
So I’ve been assuming that you were referring to ‘pro-SJ/DEI’ and ‘anti-cancel-culture’ more or less antonymously. Yes, the group is against deplatforming (at least without extreme epistemic/moral caution); no, it’s not against SJ/DEI.
Inasmuch as they’re different concepts, I don’t see how you could think that opposing cancel culture (which is basically a pro-segregation culture) wouldn’t help prevent a split! The point is then not to exclude any cultural group, but to discourage segregation, hostility, and poor epistemics when discussing this stuff.
I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.
NB: I didn’t downvote this comment and would be interested to know why people did.
I’m confused: the bit you’re quoting is asking a question, not making a claim.
The embedded claim being objected to is that the group is “explicitly aligned with one side” (of this dispute).
Thanks! I missed that that was disputed.
I checked in with the other two admins about our approximate political positions, and the answers were:
radical centrist
centre left-ish
centre left-ish
We’re trying to find both a social justice and conservative admin to add some balance, but so far no-one’s come forward for either.
The group is still new, so it’s still unclear exactly how it’ll turn out. But I don’t think that’s a completely accurate way of characterising the group. I expect that there are two main strands of thought within it: some members see themselves as fighting against woke tendencies, whilst others are more focused on peace-making and want to avoid taking a side.
At the risk of stating the obvious: emailing organizations (anonymously, if you want) is a pretty good way of raising concerns with them.
I’ve emailed a number of EA organizations (including ACE) with question/concerns, and generally find they are responsive.
And I’ve been on the receiving side of such emails as well, and am usually appreciative; I often didn’t even consider that there could be some confusion or misinterpretation of what I said, and I’m grateful to people who point it out.
I think that this totally misses the point. The point of this post isn’t to inform ACE that some of the things they’ve done seem bad—they are totally aware that some people think this. It’s to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.
I guess I don’t know OP’s goals, but yeah, if their goal is to publicly shame ACE, then publicly shaming ACE is a good way to accomplish that goal.
My point was a) sending a quick email to someone about concerns you have with their work often has a very high benefit-to-cost ratio, and b) despite this, I still regularly talk to people who have concerns about some organization but have not sent them an email.
I think those claims are relatively uncontroversial, but I can say more if you disagree.
The key part of running feedback by an org isn’t to inform the org of the criticism; it’s to hear their point of view, see whether any events have been misrepresented from their perspective, and, ideally, give them a heads-up so they can respond shortly after the criticism goes up.
That seems correct, but doesn’t really defend Ben’s point, which is what I was criticizing.
I think private discussions are very important, but I don’t feel good about a world where they entirely substitute for this kind of public disagreement. I think past Forum controversies of this kind have often been quite valuable.
Yep, definitely don’t want people to swing too far in the opposite direction. Just commenting that “talk to people about your concerns with them” is a surprisingly underutilized approach, in my experience.
I talked to ACE (Jacy Reese/Anthis in particular) in 2015 about ACE dramatically overstating effectiveness of leaflets. Jacy was extremely responsive in the call, and nothing changed until two years later when a dramatically more inflammatory article got wide distribution.
I’ve refrained from making certain posts/comments on EAF in part for these reasons. I think in the long run these outcomes will be very hard to avoid, given the vastly different epistemic approaches between the two sides, and e.g., “silence is violence”, but it could be that in the short/medium term it’s really important for EA to not become a major “public enemy” of the dominant ideology of our times.
ETA: If anyone disagrees with my long-run prediction (and it’s not because something happens that makes the issue moot, like AIs take over), I’d be interested to read a story/scenario in which these outcomes are avoided.
Is “social justice” ideology really the dominant ideology in our society now? My impression is that it’s only taken seriously among young, highly-educated people.
Agreed that it’s not dominant in society at large, though I think it is dominant in a number of important institutions (esp. higher ed, the liberal nonprofit sphere, and journalism).
Those are the circles many of us exist in. So a more precise rephrasing might be “we want to stay in touch with the political culture of our peers beyond EA.”
This could be important for epistemic reasons. Antagonistic relationships make it hard to gather information when things are wrong internally.
Of course, PR-based deference is also a form of antagonistic relationship. What would a healthy yet independent relationship between EA and the social justice movement look like?
Cullen asked a similar question here recently. Progressives and the social justice movement are definitely not the same, but there’s some overlap.
Maybe we’re just using the word “dominant” in different ways? I meant it in the sense of “most important, powerful, or influential”, and not something like “sincerely believed by the majority of people” which may be what you have in mind? (I don’t believe the latter is true yet.)
I don’t think the former is true either (with respect to national politics).
It makes sense that what is most important, powerful, or influential in national politics is still highly correlated with what most people in our society sincerely believe, due to secret-ballot voting and the national scope. But in many other arenas, some arguably more important than current national politics (e.g. because they play an outsized role in the economy or in determining what future generations will believe), I think local concentrations of true believers and preference falsification have caused the two senses of “dominant” to diverge.
I think this post is pretty damning of ACE. Are you saying OP shouldn’t have posted important information about how ACE is evaluating animal charities because there has been too much anti-SJ/DEI stuff on the forum lately?
I feel I have explained myself fairly well on this thread already, see for example here:
Whatever information you want to convey, there are always a very wide range of ways to convey that information, which will vary substantially in their effects. With very controversial stuff like this, it is especially worth putting thought into how to convey that information in the manner that is best for the world.
I’ve actually been quite impressed with Hypatia’s behaviour on this point since the post went up, in terms of updating the post based on feedback and moderating its tone. I think my version of this post would try even harder to be nice and sympathetic to pro-SJ EAs than this, but I’m not very unhappy with the current version of the OP.
(The ensuing discussion has also brought to light several things that made me update in the direction of ACE’s behaviour being even worse than I thought, which makes me more sympathetic to the OP in its original form, though I stand by my original comments.)
The more substantial point that I’m trying to make is that the political balance of the EA Forum shouldn’t be a big factor in someone’s decision to publicize important information about a major charity evaluator, or probably even in how they frame the criticism. Many people read posts linked from the EA Forum who never read the comments, or who don’t visit the Forum often enough to be aware of the overall balance of political sympathies there. The tenor of the Forum as a whole is something that should be managed (though I wouldn’t advocate doing that through self-censorship) to make EA welcoming and to keep the community healthy, but it’s not that important compared to the quality of information accessible through the Forum, imo.
I’m a little offended at the suggestion that expressing ideas or important critiques of charities should in any way come second to diplomatic concerns about the entire Forum.
I found this post to be quite refreshing compared to the previous one criticizing Effective Altruism Munich for uninviting Robin Hanson to speak. I’m not against “cancel culture” when it’s cancelling speakers for particularly offensive statements they’ve made in the past (e.g., Robin Hanson in my opinion, but let’s not discuss Robin Hanson much further since that’s not the topic of this post). Sometimes though, cancelling happens in response to fairly innocuous statements, and it looks like that’s what ACE has done with the CARE incident.