It is just people trying to figure out the best way to improve the world. What could be controversial about that?
How best to improve the world is far from straightforward, and EA hardly has a bulletproof or incontrovertible position on what needs to be improved or how. Even if you can get broad agreement on something like “we should fix poverty”, there are dozens of questions about why poverty happens, how the root causes should or shouldn’t be addressed, what the most effective and most ethical ways of addressing it are (which are not necessarily the same), etc. I think the comments from S. E. Montgomery and bruce do an excellent job summing up a lot of the points where EA is open to (often well-deserved) critiques.
Even before the whole FTX thing, EAs were being vilified on social media and even in academia.
I think there’s some value in separating the “public” image and criticism of EA from academic ones, and I wanted to comment on the academic aspect in particular. While I’m sure that, like all movements, EA has been subject to vilification by academics, the academically published critiques of EA I’ve read tend to be fair and reasoned, primarily stemming from different approaches to doing good:
Institutional critiques that argue that EA is too focused on “band-aid” solutions that don’t address the root causes of the problems it targets, which would require institutional or radical change.
Methodological critiques: this includes critiques of utilitarianism as a method for cause and intervention prioritization in general, as well as specific criticisms of particular calculations, biases, or models.
More general critiques of what should be prioritized in doing good, e.g. utilitarian vs. deontological views.
Critiques that argue for the value of local (read: community-based, bottom-up) vs. global (read: top-down foreign aid and intervention) approaches.
Critiques of charity in general along the lines of the nonprofit industrial complex, extended to EA.
Critiques of charity in general as a means of redistributing wealth and deciding who gets access to which services, as opposed to democratic methods.
Anti-capitalist critiques that argue that EA is too tied up with capitalist ideals and methods to effect real change when it comes to poverty, which is considered a symptom of capitalism.
I don’t think all these critiques are excellent or as well supported as they could be, but I think it’s important to recognize that there are good reasons to find EA objectionable, controversial, or even, on net, harmful based on any of these, without cognitive dissonance, resentment, or psychological effects being involved.
I’m not a community builder, but I’d like to share some observations as someone who has been involved with EA since around 2016 and has only gotten heavily involved with the “EA community” over the past year. I’m not sure how helpful they’ll be but I hope it’s useful to you and others.
I strongly agree with Karthik’s comment that the focus on highly engaged EAs as the desired result of community building is counterproductive to learning. I think part of this comes down to the relative inexperience of both members and group leaders, particularly in university groups. There seems to be a lot of focus on convincing people to get involved in EA rather than facilitating their engagement with the ideas, and this seems to lead to a selection effect where only members who wholly buy into EA as a complete guide to doing good stick with the community, creating an intellectual echo chamber of sorts where people don’t feel very motivated to meaningfully engage with non-EA perspectives.
One reflection of this unwillingness to engage that I’ve come across recently is EAs online asking how best to defend point X or Y, or how best to respond to a certain criticism. The framing of these questions as “how do I convince someone that X is right/wrong”, “which arguments work best on people who believe Y”, or “how do I respond to criticism Z” makes it apparent to me that they are not interested in understanding the other perspective so much as “defeating” it, and that they are trying to defend ideas or points they are not convinced of themselves (as demonstrated by their inability to respond to the criticisms on their own, despite feeling the need to defend the point), presumably because it’s an EA talking point.
Another issue I’ve seen in similar online spaces is a sneer-y, morally superior attitude towards “fuzzies” and non-utilitarian approaches to doing good. This is hostile to non-EAs, making them less likely to be willing to engage, and it also demonstrates an unwillingness to engage on the part of the EAs themselves. I’m not sure how prevalent this kind of thing is or how it can be counteracted, but it may be worth thinking about.
While not as severe, I think it may be worth looking into discussion norms in this context as well. EAs as a community tend to highly value arguments that are polished, backed with evidence and their preferred modes of analysis (Bayesian analysis, utilitarian calculus, expected value, etc.), and presented in a very “neutral”, “unemotional” tone. There have been posts on this forum over the past few weeks both pointing this out and exemplifying it in the responses. While I do agree with criticisms of these discussion norms, I think it’s fairly easy to see that they present an obstacle to learning regardless of how one feels about them. If our intention is to learn from others, EAs need to be able to meaningfully engage with perspectives that are not presented in their preferred style, and to engage with content over style and presentation, particularly where criticisms or fundamental differences of opinion are concerned.
I’ve spoken to multiple community builders, both for university groups and local groups, who expressed frustration or disappointment at not being able to get members to “engage” with EA because members weren’t making career changes into direct work on EA causes. I think this is not only a bad approach to community building for the reasons stated above, but that it also creates a dynamic where people who could be doing good work and learning elsewhere are implicitly told that this kind of work is not valuable, thus both alienating people who are not able to find direct work and further implying that non-EA work is worthless. This is probably something that can be addressed both in community-building best practices and by tweaking any existing incentive structures for community builders to place less emphasis on highly engaged EAs as the desirable end result.