21 criticisms of EA I’m thinking about

Note — I’m writing this in a personal capacity, and am not representing the views of either of my employers (Rethink Priorities and EA Funds). This post was also not reviewed in advance by anyone.

I really like all the engagement with criticism of EA as part of the criticism and red-teaming contest, and I hope our movement becomes much stronger as a result. Here are some criticisms I’ve recently found myself thinking about.

Following Abraham’s style, I’m going to write these out in bullet points. While I’d love to have spent more time making this a longer post, you can tell from the fact that I’m rushing it out on the last day that this plan didn’t come together. So consider these more as starting points for conversation than as decisive critiques.

Also, apologies if someone else has already articulated these critiques—I’m happy to be pointed to resources and to edit my post to include them. I’d also be very excited for others to expand upon any of these points, and if there’s particular interest, I might personally expand on a given point at a later date.

Furthermore, apologies if any of these criticisms are wrongheaded or lack nuance—some nuance can’t be included given the format, but I’d be happy to remove or alter criticism I no longer stand by or link to discussion threads where more nuance is discussed.

Lastly I concede that these aren’t necessarily the most important criticisms of EA. These are criticisms I am listing with the intention of producing novelty[1] and articulating some things I’ve been thinking about rather than producing a prioritized list of the most important criticisms. In many cases, I think the correct response to a criticism I list here could be to ignore it and keep going.

With those caveats in mind, here it goes:

1.) I think criticizing EA is kinda hard and slippery. I agree that EA is a tower of assumptions, but I think it is pretty hard to attack this tower, as it can instead operate as a motte and bailey. A lot of my criticisms take the form of attacks on specific projects /​ tactics rather than on EA as a philosophy/​strategy/​approach. And usually these criticisms are met with “Yeah, but that doesn’t undermine EA [as a philosophy/​strategy/​approach] and we can just change X”. But then X doesn’t actually change. So I think the criticism is still important, assuming EA-with-X is indeed worse than EA-without-X or EA-with-different-X. It’s still important to criticize things as they are actually implemented. Though of course this isn’t necessarily EA’s fault, as driving change in institutions—especially more diffuse, leaderless institutions—is very hard.

2.) I think criticism of EA may be more discouraging than it is intended to be, and we don’t think about this enough. I really like this contest, its participatory spirit, and the associated scout mindset. EA is a unique and valuable approach, and being willing to take criticism is a big part of that. (Though actually changing may be harder, as mentioned.) But I think filling up the EA Forum with tons of criticism may not be a good experience for newcomers, and it may be particularly poorly timed given Will’s big “What We Owe the Future” media launch bringing EA into the public eye for the first time. As self-aggrandizing as it might appear, I should probably make it clear that I actually think the EA community is pretty great and dramatically better than every other movement I’ve been a part of. It’s human nature to see people focus on the criticisms, conclude that there are serious problems, and get demotivated, so perhaps we ought to balance that out with more positives (though of course I don’t think a contest for who can say the best things about EA is a good idea).

3.) I don’t think we do enough to optimize separately for EA newcomers and EA veterans. These two groups have different needs. This relates to my previous point—I think EA newcomers would like to see an EA Forum homepage full of awesome, ambitious direct work projects, while EA veterans probably want to see all the criticism and meta stuff. It’s hard to optimize for both on one site. I don’t think we need to split the EA Forum in two, but we should think more about this. I think this is also a big problem for EA meetups and local groups, and I’m unsure whether it has been solved well[2].

4.) The EA funnel doesn’t seem fully thought out yet. We seem pretty good at bringing in a bunch of new people. And we seem pretty good at empowering people who are able to make progress on our specific problems. But there’s a lot of bycatch in between those two groups and it’s not really clear what we should do with people who have learned about EA but aren’t ready to, e.g., become EA Funds grantees working on the cutting edge of biosecurity.

5.) I think the grantee experience doesn’t do a good job of capturing the needs of mid-career people. It’s great that people who can make progress on key EA issues can pretty easily get a grant to do that work. But it can be a rough experience for them if they are only funded for a year, have absolutely zero job security, are an independent contractor with zero benefits and limited advice on how to manage that, and get limited engagement/​mentorship on their work. This seems pretty hard for a young person and basically impossible for someone who also wants to support a family. I think things like SERI/​CHERI/​etc. are a great step in the right direction on a lot of these fronts, but ideally we should also build up more mentorship and management capacity in general and be able to transition people into more stable jobs doing EA work.

6.) We should think more about existential risks to the EA movement itself. I don’t think enough attention is paid to the fact that EA is a social movement like others and is prone to the same effects that make other movements less effective than they could be, or collapse entirely. I really like what the CEA Community Health team is doing and I think the EA movement may already have had some serious problems without them. I’d like to see more research to notice the skulls of other movements and see what we can do to try to proactively prevent them.

7.) EA movement building needs more measurement. I’m not privy to all the details of how EA movement building works, but it comes across to me as more of a “spray and pray” strategy than I’d like. While we have done some work here, I think we’ve still really underinvested in market research to test how our movement appeals to the public before running the movement out into the wild big-time. I also think we should do more to track how our current outreach efforts are working, measuring conversion rates, etc. It’s weird that EA has a reputation for being so evidence-based but doesn’t really take much of an evidence-based orientation to its own growth, as far as I can tell.

8.) I think we could use more intentional work on EA comms, especially on Twitter. The other points here notwithstanding, I think EA ideally should’ve had a proactive media strategy a lot earlier. The progress with Will’s book has been nothing short of phenomenal, and it’s great that CEA has made more progress here, but I’d love to see more of this. I also think that engagement on Twitter is still pretty underdeveloped and neglected (especially relative to the more nascent Progress Studies movement), as a lot of intellectuals seem to frequent Twitter and can be pretty moved by the content they regularly see there.

9.) I don’t think we’ve given enough care to the worry that EA advice may be demotivating. One time when I tried promoting 80,000 Hours on Twitter, I was met with criticism that 80K directs people “into hypercompetitive career paths where they will probably fail to get hired at all, and if they do get hired likely burn out in a few years”. This is probably uncharitable but contains a grain of truth. If you’re rejected from a bunch of EA jobs, there’s understandable frustration there around how you can best contribute and I don’t think we do a good enough job addressing that.

10.) I think we’ve de-emphasized earning to give too much. We went pretty hard on messaging that was interpreted as “EA has too much money it can’t spend and has zero funding gaps, so just do direct work and don’t bother donating”. This interpretation was uncharitable, but I think it was understandable how people arrived at it. I think earning to give is great: it’s something everyone can do and contribute a genuine ton—even someone working full-time on minimum wage 50 weeks per year in Chicago and donating 10% of their pre-tax income can expect to save a life on average more than once per two years! But for some reason we don’t think of that as incredible and inclusive and instead think of it as a waste of potential. I do like trying to get people to try direct work career paths first, but I think we should still make earning to give feel special, and we should have more institutions dedicated to supporting it[3].
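As a rough sanity check of that claim, here’s the arithmetic sketched out. The specific figures are my assumptions, not official numbers: I assume a Chicago minimum wage of roughly $15/hour (approximately right circa 2022) and a GiveWell-style cost to save a life of roughly $5,000.

```python
# Back-of-the-envelope check of the earning-to-give claim above.
# Both key inputs are assumptions, not authoritative figures.
hourly_wage = 15.00      # assumed Chicago minimum wage, USD/hour
hours_per_week = 40      # full-time
weeks_per_year = 50
donation_rate = 0.10     # 10% of pre-tax income
cost_per_life = 5000.00  # assumed cost to save one life, USD

annual_income = hourly_wage * hours_per_week * weeks_per_year
annual_donation = annual_income * donation_rate
years_per_life = cost_per_life / annual_donation

print(f"Annual income:    ${annual_income:,.0f}")
print(f"Annual donation:  ${annual_donation:,.0f}")
print(f"Years to save one life: {years_per_life:.2f}")
```

Under these assumptions, the donor gives $3,000/year and saves a life roughly every 1.7 years, which is consistent with “more than once per two years.” Of course, the conclusion is sensitive to the assumed cost-per-life figure.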

11.) I think EA, and especially longtermism, has pretty homogenous demographics in a way that reduces our impact. In particular, the 2020 EA Survey showed effective altruism as being 70% male. Using a longtermism-neartermism scale, the top half of EA Survey respondents most interested in longtermism were 80% male. This longtermism-interest effect on gender persists in EA Survey data even after controlling for engagement. I think being able to more successfully engage non-male individuals in EA and longtermism is pretty important for our ability to have an impact as a movement, as this likely means we are missing out on a tremendous amount of talent as well as important different perspectives. Secondarily, we risk downward spirals in which talented women don’t want to join what they perceive to be a male-dominated movement and critics reject our movement by associating us with an uncharitable “techbro” image. This is difficult to talk about, and I’m not exactly sure what should or could be done to work on this issue, but I think it’s important to acknowledge it. This issue also applies not just to gender but also to EA being very skewed towards younger individuals, and likely to other areas as well.

12.) We need to think more about how our messaging will be interpreted in uncharitable low-nuance ways. Related to the above few points, it’s pretty easy for some messages to get out there in a way that is interpreted in a way we don’t expect. I think some formal message testing work and additional reflection could help with this. I know I personally haven’t thought enough about this when writing this post.

13.) We need more competition in EA. For one example, I think too many people were wary of doing anything related to careers because 80,000 Hours kinda “owned” the space. But there’s a lot they were not doing, and even some of what they are doing could potentially be done better by someone else. I don’t think the first org to try a space should get to own that space, and while coordination is important and neglectedness is a useful criterion, I also encourage more people to try to improve upon existing spaces, including by starting their own EA consultancies /​ think tanks.

14.) I don’t think we pay enough attention to some aspects of EA that could be at cross-purposes. For example, some aspects of work in global health and development may come at the cost of increased factory farming, harming animal welfare goals. Moreover, some animal welfare goals may drive up food prices, coming at the cost of economic development. Similarly, some work on reducing great power war and some work on promoting science may have trade-offs with racing scenarios in AI.

15.) I think longtermist EAs ignore animals too much. I certainly hope that AI alignment work can do a good job of producing AIs in line with human values, and I’m pretty worried that AIs will otherwise do something really bad. But I’m also worried that even if we nail the “aligned with human values” part, we will get a scenario that seems basically fine/​ok to the typical person but replicates the existing harms of human values on nonhumans. We need to ensure that AI alignment is animal-inclusive.

16.) I think EA ignores digital sentience too much. I don’t think any current AI systems are conscious, but I think it could happen, and happen a lot sooner than we think. Transformative AI is potentially less than 20 years away, and I think conscious AI could come even sooner, especially given that, e.g., bees are likely conscious and we already have enough compute to achieve bee-compute parity on some tasks. Also, there’s so much we don’t know about how consciousness works that I think it is important to tread with caution here. But if computers could easily make billions of digital minds, I don’t think we’re at all prepared for how that could unfold into a very near-term moral catastrophe.

17.) I think existential risk is too often conflated with extinction risk. I think “What We Owe the Future” does a good job of fighting this. My tentative best guess is the most likely scenario for existential risk is not human extinction, but some bad lock-in state.

18.) I think longtermists/​EAs ignore s-risks too much. Similarly, I think some lock-in states could actually be much worse than extinction, and this is not well accounted for in current EA prioritization. I don’t think you have to be a negative or negative-leaning utilitarian to think s-risks are worth dedicated thought and effort and we seem to be really underinvesting in this.

19.) I think longtermists /​ x-risk scenario thinking ignores too much the possibility of extraterrestrial intelligence, though I’m not sure what to do about it. For example, extraterrestrials could have the strength to easily wipe out humanity or at least severely curtail our growth. Likewise, they could already have produced an unfriendly AI (or a friendly AI, for that matter) and there’s not much we could do about it. On another view, if there are sufficiently friendly extraterrestrials, it’s possible this could reduce the urgency of humanity in particular reaching and expanding through the cosmos. I’m really not sure how to think about this, but I think it requires more analysis, as right now it basically does not seem to be on the longtermist radar at all, except insofar as extraterrestrials are expected to be unlikely.

20.) Similarly, I think longtermists /​ x-risk scenario thinking ignores the simulation hypothesis too much, though I’m also not sure what to do about it.

21.) I think in general we still under-invest in research. EAs get criticized for thinking too much and not doing enough, but my guess is we’d actually benefit from much more thinking, and a lot of my criticisms here are in that vein. Of course, I’d be pretty inclined to think this given my job at Rethink Priorities, and this could be self-serving if people taking this criticism to heart results in people donating more to my own organization. However, I think it remains the case that there are many more important topics than we could possibly research even with our 38 research FTE[4], and there remain a decent number of talented researchers we’d be inclined to hire if we had more funding and more management capacity. I think this is the case for the movement as a whole.


  1. ↩︎

    Not to say that all of these critiques are original to me, of course.

  2. ↩︎

    At least it was a big problem when I was last in a local group, which was in 2019. And it may vary by group.

  3. ↩︎
  4. ↩︎

    at Rethink Priorities, as of end of year 2022, not counting contract work we pay for and not counting our fiscal sponsors that do research like Epoch