The case against “EA cause areas”
Everyone reasonably familiar with EA knows that AI safety, pandemic preparedness, animal welfare and global poverty are considered EA cause areas, whereas feminism, LGBT rights, wildlife conservation and dental hygiene aren’t.
That a handful of very specific cause areas are held in high regard by the EA community is the result of long deliberation by many thoughtful people who have reasoned that work in these areas could be highly effective. This collective cause prioritization is often the outcome of weighing and comparing the scale, tractability and neglectedness of different causes. Neglectedness in particular seems to play a crucial role in swaying the attention of many EAs toward concluding, for example, that working on pandemic preparedness is likely more promising than working on climate change, due to the different levels of attention these causes currently receive worldwide. Some cause areas, such as AI safety and global poverty, have gained so much attention within EA (both in absolute terms and, even more so, compared to the general public) that the EA movement has become somewhat identified with them.
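The weighing described above is often summarized as a rough multiplicative heuristic: marginal impact per extra unit of resources scales with scale × tractability × neglectedness. A minimal toy sketch of that heuristic, where the function name and every rating are made up purely for illustration (this is not how any organization actually scores causes):

```python
# Toy sketch of the scale / tractability / neglectedness heuristic.
# All ratings below are invented for illustration only.

def itn_score(scale, tractability, neglectedness):
    """Crude multiplicative ITN estimate: marginal impact per extra
    unit of resources, up to an arbitrary constant."""
    return scale * tractability * neglectedness

# Hypothetical 0-10 ratings: two causes with similar scale and
# tractability, differing only in neglectedness.
causes = {
    "pandemic preparedness": itn_score(scale=9, tractability=4, neglectedness=8),
    "climate change":        itn_score(scale=9, tractability=4, neglectedness=2),
}

best = max(causes, key=causes.get)
print(best)  # prints "pandemic preparedness"
```

With these made-up numbers, neglectedness alone decides the ranking, which is exactly the dynamic the paragraph above describes.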
Prioritizing and comparing cause areas is at the very core of EA. Nevertheless, I would like to argue that while cause prioritization is extremely important and should continue, having the EA movement identified with specific cause areas has negative consequences. I would like to highlight the negative aspects of having such a large fraction of the attention and resources of EA going into such a small number of causes, and present the case for more cause diversification and pluralism within EA.
The identification of EA with a small set of cause areas has many manifestations, but the one I'm most worried about is the feeling shared by many in the community that if they work on a cause that is not particularly prioritized by the movement (like feminism), then what they do "is not really EA", even if they use evidence and reason to find the most impactful avenues for tackling the problem they are trying to solve. I want to stress that I'm not attempting to criticize any specific organizations or individuals, nor the method or philosophy of the movement. Rather, I'd like to address what I see as a general (and largely unintentional) feature of our community.
In this post I try to focus on the downsides of our community being too narrowly focused. I recognize there are also benefits to this approach and counterarguments to my claims, and look forward to them being introduced to the discussion.
I will elaborate on the main arguments I see for being more supportive of most causes:
Even if a cause is not neglected, one might still have a comparative advantage working on it
Many cause areas are pushed off EA's radar by the criterion of neglectedness. For example, there is no doubt that cancer research has enormous potential in terms of the scale of the problem, and it also seems quite promising with respect to tractability. But because "curing cancer" is such a mainstream cause, with tons of talent and funding already pouring into it, EAs tend not to see the point of dedicating yet another individual's career to the problem. It is commonly believed that an EA-aligned individual with a background in biology would do more good working on clean meat, pandemic preparedness or consciousness research, all far more neglected than cancer research.
However, this calculus can be incomplete, as it doesn't take into account the personal circumstances of the particular biologist debating her career. What if she's a very promising cancer researcher (thanks to her existing track record, reputation or professional inclinations), but it's not at all clear how she'd do in the space of clean meat? What if she feels an intense inner drive to work on cancer (because her mother died of melanoma)? These considerations should factor in when she estimates her expected career-long impact.
Such considerations are acknowledged in EA, and personal fit plays a central role in EA career advice. But, while personal circumstances are explicitly discussed in the context of choosing a career, they are not part of the general considerations raising specific causes to prominence within EA (as personal conditions cannot be factored into general-purpose research). As a result, while the cancer researcher knows in the abstract that her personal fit should be taken into account, she still feels like she’s doing something un-EA-like by pursuing this career.
Furthermore, pandemic preparedness might be very neglected considering the resources allocated to the cause by humanity as a whole, but if we only consider the resources allocated by the EA movement, then cancer research is very likely more neglected than pandemic preparedness within EA. Scarcity of EAs working in an otherwise crowded area could matter if we think that EAs have the capacity to contribute in unique ways. For example, impact evaluation and impact-oriented decision making, which are standard tools in the EA toolkit, could be highly valuable for most causes I can think of. I suspect that very few LGBT activists think in those terms, and even fewer have the relevant tools. The competence of EAs in doing specific types of work (work that involves thinking explicitly about impact) is a generic comparative advantage that most of us have. I believe that in many cases this comparative advantage could have a massive effect, big enough to counteract even sizable diminishing returns in otherwise crowded areas.
From the perspective of the entire EA movement, it might be a better strategy to allocate the few individuals who possess the rare "EA mindset" across a diverse set of causes, rather than concentrating everyone in the same 3-4 cause areas. Work done by EAs (who explicitly think in terms of impact) could have a multiplying effect on the work and resources already allocated to a cause. Pioneer EAs who choose such "EA-neglected" causes can make a significant difference precisely because an EA-like perspective is rare and needed in those areas, even in causes well-established outside of EA (like human rights or nature conservation). For example, they could carry out valuable intra-cause prioritization (as opposed to inter-cause prioritization).
Rather than considering how neglected a cause is in general, I think it is often more helpful to ask how much comparative advantage you might have in it compared to everyone else currently working in the area. If very few people work on it (in other words, if it's neglected), then your comparative advantage is simply your willingness to work on it. In this particular case, the criteria of neglectedness and comparative advantage align. But neglectedness is only a special case of comparative advantage, which in general can include many other unique advantages (other criticisms of the neglectedness criterion have been expressed here and here).
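The point that neglectedness is just one input to comparative advantage can be made concrete with a toy expected-value calculation. All names and numbers here are hypothetical, chosen only to illustrate how a strong personal-fit multiplier can outweigh a crowded field's lower marginal value:

```python
# Toy illustration: a personal-fit multiplier can outweigh diminishing
# returns in a crowded field. All numbers are invented.

def expected_impact(marginal_value, personal_fit):
    """Expected career impact = the field's marginal value of one more
    generic worker, scaled by this person's comparative advantage there."""
    return marginal_value * personal_fit

# Hypothetical: cancer research is crowded (low marginal value per
# additional generic researcher) but this biologist has a strong track
# record there; clean meat is neglected but her fit is uncertain.
cancer     = expected_impact(marginal_value=1.0, personal_fit=5.0)
clean_meat = expected_impact(marginal_value=3.0, personal_fit=1.0)

print(cancer > clean_meat)  # prints True: fit flips the naive ranking
```

On these invented numbers, the fit-adjusted estimate reverses the ranking that neglectedness alone would suggest, which is the whole argument of this section in miniature.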
Generally, I feel there is a lot of focus in EA on cause areas, but perhaps not enough emphasis on specific opportunities to improve the world. Even if a cause area is truly not very impactful in general, in the sense that most work done there is not very high-impact, it doesn’t necessarily mean that every single path pursued in this area is destined to be low-impact. Moreover, when considering the space of unique opportunities available to a specific individual (as a result of their specific background, or just sheer luck), it’s possible they would have exceptionally good options not available to other EAs (such as a highly influential role). In that case, having encountered a unique opportunity to do good can be considered another special case of comparative advantage.
Spreading the attention of EA across more causes may be a better exploration strategy for finding the best opportunities
It’s widely acknowledged in EA that we don’t want to miss the very best opportunities to do the most good. To tackle the risk that we might, our general countermeasure is to think long and hard about potential missed opportunities and candidates for cause X, trying to study and map ever greater territories. In other words, we generally take a top-down approach: we first try to make the case for working on cause X, and only then, if sufficient evidence and reason point to it having high potential, do we actually start to work in the area and explore it in greater detail. But is this the most efficient strategy for exploring the space of opportunities? I suspect that in many cases you can’t really see the full utility of acting in some space until you actually try to do it. Moreover, exploring the space of opportunities to improve the world through the lens of cause areas is a rather low-resolution mapping strategy, which may lead us to miss some of the highest (but narrow) peaks on the impact landscape. Operating at the resolution of opportunities rather than cause areas could therefore be useful at the community level as well.
If EAs felt more comfortable pursuing diverse causes and reporting back to the community about their conclusions and insights from working in those spaces, then as a community we might do better at mapping our options. By ignoring most of the cause areas out there, we might be under-exploring the space of possible ideas for improving the world. If more ideas received attention from EAs, we might find that some are more promising than they appeared at first sight. Encouraging cause diversification, where more EAs feel comfortable working on causes they feel personally attracted to, might prove a more effective exploration strategy than collective deliberation. This can be seen as a sort of hits-based approach: if you operate in an area not recognized as generally impactful, the probability of having an incredibly impactful career is lower, but if you identify a huge opportunity the EA community has missed, you could make an enormous impact.
Being supportive of most causes can help the growth and influence of the movement
When people first encounter EA, they often get the impression that becoming seriously involved with the movement would require them to make radical life changes and give up on what they currently work on and care about. As a result, they may prefer to carry on with their lives without EA. I feel we might be losing a lot of promising members and followers as a result of being identified with such a narrow set of causes (an intuition also supported by some empirical evidence). I know many talented and capable individuals who could do high-impact work, but feel like they don’t really fit in any of the classic EA causes, due to lack of relevant background or emotional connection. Many people also can’t find career opportunities in those areas (e.g. due to the low number of job openings in EA organizations and their limited geographic distribution). In the end, most people can’t be AI researchers or start their own organization.
Most EAs I know personally are very open-minded and some of the least judgemental people I know. That’s one of the reasons I really enjoy hanging out with EAs. Yet, strangely, it seems that as a collective we somehow often make each other feel judged. In my experience, a biologist choosing to spend her career on cancer research would often feel inferior to other EAs choosing a more EA-stereotypic career such as pandemic preparedness or clean meat. When introducing herself to other EAs, she may start with an apology like “What I’m working on isn’t really related to EA”.
Scott Alexander described (humorously, but probably with a grain of truth) his experience from EAG 2017:
I had been avoiding the 80,000 Hours people out of embarrassment after their career analyses discovered that being a doctor was low-impact, but by bad luck I ended up sharing a ride home with one of them. I sheepishly introduced myself as a doctor, and he said “Oh, so am I!” I felt relieved until he added that he had stopped practicing medicine after he learned how low-impact it was, and gone to work for 80,000 Hours instead.
What if we tried more actively to make people feel that whatever they want to work on is fine, and simply supported and helped them do it better through evidence and reason? I believe this could really boost the growth and influence of the movement, and attract people with more diverse backgrounds and skills (diversity certainly being a problem in EA). Moreover, after engaging with EA for a while, some might eventually come to terms with the harder-to-digest aspects of EA. Learning how to do the things one cares about more effectively could serve as a “gateway drug” to eventually changing cause area after all. By focusing on a very narrow set of causes, we make ourselves invisible to most of the world.
Helping people become more effective in what they already do might be more impactful than trying to convince them to change cause area
What is the purpose of cause prioritization? The obvious answer is that by knowing that cause A is more effective than cause B, we could choose A over B. But are we always able to make this choice? What if it’s up to someone else to decide? What if that someone else is not receptive to making big changes?
If we encounter a nonprofit promoting dental hygiene in US schools, chances are we won’t be able to make it pivot into an AI think tank, or even just into operating in developing countries. By the time it encounters us, the nonprofit may already be too constrained by pre-existing commitments (e.g. to its funders), by the preferences of its employees and volunteers and their emotional connection to the cause area, and by inertia. On the other hand, the nonprofit’s team might well be open to starting impact evaluation and following evidence-based decision making.
I’m sure there are far more organizations and individuals open to advice on how to be more effective in what they already do than there are folks open to changing cause areas. As a result, even if engaging with someone in the latter group is more impactful per interaction, the overall impact of engaging with the former group might still be dramatically greater. Of course, more conditions have to be met for that to be true, and I’m not claiming it is necessarily so, only raising the possibility.
The kind of cause prioritization that compares the effectiveness of distributing bed nets in malaria-stricken countries vs. producing US-school-targeted videos portraying the horrors of tooth decay is only relevant in certain contexts. It used to be very relevant in the early days of EA, when the primary goal of the movement was to find proven effective charities to donate to, and the audience was a small group of people highly committed to impartiality and doing the most good. If we are in the business of giving career advice to a wider public, these comparisons are not always as relevant.
The attitude of making things better without attempting to replace them with something else entirely could also be relevant to individual career choices, for example when presented with an opportunity to influence a program already funded by government or philanthropic money. While such programs tend to have a defined scope (meaning they are unlikely to turn into top GiveWell charities), there might still be a great deal of flexibility in how they operate. If led by someone who is truly impact-oriented, the counterfactual impact could be quite substantial.
More independent thinking could be healthy for EA
I think we have a lot of trust within the EA community, and that’s generally a good thing. If a prominent EA organization or individual puts significant effort into investigating the pros and cons of operating in certain areas and then makes an honest attempt to thoroughly and transparently present their conclusions, we tend to take them seriously. However, our mutual trust might have downsides as well. I think that more critical thinking within EA, subjecting each other’s work to more scrutiny and doubt, could actually be healthy for our movement.
For example, for a long period of time many EAs downplayed climate change as a cause area and believed it wasn’t a very effective cause to work on (among other reasons, because it isn’t extremely neglected outside of EA). Only quite recently has this view started to get some pushback. Mild groupthink could have played a role in this dynamic: prominent EA figures underrated climate change early on, and other EAs just went along without thinking about it too much. Maybe our community is more susceptible to groupthink than we would like to think. I sometimes get the impression that many EAs reiterate what other EAs are saying, just because it’s been said in EA so it’s probably true (I often catch myself in this state of mind, saying things with more confidence than I probably should, only because I trust my fellow EAs to have gotten it right). Likewise, just as we shouldn’t automatically accept everything said within EA without question, it would be a bad idea to overlook ideas and beliefs held by people outside of EA just because they are not presented to us in our preferred style.
Cause prioritization is notoriously difficult, because there are so many crucial considerations and higher-order effects to contemplate. For example, there is still an ongoing debate within EA over how we should think about systemic change (which is relevant to many cause areas). I think there’s a non-negligible chance that we have gotten a wide range of causes wrong. Similarly, we might be putting too much emphasis on certain cause areas just because they are popular and trendy in EA. By spreading our collective attention across more areas, we reduce the risk of groupthink and of getting things horribly wrong.
This reminds me of the “replication crisis” that has recently come to the attention of many scientific fields. Psychologists used to be too eager to believe claims made by their peers, even those backed by single studies, until they realized that a shockingly large number of these studies just didn’t replicate. What if we are overlooking the possibility of a similar replication crisis in EA? I think it would be beneficial to dedicate more resources to revisiting long-held EA beliefs.
Cause diversification is important for preserving the essence of EA
Cause neutrality is a central value in EA. If EA collapsed into 3-4 specific causes, even if circumstances justified that, it would later be difficult for EA to recover and remember that it was once a general, cause-agnostic movement. I can imagine a scenario where EA becomes so identified with specific causes that many EAs feel pressured to move into these areas, while those less enthusiastic about them grow frustrated and eventually leave the movement. At the same time, many non-EAs may start identifying themselves with EA just because they work on causes identified with the movement, even if they are not really impact-oriented. At that point, “being part of EA” may become synonymous with working on a specific cause, whether or not one really cares about impact.
What should we do about it?
You may worry that I’m advocating turning EA into something it is not: a cheery, bouncy, feel-good, everyone-is-welcome society like so many other communities out there, thereby taking the edge off EA and irreversibly converting it into a distasteful “EA lite” for the masses. That’s really not what I want. I think we should continue doing cause prioritization (maybe even more of it), and we shouldn’t be afraid to say out loud that we think cause A is generally more impactful than cause B. I am, however, worried about the movement becoming identified with a very small set of specific causes. I would like members of EA to feel more legitimacy in pursuing mainstream, non-EA-stereotypic causes, and to feel comfortable talking openly about it with the community without feeling like second-class citizens. I would like to see more EAs using evidence, reason and the rest of the powerful EA toolkit to improve humanity’s impact in medicine, science, education, social justice and pretty much every mainstream cause area out there.
To bring the discussion down to earth, I list a few concrete suggestions for things we can do as a community to address some of the concerns raised in this post, without compromising our movement’s core values and integrity (note that this is far from an exhaustive list, and I’d be happy to see more suggestions):
I think we need to better communicate that working on a cause area considered promising by EA is not equivalent (and is neither necessary nor sufficient) to doing high-impact work or being a dedicated EA.
I think it should be clear in most EA discussions, and especially in public outreach, that broad statements about categories (in particular about cause areas) are only first-order estimates and never capture the full complexity of a decision like choosing a career.
When interacting with others (especially non-EAs or new EAs), I think we should be very inclusive and support pretty much any cause people are interested in (as long as it isn’t actively causing harm). While it’s fine to nudge people toward more impactful choices, I think we should avoid applying too much pressure and remain intellectually modest.
In careers, I think it’s common that people’s unique opportunities, comparative advantage, or ability to explore new territories will lead them outside of the main EA focus areas, and this should be encouraged (together with a meaningful process of prioritization).
When donating, I think there is room for narrow opportunities in less-promising-on-average cause areas (though unless they are aware of such an opportunity, EAs are likely to achieve more impact by donating to established causes).
I think that identifying and investigating impactful cause areas should continue to be as concrete and unrelenting as it is now.
I think it would be useful to have more cause prioritization done by actively operating and gaining hands-on experience in new spaces (bottom-up approach) rather than by using first principles to think long and hard about missed opportunities (top-down approach).
I think EAs should be critical but charitable towards information and views presented to them, and might benefit from being more charitable to non-EA claims and more critical of EA claims.
Additionally, I think we should have a discussion on the following questions (again, this is not an exhaustive list):
How strongly should we nudge others into working on and donating to causes we consider more promising? On the one hand, we don’t want to alienate people by pushing them too aggressively down a path they don’t really want to take (or are not yet ready for), and we should be (at least somewhat) intellectually modest. On the other hand, we don’t want EA to decay into an empty feel-good movement and lose our integrity.
How can we foster an atmosphere in which people interacting with EA (especially new members) feel less judged, without compromising our values?
Do you have more arguments against the prominence that a small number of cause areas take within EA? Do you have counterarguments? Do you have concrete suggestions or more open questions? I’d really like to hear your thoughts!