The case against “EA cause areas”
Everyone reasonably familiar with EA knows that AI safety, pandemic preparedness, animal welfare and global poverty are considered EA cause areas, whereas feminism, LGBT rights, wildlife conservation and dental hygiene aren’t.
The fact that a few very specific cause areas are held in high regard by the EA community is the result of long deliberation by many thoughtful people who have reasoned that work in these areas could be highly effective. This collective cause prioritization is often the outcome of weighing and comparing the scale, tractability and neglectedness of different causes. Neglectedness in particular seems to play a crucial role in directing the attention of many EAs, leading them to conclude, for example, that working on pandemic preparedness is likely more promising than working on climate change, given the different levels of attention these causes currently receive worldwide. Some cause areas, such as AI safety and global poverty, have gained so much attention within EA (both in absolute terms and, even more so, compared to the general public) that the EA movement has become somewhat identified with them.
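To make the neglectedness logic concrete, here is a minimal sketch of the kind of marginal-impact comparison the scale/tractability/neglectedness framework suggests. The `marginal_impact` helper, the functional form and all numbers are illustrative assumptions of mine, not actual EA estimates:

```python
# Toy model of the scale / tractability / neglectedness framework.
# All numbers are made up for illustration; they are not real estimates.

def marginal_impact(scale, tractability, current_resources):
    """Rough good done per extra dollar:
    scale         -- how much good solving the whole problem would do
    tractability  -- fraction of the problem solved per doubling of resources
    neglectedness -- enters as 1 / current_resources: an extra dollar buys a
                     larger proportional increase in effort where few
                     resources are already committed
    """
    return scale * tractability / current_resources

# Two causes assumed identical except for how crowded they already are:
climate = marginal_impact(scale=100, tractability=0.01, current_resources=500e9)
pandemics = marginal_impact(scale=100, tractability=0.01, current_resources=10e9)

print(pandemics / climate)  # -> 50.0: the less crowded cause wins on the margin
```

Under these toy assumptions, two causes with equal scale and tractability differ only in crowdedness, so the less crowded one wins on the margin; this is the style of reasoning the rest of this post pushes back on.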
Prioritizing and comparing cause areas is at the very core of EA. Nevertheless, I would like to argue that while cause prioritization is extremely important and should continue, having the EA movement identified with specific cause areas has negative consequences. I would like to highlight the negative aspects of having such a large fraction of the attention and resources of EA going into such a small number of causes, and present the case for more cause diversification and pluralism within EA.
The identification of EA with a small set of cause areas has many manifestations, but the one I’m most worried about is the feeling shared by many in the community that if they work on a cause that is not particularly prioritized by the movement (like feminism), then what they do “is not really EA”, even if they use evidence and reason to find the most impactful avenues for tackling the problem they are trying to solve. I want to stress that I’m not attempting to criticize any specific organizations or individuals, nor the method or philosophy of the movement. Rather, I’d like to address what I see as a general (and largely unintentional) feature of our community.
In this post I try to focus on the downsides of our community being too narrowly focused. I recognize there are also benefits to this approach and counterarguments to my claims, and I look forward to seeing them raised in the discussion.
I will elaborate on the main arguments I see for being more supportive of most causes:
Even if a cause is not neglected, one might still have a comparative advantage working on it.
Being supportive of most causes can help the growth and influence of the movement.
Cause divergence is important to preserving the essence of EA.
Even if a cause is not neglected, one might still have a comparative advantage working on it
Many cause areas are pushed off EA’s radar due to the criterion of neglectedness. For example, there is no doubt that cancer research has enormous potential in terms of the scale of the problem, and it also seems quite promising with respect to its tractability. But because “curing cancer” is such a mainstream cause, with tons of talent and funding already pouring into it, EAs tend to not see the point of dedicating yet another individual’s career to solving the problem. It is commonly believed that an EA-aligned individual with a background in biology would do more good working on clean meat, pandemic preparedness or consciousness research, all far more neglected than cancer.
However, this calculus can be somewhat incomplete, as it doesn’t take into account the personal circumstances of the particular biologist weighing her career. What if she’s a very promising cancer researcher (as a result of her existing track record, reputation or professional inclinations), but it’s not entirely clear how she’d do in the space of clean meat? What if she feels an intense inner drive to work on cancer (because her mother died of melanoma)? These considerations should factor in when she tries to estimate her expected career-long impact.
Such considerations are acknowledged in EA, and personal fit plays a central role in EA career advice. But while personal circumstances are explicitly discussed in the context of choosing a career, they are not part of the general considerations that raise specific causes to prominence within EA (as personal circumstances cannot be factored into general-purpose research). As a result, while the cancer researcher knows in the abstract that her personal fit should be taken into account, she still feels like she’s doing something un-EA-like by pursuing this career.
Furthermore, pandemic preparedness might be very neglected when considering the resources allocated to the cause by humanity as a whole, but if we only consider the resources allocated by the EA movement, then it is very likely that cancer research is actually more neglected than pandemic preparedness within EA. Scarcity of EAs working in an otherwise crowded area could matter if we think that EAs have the capacity to contribute in unique ways. For example, impact evaluation and impact-oriented decision making, which are standard tools in the EA toolkit, could be highly valuable in most causes I can think of. I suspect that very few LGBT activists think in those terms, and even fewer have the relevant tools. The competence of EAs at specific types of work (work that involves thinking explicitly about impact) is a generic comparative advantage that most of us share. I believe that in many cases this comparative advantage could have a massive effect, big enough to counteract even sizable diminishing returns in otherwise crowded areas.
From the perspective of the entire EA movement, it might be a better strategy to allocate the few individuals who possess the rare “EA mindset” across a diverse set of causes, rather than concentrate everyone in the same 3-4 cause areas. Work done by EAs (who explicitly think in terms of impact) could have a multiplying effect on the work and resources already allocated to causes. Pioneering EAs who choose such “EA-neglected” causes can make a significant difference precisely because an EA-like perspective is rare and needed in those areas, even in causes that are well established outside of EA (like human rights or nature conservation). For example, they could carry out valuable intra-cause prioritization (as opposed to inter-cause prioritization).
Rather than considering how neglected a cause is in general, I think it is often more helpful to ask yourself how much comparative advantage you might have in it compared to everyone else currently working in this area. If very few people work on it (in other words, if it’s neglected) then your comparative advantage is simply your willingness to work on it. In this particular case the criteria of neglectedness and comparative advantage align. But neglectedness is only a special case of comparative advantage, which in general could include many other unique advantages (other criticisms of the neglectedness criterion have been expressed here and here).
Generally, I feel there is a lot of focus in EA on cause areas, but perhaps not enough emphasis on specific opportunities to improve the world. Even if a cause area is truly not very impactful in general, in the sense that most work done there is not very high-impact, it doesn’t necessarily mean that every single path pursued in this area is destined to be low-impact. Moreover, when considering the space of unique opportunities available to a specific individual (as a result of their specific background, or just sheer luck), it’s possible they would have exceptionally good options not available to other EAs (such as a highly influential role). In that case, having encountered a unique opportunity to do good can be considered another special case of comparative advantage.
Spreading the attention of EA across more causes may be a better exploration strategy for finding the best opportunities
It’s widely acknowledged in EA that we don’t want to miss the very best opportunities to do the most good. To tackle the risk that we might, our general countermeasure is to think long and hard about potential missed opportunities and candidates for cause X, trying to study and map ever greater territories. In other words, we generally take a top-down approach: we first try to make the case for working on cause X, and only then, if sufficient evidence and reason point to it having high potential, do we actually start to work in the area and explore it in greater detail. But is this the most efficient strategy for exploring the space of opportunities? I suspect that in many cases you can’t really see the full utility of acting in some space until you actually try to do it. Moreover, exploring the space of opportunities to improve the world through the lens of cause areas is a rather low-resolution mapping strategy, which may lead us to miss some of the highest (but narrow) peaks on the impact landscape. Operating at the resolution of opportunities rather than cause areas could therefore be useful at the community level as well.
If EAs felt more comfortable pursuing diverse causes and reporting back to the community about their conclusions and insights from working in those spaces, then as a community we might do a better job of mapping our options. By ignoring most of the cause areas out there, we might be under-exploring the space of possible ideas to improve the world. By allowing more ideas to receive attention from EAs, we might find that some are more promising than they first appeared. Encouraging cause diversification, where more EAs feel comfortable working on causes they feel personally drawn to, might prove a more effective exploration strategy than collective deliberation. This can be seen as a sort of hits-based approach: if you operate in an area not recognized as generally impactful, the probability of having an incredibly impactful career is lower, but if you identify a huge opportunity the EA community has missed, you could make an enormous impact.
Being supportive of most causes can help the growth and influence of the movement
When people first encounter EA, they often get the impression that becoming seriously involved with the movement would require them to make radical life changes and give up on what they currently work on and care about. As a result, they may prefer to carry on with their lives without EA. I feel we might be losing a lot of promising members and followers as a result of being identified with such a narrow set of causes (an intuition also supported by some empirical evidence). I know many talented and capable individuals who could do high-impact work, but feel like they don’t really fit in any of the classic EA causes, due to lack of relevant background or emotional connection. Many people also can’t find career opportunities in those areas (e.g. due to the low number of job openings in EA organizations and their limited geographic distribution). In the end, most people can’t be AI researchers or start their own organization.
Most EAs I know personally are very open-minded and some of the least judgemental people I know. That’s one of the reasons I really enjoy hanging out with EAs. Yet, strangely, it seems that as a collective group we somehow often make each other feel judged. In my experience, a biologist choosing to spend her career on cancer research will often feel inferior to other EAs who choose a more EA-stereotypic career such as pandemic preparedness or clean meat. When introducing herself in front of other EAs, she may start with an apology like “What I’m working on isn’t really related to EA”.
Scott Alexander described (humorously, but probably with a grain of truth) his experience at EAG 2017:
I had been avoiding the 80,000 Hours people out of embarrassment after their career analyses discovered that being a doctor was low-impact, but by bad luck I ended up sharing a ride home with one of them. I sheepishly introduced myself as a doctor, and he said “Oh, so am I!” I felt relieved until he added that he had stopped practicing medicine after he learned how low-impact it was, and gone to work for 80,000 Hours instead.
What if we tried more actively to let people feel that whatever they want to work on is really fine, and simply tried to support and help them do it better through evidence and reason? I believe this could really boost the growth and influence of the movement, and attract people with more diverse backgrounds and skills (the lack of which is certainly a problem in EA). Moreover, maybe after engaging with EA for a while, some would eventually come to terms with the harder-to-digest aspects of EA. Learning how to do the things one cares about more effectively could serve as a “gateway drug” to eventually changing cause area after all. By focusing on a very narrow set of causes, we make ourselves invisible to most of the world.
Helping people become more effective in what they already do might be more impactful than trying to convince them to change cause area
What is the purpose of cause prioritization? The obvious answer is that by knowing that cause A is more effective than cause B, we could choose A over B. But are we always able to make this choice? What if it’s up to someone else to decide? What if that someone else is not receptive to making big changes?
If we encounter a nonprofit promoting dental hygiene in US schools, chances are we won’t be able to make it pivot into an AI think tank, or even just into operating in developing countries. By the time it encounters us, the nonprofit may already be too constrained by pre-existing commitments (e.g. to its funders), by the preferences of its employees and volunteers and their emotional connection to the cause area, and by sheer inertia. On the other hand, the nonprofit’s team might be open to starting impact evaluation and following evidence-based decision making.
I’m sure there are far more organizations and individuals who are open to advice on how to be more effective in what they already do than there are folks open to changing cause areas. As a result, even if each engagement with someone in the latter group is more impactful, the aggregate impact of engaging with the former group might still be dramatically greater. Of course, more conditions have to be met for that to be true, and I’m not trying to say it is necessarily so, only to raise the possibility.
The kind of cause prioritization that compares the effectiveness of distributing bed nets in malaria-stricken countries vs. producing US-school-targeted videos portraying the horrors of tooth decay is only relevant in certain contexts. It used to be very relevant in the early days of EA, when the primary goal of the movement was to find proven effective charities to donate to, and the audience was a small group of people highly committed to impartiality and doing the most good. If we are in the business of giving career advice to a wider public, these comparisons are not always as relevant.
The attitude of making things better without attempting to replace them by something else entirely could also be relevant to individual career choices, for example if presented with an opportunity to influence a program already funded by government or philanthropic money. While such programs tend to have a defined scope (meaning they are unlikely to turn into top GiveWell charities), there might still be a great deal of flexibility in how they operate. If led by someone who is truly impact-oriented, the counterfactual impact could be quite substantial.
More independent thinking could be healthy for EA
I think we have a lot of trust within the EA community, and that’s generally a good thing. If a prominent EA organization or individual puts significant effort into investigating the pros and cons of operating in certain areas, and then makes an honest attempt to thoroughly and transparently present their conclusions, we tend to take them seriously. However, our mutual trust might have downsides as well. I think that having more critical thinking within EA, and subjecting each other’s work to more scrutiny and doubt, could actually be healthy for our movement.
For example, for a long time many EAs downplayed climate change as a cause area and believed it wasn’t a very effective cause to work on (among other reasons, because it is not especially neglected outside of EA). Only quite recently has this view started to get some pushback. Mild groupthink could have played a role in this dynamic: prominent EA figures underappreciated climate change early on, and other EAs just went along without thinking about it too much. Maybe our community is more susceptible to groupthink than we would like to think. I sometimes get the impression that many EAs reiterate what other EAs are saying just because it’s been said in EA, so it’s probably true (I often catch myself in this state of mind, saying things with more confidence than I probably should, only because I trust my fellow EAs to have gotten it right). Likewise, just as we shouldn’t automatically accept everything said within EA without question, it would also be a bad idea to overlook ideas and beliefs held by people outside of EA just because they are not presented to us in our preferred style.
Cause prioritization is notoriously difficult, because there are so many crucial considerations and higher-order effects to contemplate. For example, there is still an ongoing debate within EA on how we should think about systemic change (which is relevant to many cause areas). I think there’s a non-negligible chance that we got a wide range of causes wrong. Similarly, we might be putting too much emphasis on certain cause areas just because they are popular and trendy in EA. By spreading our collective attention across more areas, we minimize the risk of groupthink and of getting things horribly wrong.
This reminds me of the “replication crisis” that has recently come to the attention of many scientific fields. Psychologists used to be too eager to believe claims made by their peers, even those backed by single studies, until they realized that a shockingly large number of these studies just didn’t replicate. What if we are overlooking the possibility of a similar replication crisis in EA? I think it would be beneficial to dedicate more resources to revisiting long-held EA beliefs.
Cause divergence is important to preserving the essence of EA
Cause neutrality is a central value in EA. If EA collapsed into 3-4 specific causes, even if circumstances justified that, it would later be difficult for EA to recover and remember that it was once a general, cause-agnostic movement. I can imagine a scenario in which EA becomes so identified with specific causes that many EAs feel pressured to move into those areas, while those less enthusiastic about them grow frustrated and eventually leave the movement. At the same time, many non-EAs may start identifying themselves with EA just because they work on causes so identified with the movement, even if they are not really impact-oriented. At that point, “being part of EA” may become synonymous with working on a specific cause, whether or not one really cares about impact.
What should we do about it?
You may worry that I’m advocating for turning EA into something it is not: a cheery, bouncy, feel-good, everyone-is-welcome society like so many other communities out there, thereby taking the edge off EA and irreversibly converting it into a distasteful “EA lite” for the masses. That’s really not what I want. I think we should continue doing cause prioritization (maybe even more of it), and we shouldn’t be afraid to say out loud that we think cause A is generally more impactful than cause B. I am, however, worried about the movement becoming identified with a very small set of specific causes. I would like members of EA to feel more legitimacy in pursuing mainstream, non-EA-stereotypic causes, and to feel comfortable talking about it openly with the community without feeling like second-class citizens. I would like to see more EAs using evidence, reason and the rest of the powerful EA toolkit to improve humanity’s impact in medicine, science, education, social justice and pretty much every mainstream cause area out there.
To bring the discussion down to earth, I list a few concrete suggestions for things we can do as a community to address some of the concerns mentioned in this post, without compromising our movement’s core values and integrity (note that this is far from an exhaustive list, and I’d be happy to see more suggestions):
I think we need to better communicate that working on a cause area considered promising by EA is not equivalent to doing high-impact work or being a dedicated EA (it is neither necessary nor sufficient for either).
I think it should be clear in most EA discussions, and especially in public outreach, that broad statements about categories (in particular about cause areas) are only first-order estimates and never capture the full complexity of a decision like choosing a career.
When interacting with others (especially non-EAs or new EAs), I think we should be very inclusive and support pretty much any cause people are interested in (as long as it’s not actively causing harm). While it’s OK to nudge people toward more impactful choices, I think we should avoid applying too much pressure and be very intellectually modest.
In careers, I think it’s common that people’s unique opportunities, comparative advantage, or ability to explore new territories will lead them outside of the main EA focus areas, and this should be encouraged (together with a meaningful process of prioritization).
When donating, I think there is room for narrow opportunities in less-promising-on-average cause areas (but unless they are aware of such an opportunity, EAs are likely to achieve more impact by donating to established causes).
I think that identifying and investigating impactful cause areas should continue to be as concrete and unrelenting as it is now.
I think it would be useful to have more cause prioritization done by actively operating and gaining hands-on experience in new spaces (bottom-up approach) rather than by using first principles to think long and hard about missed opportunities (top-down approach).
I think EAs should be critical but charitable towards information and views presented to them, and might benefit from being more charitable to non-EA claims and more critical of EA claims.
Additionally, I think we should have a discussion on the following questions (again, this is not an exhaustive list):
How strongly should we nudge others into working on and donating to causes we consider more promising? On the one hand, we don’t want to alienate people by trying to push them too aggressively down a path they don’t really want (or are not yet ready for), and we should be (at least somewhat) intellectually modest. On the other hand, we don’t want EA to decay into an empty feel-good movement and lose our integrity.
How can we foster an atmosphere in which people interacting with EA (especially new members) feel less judged, without compromising our values?
Do you have more arguments against the prominence that a small number of cause areas take within EA? Do you have counterarguments? Do you have concrete suggestions or more open questions? I’d really like to hear your thoughts!
Acknowledgements
I am grateful to Edo Arad, Sella Nevo and Shay Ben Moshe for incredibly valuable discussion and feedback on this post.
Agree with the spirit—there is too much herding, and I would love for Schubert’s distinctions to be core concepts. However, I think the problem you describe appears in the gap between the core orgs and the community, and might be pretty hard to fix as a result.
What material implies that EA is only about ~4 things?
the Funds
semi-official intro talks and Fellowship syllabi
the landing page has 3 main causes and mentions 6 more
the revealed preferences of what people say they’re working on, the distribution of object-level post tags
What emphasises cause divergence and personal fit?
80k have their top 7 of course, but the full list of recommended ones has 23
Personal fit is the second thing they raise, after importance
New causes, independent thinking, outreach, cause X, and ‘question > ideology’ is a major theme at every EAG and (by eye) in about a fifth of the top-voted Forum posts.
So maybe limited room for improvements to communication? Since it’s already pretty clear.
Intro material has to mention some examples, and only a couple in any depth. How should we pick examples? Impact has to come first. Could be better to not always use the same 4 examples, but instead pick the top 3 by your own lights and then draw randomly from the top 20.
Also, I’ve always thought of cause neutrality as conditional—“if you’re able to pivot, and if you want to do the most good, what should you do?” and this is emphasised in plenty of places. (i.e. Personal fit and meeting people where they are by default.) But if people are taking it as an unconditional imperative then that needs attention.
As technicalities noted, it’s easy to see the merits of these arguments in general, but harder to see who should actually do things, and what they should do.
To summarize the below:
EA orgs already look at a wide range of causes, and the org with most of the money looks at perhaps the widest range of causes
Our community is small and well-connected; new causes can get attention and support pretty easily if someone presents a good argument, and there’s a strong historical precedent for this
People should be welcoming and curious to people from many different backgrounds, and attempts to do more impactful work should be celebrated for many kinds of work
If this isn’t the case now, people should be better about this
If you have suggestions for what specific orgs or funders should do, I’m interested in hearing them!
*****
To quote myself:
Open Philanthropy does more funding and research than anyone, and they work in a broad range of areas. Maybe the concrete argument here is that they should develop more of their shallow investigations into medium-depth investigations?
Rethink Priorities probably does the second-most research among EA orgs, and they also look at a lot of different topics.
Founders Pledge is probably top-five among orgs, and… again, lots of variety.
Beyond those organizations, most research orgs in EA have pretty specific areas of focus. Animal Charity Evaluators looks at animal charities. GiveWell looks at global health and development interventions with strong RCT support. If you point ACE at a promising new animal charity to fund, or GiveWell at a new paper showing a cool approach to improving health in the developing world, they’d probably be interested! But they’re not likely to move into causes outside their focus areas, which seems reasonable.
After all of this, which organizations are left that actually have “too narrow” a focus? 80,000 Hours? The Future of Humanity Institute?
A possible argument here is that some new org should exist to look for totally new causes; on the other hand, Open Philanthropy already does a lot of this, and if they were willing to fund other people to do more of it, I assume they’d rather hire those people — and they have, in fact, been rapidly expanding their research team.
*****
On your example of cancer: Open Philanthropy gave a $6.5 million grant to cancer research in 2017, lists cancer as one of the areas they support on their “Human Health and Wellbeing” page, and notes it as a plausible focus area in a 2014 report. I’m guessing they’ve looked at other cancer research projects and found them somewhat less promising than their funding bar.
Aside from Open Phil, I don’t know which people or entities in EA are well-positioned to focus on cancer. It seems like someone would have to encourage existing bio-interested people to focus on cancer instead of biosecurity or neglected tropical diseases, which doesn’t seem obviously good.
In the case of a cancer researcher looking for funding from an EA organization, there just aren’t many people who have the necessary qualifications to judge their work, because EA is a tiny movement with a lot of young people and few experienced biologists.
The best way for someone who isn’t a very wealthy donor to change this would probably be to write a compelling case for cancer research on the Forum; lots of people read this website, including people with money to spend. Same goes for other causes someone thinks are neglected.
This path has helped organizations like ALLFED and the Happier Lives Institute get more attention for their novel research agendas, and posts with the “less-discussed causes” tag do pretty well here.
As far as I can tell, we’re bottlenecked on convincing arguments that other areas and interventions are worth funding, rather than willingness to consider or fund new areas and interventions for which convincing arguments exist.
*****
Fortunately, there’s good historical precedent here: EA is roughly 12 years old, and has a track record of integrating new ideas at a rapid pace. Here’s my rough timeline (I’d welcome corrections on this):
2007: GiveWell is founded
2009: Giving What We Can is founded, launching the “EA movement” (though the term “effective altruism” didn’t exist yet). The initial focus was overwhelmingly on global development.
2011: The Open Philanthropy Project is founded (as GiveWell Labs). Initial shallow investigations include climate change, in-country migration, and asteroid detection (conducted between 2011 and 2013).
2012: Animal Charity Evaluators is founded.
2013: The Singularity Institute for Artificial Intelligence becomes MIRI
2014: The first EA Survey is run. The most popular orgs people mention as donation targets are (in order) AMF, SCI, GiveDirectly, MIRI, GiveWell, CFAR, Deworm the World, Vegan Outreach, the Humane League, and 80,000 Hours.
To be fair, the numbers look pretty similar for the 2019 survey, though they are dwarfed by donations from Open Phil and other large funders.
Depending on where you count the “starting point”, it took between 5 and 7 years to get from “effective giving should exist” to something resembling our present distribution of causes.
In the seven years since, we’ve seen:
The launch of multiple climate-focused charity recommenders (I’d argue that the Clean Air Task Force is now as well-established an “EA charity” as most of the charities GiveWell recommends)
The rise of wild animal suffering and AI governance/policy as areas of concern (adding a ton of depth and variety to existing cause areas — it hasn’t been that long since “AI” meant MIRI’s technical research and “animal advocacy” meant lobbying against factory farming when those things came up in EA)
The founding of the Good Food Institute (2016) and alternative protein becoming “a thing”
The founding of Charity Entrepreneurship and resultant founding of orgs focused on tobacco taxation, lead abatement, fish welfare, family planning, and other “unusual” causes
Open Philanthropy going from a few million dollars in annual grants to in the neighborhood of ~$200 million. Alongside “standard cause area” grants, 2021 grants include $7 million for the Centre for Pesticide Suicide Prevention, $1.5 million for Fair and Just Prosecution, and $0.6 million for Abundant Housing Massachusetts (over two years — but given that the org has a staff of one person right now, I imagine that’s a good chunk of their total funding)
Three of the ten highest-karma Forum posts of all time (1, 2, 3) discuss cause areas with little existing financial support within EA
I’d hope that all this would also generate a better social environment for people to talk about different types of work — if not, individuals need better habits.
*****
I think that any of these causes could easily get a bunch of interest and support if someone published a single compelling Forum post arguing that putting some amount of funding into an existing organization or intervention would lead to a major increase in welfare. (Maybe not wildlife conservation, because it seems insanely hard for that to be competitive with farmed animal welfare, but I’m open to having my mind blown.)
Until that post exists (or some other resource written with EA principles in mind), there’s not much for a given person in the community to do. Though I do think that individuals should generally try to read more research outside of the EA-sphere, to get a better sense for what’s out there.
If someone is reading this and wants to try writing a compelling post about a new area, I’d be psyched to hear about it!
Or, if you aren’t sure what area to focus on, but want to embrace the challenge of opening a new conversation, I’ve got plenty of suggestions for you (starting here).
*****
I think that very few people in this community would disagree, at least in the example you’ve put forth.
*****
This is where I agree with you, in that I strongly support “letting people feel that what they want to work on is fine” and “not making people feel apologetic about what they do”.
But I’m not sure how many people actually feel this way, or whether the way people respond to them actually generates this kind of feeling. My experience is that when people tell me they work on something unusual, I try to say things like “Cool!” and “What’s that like?” and “What do you hope to accomplish with that?” and “Have you thought about writing this up on the Forum?” (I don’t always succeed, because small talk is an imperfect art, but that’s the mindset.)
I’d strongly advocate for other people in social settings also saying things like this. Maybe the most concrete suggestion from here is for EA groups, and orgs that build resources for them, to encourage this more loudly than they do now? I try to be loud, here and in the EA Newsletter, but I’m one person :-(
*****
I think that the EA community should be a big tent for people who want to do a better job of measuring and increasing their impact, no matter what they work on.
I think that EA research should generally examine a wide range of options in a shallow way, before going deeper on more promising options (Open Phil’s approach). But EA researchers should look at whatever seems interesting or promising to them, as long as they understand that getting funded to pursue research will probably require presenting strong evidence of impact/promise to a funder.
I think that EA funding should generally be allocated based on the best analysis we can do on the likely impact of different work. But EA funders should fund whatever seems interesting or promising to them, as long as they understand that they’ll probably get less impact if they fund something that few other people in the community think is a good funding target. (Value of learning is real, and props to small funders who make grants with a goal of learning more about some area.)
I think that EA advice should try to work out what the person being advised actually wants — is it “have an impactful career in dental hygiene promotion”, or “have an impactful career, full stop”? Is it “save kids from cancer”, or “save kids, full stop”?
And I think we should gently nudge people to consider the “full stop” options, because the “follow your passions wherever they go” argument seems more common in the rest of society than it ought to be. Too many people choose a cause or career based on a few random inputs (“I saw a movie about it”, “I got into this lab and not that lab”, “I needed to pay off my student loans ASAP”) without thinking about a wide range of options first.
But in the end, there’s nothing wrong with wanting to do a particular thing, and trying to have the most impact you can with the thing you do. This should be encouraged and celebrated, whether or not someone chooses to donate to it.
Thank you, Aaron, for taking the time to write this detailed and thoughtful comment on my post!
I’ll start by saying that I pretty much agree with everything you say, especially in your final remarks: that we should be really receptive to what people actually want and advise them accordingly, and maybe try to gently nudge them toward a more open-minded, general-impact-oriented approach (but not try to force it on them if they don’t want it).
I also totally agree that most EA orgs are doing a fantastic job at exploring diverse causes and ways to improve the world, and that the EA movement is very open-minded to accepting new causes in the presence of good evidence.
To be clear, I don’t criticize specific EA orgs. The thing I do criticize is pretty subtle, and refers more to the EA community itself—sometimes to individuals in the community, but mostly to our collective attitude and the atmospheres we create as groups.
When I say “I think we need to be more open to diverse causes”, it seems that your main answer is “present me with good evidence that a new cause is promising and I’ll support it”, which is totally fair. I think this is the right attitude for an EA to have, but it doesn’t exactly address what I’m alluding to. I’m not asking EAs to start contributing to new unproven causes themselves, but rather to be open to others contributing to them.
I agree with you that most EAs would not confront a cancer researcher and accuse her of doing something un-EA-like (and I presume many would even be kind and approach her with curiosity about the motives for her choice). But in the end, I think it is still very likely she would nonetheless feel somewhat judged. Even if every person she meets at EA Global nudges her only very gently (“Oh, that’s interesting! So why did you decide to work on cancer? Have you considered pandemic preparedness? Do you think cancer is more impactful?”), those repeated comments can accumulate into a strong feeling of unease. To be clear, I’m not blaming any of the imaginary people who met the imaginary cancer researcher at the imaginary EAG conference for having done anything wrong, because each one of them tried to be kind and welcoming. It’s only their collective action that made her feel off.
I think the EA community should be more welcoming to people who want to operate in areas we don’t consider particularly promising, even if they don’t present convincing arguments for their decisions.
I like this example! It captures something I can more easily imagine happening (regularly) in the community.
One proposal for how to avoid this collective action problem would be for people to ask the same sorts of questions, no matter what area someone works on (assuming they don’t know enough to have more detailed/specific questions).
For example, instead of:
Have you considered X?
Do you think your thing, Y, is more impactful than X?
You’d have questions like:
What led you to work on Y?
And then, if they say something about impact, “Were there any other paths you considered? How did you choose Y in the end?”
What should someone not involved in Y know about it?
What are your goals for this work? How is it going so far?
What are your goals for this event? (If it’s a major event and not e.g. a dinner party)
These should work about equally well for people in most fields, and I think that “discussing the value/promise of an area” conversations will typically go better than “discussing whether a new area ‘beats’ another area by various imperfect measures”. We still have to take the second step at some point as a community, but I’d rather leave that to funders, job-seekers, and Forum commentators.
Depends on the context.
Plenty of people in the EA space are doing their own thing (disconnected from standard paths) but still provide interesting commentary, ask good questions, etc. I have no idea what some Forum users do for work, but I don’t feel the need to ask. If they’re a good fit for the culture and the community seems better for their presence, I’m happy.
The difficulty comes when certain decisions have to be made — whose work to fund, which people are likely to get a lot of benefit from EA Global, etc. At that point, you need solid evidence or a strong argument that your work is likely to have a big impact.
In casual settings, the former “vibe” seems better — but sometimes, I think that people who thrive in casual spaces get frustrated when they “hit a wall” in the latter situations (not getting into a conference, not getting a grant, etc.)
In the end, EA can’t really incorporate an area without having a good reason to do so. I’d be satisfied if we could split “social EA” from “business EA” in terms of how much evidence and justification people are asked for, but we should be transparent about the difference between enjoying the community and looking for career or charity support.
I like your suggestions for questions one could ask a stranger at an EA event!
About “social EA” vs. “business EA”, I think I’d make a slightly different distinction. If you ask for someone else’s (or some org’s) time or money, then of course you need to come up with good explanations for why the thing you are offering (whether it is your employment or some project) is worthwhile. It’s not even a unique feature of EA. But, if you are just doing your own thing and not asking for anyone’s time or money, and just want to enjoy the company of other EAs, then this is the case where I think the EA community should be more welcoming and be happy to just let you be.
Thank you for putting this together, I strongly agree with many of these points, especially the point of independent thinking.
I think the strength of this post’s argument varies across the different “services” that the EA movement can provide individuals. For instance, for someone mid-career who comes to EA in order to rethink their career path, there would be much more value in a more divergent EA movement that is focused on the EA toolkit.
Yet, that wouldn’t be the same for someone who looks for donation advice, for which we’d rather put much more focus on very few cause areas and donation opportunities.
That might also be true for someone early in their career who is looking for career advice, but that would depend on how much independent thinking they’re willing to do, because I strongly agree that this is missing. I’ll add a quote from a Q&A with Will MacAskill (Aug 2020) supporting that:
I’m personally quite worried that the EA movement would end up filled with people who are fans of a certain cause area without being neutral about their cause. EA shouldn’t be about cheering for certain cause areas; it should be about prioritizing opportunities to do good, and not communicating this well enough, internally and externally, could be very dangerous for the movement in the long term and would make us miss a lot of our potential.
I’ve noticed that there are quite a few downvotes for this post, and not enough criticizing comments. I’d be happy to hear others’ opinions on this subject!
I’m adding another suggestion to the list: I think that instead of removing emphasis from the movement’s top causes, we might want to put an equal emphasis on the EA toolkit.
I believe that if you asked all highly active EAs “what tools do EAs use to prioritize opportunities to do good?”, you’d get very different answers, whereas I would hope that everyone could easily recall a closed set of actionable guiding principles.
I think the obvious challenge here is how to be more inclusive in the ways you suggest without destroying the thing that makes EA valuable. The trouble as I see it is that you only have 4-5 words to explain an idea to most people, and I’m not sure you can cram the level of nuance you’re advocating for into that for EA.
I agree that when you first present EA to someone, there is a clear limitation on how much nuance you can squeeze in. For the sake of being concrete and down to earth, I don’t see harm in giving examples from classic EA cause areas (giving the example of distributing bed nets to prevent malaria as a very cost-effective intervention is a great way to get people to start appreciating EA’s attitude).
The problem I see is more in later stages of engagement with EA, when people already have a sense of what EA is but still get the impression (often unconsciously) that “if you really want to be part of EA then you need to work on one of the very specific EA cause areas”.
Also relevant: The fidelity model of spreading ideas.
Great piece! FYI, I wrote an essay with a similar focus and some of the same arguments about five years ago called All Causes are EA Causes. This article adds some helpful arguments, though, in particular the point about the risk of being over-identified with particular cause areas undermining the principle of cause neutrality itself. I continue to be an advocate for applying EA-style critical thinking within cause areas, not just across them!
Thank you for bringing this post to my attention, I really like it! We appear to make similar arguments, but frame them quite differently, so I think our two posts are very complementary.
I really like your framing of domain-specific vs. cause-neutral EA. I think you also do a better job than me in presenting the case for why helping people become more effective in what they already do might be more impactful than trying to convince them to change cause area.
Hi,
I think I strongly agree with this, and I expect most EAs do too.
My interpretation is that treating EA as a normative, prescriptive guide for life doesn’t seem right. Indeed, if anything, there’s evidence that EA doesn’t really do this job well, or maybe even substantively neglects it while appearing to do it, in a pernicious way. From a “do no harm” perspective, addressing this is important. This seems like a “communication problem” (something that seems historically undervalued in EA and other communities).
This is a really different thought from your others above, and I want to comment further to make sure I understand.
While agreeing with the essence, I think I differ and I want to get at the crux of the difference:
Overall, I think “using data”, cost-effectiveness analysis, and measurement and valuation aren’t far from mainstream in major charities. To get a sense of this: I have spoken to (and worked with?) leaders in, say, environmental movements, and they specifically “talk the talk”; for example, there are dedicated grants for “data science”-like infrastructure. However, while nominally trying, many of these charities don’t succeed; the reason is an immense topic beyond the scope of this comment or post.
But the point is that it seems hard to make the methodological or leadership changes that motivate the dissemination you propose.
Note that it seems very likely we would agree with and trust any EA who reported that a particular movement or cause area would benefit from better methods.
However, actually effecting change is really difficult.
To make this tangible, imagine trying to get Extinction Rebellion to use measurement and surveys to regularly interrogate their theory of change.
For another example, the leadership and cohesion of many movements can be far weaker than they appear. Together with the fact that applying rigorous reasoning might foreclose large sections of their activity or initiatives, this can make implementation impractical.
While rational, data-driven and reasoned approaches are valuable, it’s unclear if EA is the path to improving this, and that is a headwind to your point that EAs should disseminate widely. I guess the counterpoint would be that focus is valuable, which supports concentrating on cause areas in the usual sense that you argue against.
Thank you for sharing your thoughts!
About your second point, I totally agree with the spirit of what you say, specifically that:
1. Contrary to what might be implied from my post, EAs are clearly not the only ones who think that impact, measurement and evidence are important, and these concepts are also gaining popularity outside of EA.
2. Even in an area where most current actors lack the motivation or skills to act in an impact-oriented way, there are more conditions that have to be met before I would deem it high-impact to work in this area. In particular, there need to be some indications that the other people acting in this area would be interested or persuaded to change their priorities once evidence is presented to them.
My experience working with non-EA charities is similar to yours: while they also talk about evidence and impact, it seems that in most cases they don’t really think about these topics rigorously. I’ve found that in most cases it’s not very helpful to have this conversation with them, because, in the end, they are not really open to changing their behavior based on evidence (I think it’s more lip service; charities say they want to do impact evaluation because it’s becoming cool and popular these days). But in some cases (probably a minority of non-EA charities), there is genuine interest in learning how to be more impactful through impact evaluation. In these cases I think that having EAs around might be helpful.
Thanks for the thoughtful reply.
I think we probably agree that we should be cautious about steering EAs toward charities or cause areas where the culture doesn’t seem welcoming. Especially given the younger age of many EAs, and the lower income and career capital some charities offer, this could be a very difficult experience or even a trap for some people.
I think I have updated based on your comment. It seems that having not just acceptance but also active discussion or awareness of “non-canonical” cause areas seems useful.
I wonder: to what degree would your post’s concerns be addressed if new cause areas were substantively explored by EAs as candidates to add to the “EA roster”? (Even if few cause areas were ultimately “added” as a result, e.g. because they aren’t feasible.)
I totally agree with you that many charities and causes can be a trap for young EAs and put their long-term careers in danger. In some cases I think this is also true of classic EA cause areas, if people end up doing work that doesn’t really fit their skill set or doesn’t develop their career capital. I think this is pretty well acknowledged and discussed in EA circles, so I’m not too worried about it (with the exception, maybe, that one of the possible traps is to lock someone into career capital that only fits EA-like work, thereby blocking them from working outside of EA).
As to your question, if new cause areas were substantively explored by EAs, that would mitigate some of my concerns, but not all of them. In particular, besides having community members theoretically exploring diverse causes and writing posts on the forum summarizing their thinking process (which is beneficial), I’d also like to see some EAs actively trying to work in more diverse areas (what I called the bottom-up approach), and I’d like the greater EA community to be supportive of that.
Thank you for the write up!
Thank you for posting. I’m sorry to hear that some people in the community have been made to feel excluded or “not EA enough”, and agree with ideas already shared above about how the community can behave better.
I generally agree with Aaron’s comments above, and just had a few points that I don’t think people have already made:
--
I agree that we shouldn’t just think about other cause areas, but jumping straight into working on them is at the other extreme, and pretty costly. I think people wanting to assess other cause areas that seem promising should research their history, accomplishments, failures, etc. and talk to some of the key people working in those areas. I hope and expect that people working on cause prioritization are in fact doing something like this.
--
On identifying new potentially high-impact causes: I agree, and I think this is happening. For causes which we’re pretty sure are generally not as high impact, I think that EAs will most likely do more good by working within priority causes rather than working to “lift up” lower-priority causes. This is based on my understanding that causes can vary in effectiveness by ~100x or more. There are surely exceptions, though, where a specific intervention in a “low priority” cause area could have a huge impact, and it’d be exciting if we found more of those opportunities.
--
Minor point here, but:
FWIW, 80k is not in that business. They say, “Our advice is focused on people who have the good fortune to have options for how to spend their career, and who want to make helping the world one of their main goals. We especially focus on college students and graduates living in rich countries like the U.S. or U.K. who want to take an analytical approach to doing good.”
Thank you for writing down these good counterarguments.
About your first and second points, that it’s wasteful to have someone’s career dedicated to a less promising cause area: I generally agree, but with a few caveats (which, for the most part, just reiterate and rephrase points already made in my post):
I agree there’s value in considering whole causes as more or less promising on average, but I think that this low-resolution view overlooks a lot of important nuance, and that a better comparison should consider specific opportunities that an individual has access to. I think it is entirely plausible that a better opportunity would actually present itself in a less-promising-on-average cause area.
The EA community’s notion of what constitutes a promising or not-so-promising cause area could be wrong, and there is value in challenging the community’s common wisdom. I agree with your point that it’s better to assess the effectiveness of an opportunity before dedicating your entire career to it, and that it’s a good idea to take a middle-ground approach between merely thinking about it on the one extreme and immediately committing the next 40 years of your career to it on the other. I think that trying non-conventional ideas for a short period of time (e.g. through a one-month side project or an internship program) and then reporting back to the community could be very valuable in many cases, and could also help people learn more about themselves (what they like to do and are good at).
I would not urge people who are very cause neutral and EA-minded to work on a mainstream non-EA cause like curing cancer (but I would also not completely rule that out, mainly due to the “opportunity perspective” mentioned in point #1). But for people who are not that cause neutral, I would try to be more accepting of their choice than I feel the EA community currently is. As I wrote in my discussion with Aaron, I see this post as being more about “we should be more accepting of non-EA causes” than “we should encourage non-EA causes”.
About your last comment: I really appreciate 80k’s directness about the scope of their activity (and their being non-territorial and encouraging of other orgs that target populations 80k doesn’t see as its main audience). As an entire community (one that transcends the scopes of specific orgs), I think we absolutely should be in the business of giving career advice to a wider public.
Agree on all points :)
And thank you, again, for bringing up this issue of acceptance.
A minor point, but I think this overestimates the extent to which a small number of people with an EA mindset can help in crowded cause areas that lack such people. Like, I don’t think PETA’s problem is that there’s nobody there talking about impact and effectiveness. Or rather, that is their problem, but adding a few people to do that wouldn’t help much, because they wouldn’t be listened to. The existing internal political structures and discourse norms of these spaces aren’t going to let these ideas gain traction, so while EAs in these areas might be able to be more individually effective than non-EAs, I think it’s mostly going to be limited to projects they can do more or less on their own, without much support from the wider community of people working in the area.
I totally agree. For an impact-oriented individual to contribute significantly in an area, there has to be some degree of openness to good ideas in that area, and if it is likely that no one will listen to evidence and reason, then I’d tend to advise EAs to stay away. I think there are such areas where EAs could contribute and be heard. And the more mainstream the EA mindset becomes, the more such places will exist. That’s one of the reasons why we really should want EA to become more mainstream, and why we shouldn’t hide ourselves from the rest of the world by operating in such a narrow set of domains.
Love this article! As someone new to EA, I appreciate this way of thinking, but I’m not willing to give up on some causes that I care about more due to life experiences than any kind of logic. If I didn’t give to them, the alternative wouldn’t be to give more to GiveWell; the alternative would be to not give that money at all.
One downside of this is that without some cause curation, you could end up with zero-sum games where charities or causes are in opposition: encouraging more effectiveness from feminists and MRAs, nuclear power advocates vs. Greenpeace, gun control charities and the NRA, etc.
That’s not to say a shift of emphasis away from central cause areas would be bad, just that it shouldn’t be an uncritical one.