Should the EA community be cause-first or member-first?

It’s really hard to do community building well. Opinions on strategy and vision vary a lot, and we don’t yet know enough about what actually works and how well. Here, I’ll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.[1]


Will MacAskill’s proposed Definition of Effective Altruism is composed of[2]:

  1. An overarching effort to figure out the best opportunities to do good.

  2. A community of people who work to bring more resources to these opportunities, or work on them directly.

This suggests a “cause-first” community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well.

CEA’s definition and strategy seem to be mostly along these lines:

> Effective altruism is a project that aims to find the best ways to help others, and put them into practice.
>
> It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.

> Our mission is to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.


Let’s try out a different definition for the EA community, taken from CEA’s guiding principles[3]:

> What is the effective altruism community?
>
> The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.

This, to me, suggests a subtly different vision and strategy for the community: one that is focused, first of all, on the people who live by EA principles. Such a “member-first” strategy could have a supporting infrastructure focused on helping individual members live according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal[4][5].

What’s the difference?

I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two.

To give a more intuitive grasp of how these strategies differ, I’ll list some examples and caricatures of the distinction, in no particular order:

| Leaning cause-first | Leaning member-first |
| --- | --- |
| Keep EA Small and Weird | Big Tent EA |
| Current EA Handbook (focus on introducing major causes) | 2015’s EA Handbook (focus on core EA principles) |
| 80,000 Hours | Probably Good |
| Wants more people doing high-quality AI Safety work, regardless of their acceptance of EA principles | Wants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to |
| Targeted outreach to students at high-ranking universities | Broad outreach with diverse messaging |
| Encourages people to change occupations to focus on the world’s most pressing problems | Encourages people to use the tools and principles of EA to do more good in their current trajectory |
| Risk of people not finding useful ways to contribute to top causes | Risk of not enough people wanting to contribute to the world’s top causes |
| The community as a whole leads by example, taking in-depth prioritization research with the proper seriousness | Each individual focuses more on how to implement EA principles in their own lives, taking their personal worldview and situation into account |
| Community members delegate to high-quality research; they think less for themselves, but more people end up working on higher-impact causes | Community members think for themselves, which improves their ability to do more good, but they make more mistakes |
| The case of the missing cause prioritization research, Nobody’s on the ball on AGI alignment, and many amazing object-level posts making progress on particular causes | The case against “EA cause areas”, EA is three radical ideas I want to protect, “Big tent” effective altruism is very important (particularly right now), and many posts where people share their own decisions and dilemmas |

Personal takeaways

I think the EA community is leaning toward “cause-first” as its main overarching strategy. That could be the correct call. For example, my guess is that much of EA’s success in promoting highly neglected causes[6] came from community builders and community-focused organizations concentrating on spreading the relevant ideas to many promising people and helping them work in these areas.

However, the “cause-first” approach has important downsides, such as a possible lock-in of the main causes and less diversification within the community. Many of the community’s problems may be explained by this decision.

It is a decision. For example, EA Israel, particularly as led by @GidiKadosh, has focused more on the “member-first” approach. This has its own downsides: only in the past year or so did we really start building a network of people working in AI Safety, and we are still very weak on the animal welfare front.

I’m not sure which approach is best, and very likely we can get the best of both worlds most of the time. However, I am fairly confident that being more mindful of this particular dimension of community building is important, and I hope this post is a small, helpful step toward understanding how to do community building better.

Thanks to the many people I met at EAG and discussed this topic with! Crystallizing this idea was one of the key outcomes of the conference for me.

  1. ^

    I make the two main examples somewhat extreme to sharpen the distinction, but most actual opinions are a blend of the two.

  2. ^

    I’ve taken some liberty in paraphrasing the original definition to make my claims clearer. This example doesn’t mean that Will MacAskill is a proponent of such a “cause-first” strategy.

  3. ^

    These haven’t been updated much since 2018, so I’m not sure how representative they are. In any case, I’m using this definition to articulate a possible strategy.

  4. ^

    By this, I mean a future where principles very close to the current main “tenets” of EA are widespread and considered common sense.

  5. ^

    Maybe the focus on “making the principles of EA more universal” is more important than the focus on the community itself, and this section should be called something like “ideas-first”. I now think these two notions should be distinguished, as they represent different goals and strategies, but I’ll leave it to others (maybe future Edo) to articulate this clearly if this post proves useful.

  6. ^

    For example, x-risks, wild-animal suffering, and empirically supported GH&D interventions.