CEA will continue to take a “principles-first” approach to EA

Introduction

I’m Zach, the new CEO of the Centre for Effective Altruism (CEA). As I step into my role, I want to explain the principles that I think make EA special and share how CEA will continue to promote them.

In this post, I will:

  • Highlight the principles that I think are core to EA, and explain why CEA will continue to promote them above and beyond any single cause area or set of cause areas.

  • Explain what being principles-first means in practice for CEA[1].

  • Explain how encouraging people to act on EA principles can still lead to some prioritization decisions between causes, how CEA has navigated those decisions in the past, and what factors influence those decisions.

  • Share a little bit about my background and how I’ve personally engaged with these principles.

CEA will continue a “principles-first” approach to EA

In my role at CEA, I embrace an approach to EA that I (and others) refer to as “principles-first”. This approach doubles down on the claim that EA is bigger than any one cause area. EA is not AI safety; EA is not longtermism; EA is not effective giving; and so on. Rather than recommending a single, fixed answer to the question of how we can best help others, I think the value of EA lies in asking that question in the first place and in the tools and principles EA provides to help people approach it.

Four core principles that I and others think characterize the EA approach to doing good are[2]:

  • Scope sensitivity: Saving ten lives is more important than saving one, and saving a thousand lives is a lot more important than saving ten.

  • Scout mindset: We can better help others and understand the world if we think clearly and orient towards finding the truth, rather than trying to defend our own ideas and being unaware of our biases.

  • Impartiality: With the resources we choose to devote to helping others, we strive to help those who need it the most without being partial to those who are similar to us or immediately visible to us. (In practice, this often means focusing on structurally neglected and disenfranchised groups, like people in low-income countries, animals, and future generations[3].)

  • Recognition of tradeoffs: Because we have limited time and money, we need to prioritize when deciding how we might improve the world.

CEA has historically taken a principles-first approach, and I don’t expect to make big changes to this aspect of CEA’s mission[4]. That said, I recognize that being principles-first may mean something different to different people, and it may mean something different to me than it did to CEA’s previous CEO. In talking with staff who have been at CEA longer than I have, I didn’t find consensus on how principles-first was interpreted. Rather than attempt a comprehensive history in this post, I want to focus on some specific actions CEA has taken in the recent past that reflect this approach, clarify how I interpret a principles-first approach, and note where I still feel uncertain about how CEA will promote EA principles in the future.

Why principles-first?

CEA will continue to promote these core principles and nurture a community based on them. While we’ll sometimes prioritize between causes (see examples and reasoning), cause-specific work won’t be CEA’s main focus.

I think EA principles are impactful and worth promoting, and CEA is one of the best-suited organizations to promote them[5]. I also think cause-specific field-building can be impactful, and I don’t feel confident in sweeping claims about how either a principles-first or cause-specific approach is much better than the other. I think it makes sense for organizations trying to do good via community-building and field-building, like CEA, 80,000 Hours, and (to some extent) Open Philanthropy, to take a variety of approaches in a community-building portfolio. We can argue about the specific allocations across that portfolio—and we have—but it seems extremely likely to me that promoting core EA principles and nurturing a community of people who take those principles seriously should be part of the portfolio.

Here are some benefits of the principles-first approach:

  • Promoting EA principles has inspired and empowered thousands of people to be more altruistic and impactful with their careers and donations. EA’s core principles have served as a beacon for many sincere, talented people to pivot significant energy towards doing good in the world. It turns out that “how can I do more good with my career?” and “how can I do more good with my donations?” are questions that people actually ask. I worry that outreach only for specific causes would never catch the eye of people who are asking the big-picture questions that EA’s frameworks and principles try to help answer. More generally, principles that can draw thousands of people to common ground for the sake of helping others are rare, hard to reproduce, and worth protecting.

  • EA principles, and a community of talented people who take them seriously, are adaptable. Our knowledge of the world and the environment around us will inevitably change, and it’s valuable to have a group of people who can reprioritize as we learn more. If we lose a focus on EA principles, I think we risk losing our ability to notice what others may be missing. I think the EA community’s scout mindset and attention to neglected problems are behind some of our most impactful achievements, such as prioritizing campaigns for farmed animal welfare and an early focus on pandemics and AI safety. Looking ahead at possible wild futures, I think a community focused purely on making AI safe, for example, would be significantly less capable of tackling other potential challenges posed by emerging technology, such as post-AGI governance or digital sentience.

  • Promoting principles that draw people from many causes allows for a productive cross-pollination of ideas and changing of minds. Drawn together by a wish to help others, EA spaces can enable connections between people who wouldn’t meet otherwise, but who can benefit from one another. For example, AI safety advocates have sought advice from experienced animal-welfare advocates to inform potential approaches to regulation and campaigns for labs to voluntarily implement safety protocols[6]. It seems unlikely these groups would have collaborated without EA. I also think the epistemics of the community benefit from people with different backgrounds meeting while sharing principles like scout mindset. Seeing people around us hold the same values but come to different conclusions invites us to challenge our own cause prioritization in a way that, say, attracting people to work on malaria purely via anti-malaria campaigns doesn’t. In a testament to the impact of a commitment to a scout mindset, hundreds of people have engaged with others aspiring to do good, pressure-tested one another’s ideas, and pivoted their work’s focus as a result[7].

With that being said, I don’t want to make it seem like “what should CEA do?” or “what should EA be?” are questions with obvious answers.

There are reasons to think that it could be better to shift more resources to specific causes or, for example, advocate for greater splintering of different parts of EA:

  • Existing problems with the EA community: There are criticisms of the EA community that may be warranted. For example, I think criticisms about diversity issues in EA, echo chambers, and conflicts of interest have merit. If you believe these shortcomings can’t or won’t be addressed, that may be an argument to shift towards focusing on specific causes instead of trying to create an alternative community focused on the same principles[8].

  • Downsides of interconnectedness: Having a large interconnected community may create shared risk between projects and people that would otherwise not be tied together. For example, a scandal involving someone working on AI safety can end up harming the credibility of animal welfare activists, and someone focused on AI may be criticized for not being vegan.

  • Concerns about how some people approach these principles (e.g. maximization): Other arguments against focusing on principles-first EA include concerns that EA can lead to a perilous focus on maximization or encourage unsustainable and unhealthy dedication to work.

  • Benefits of a narrow focus: It may just be the case that certain causes are much more important to work on directly. And if so, there are benefits to focusing on one cause to build expertise and relevant relationships rather than emphasizing principles.

I’m sympathetic to these concerns about a principles-first approach and the case for spending more resources on building specific fields. In particular, I believe there are real concerns about the EA community. But I believe we should improve EA, not abandon it. I don’t see the community or core EA principles as fatally flawed. I also don’t see a clear high-impact, non-EA community that exists without flaws. I want to stay open to criticisms like those above and guide CEA to improve what it can, and I’m grateful to others who also contribute to making our community better.

Overall, I think the benefits of a principles-first approach outweigh the concerns. I feel good about honestly saying, “Yes, cause-specific efforts can be very valuable, but so can a principles-first approach. Both should exist, and I’m focusing on the latter.”

What exactly does principles-first mean for CEA?

CEA’s mission is to nurture a community of people who are thinking carefully about the world’s most pressing problems and taking impactful action to solve them.

We currently enact our mission via five main efforts[9]. We think these programs all promote EA principles more than they promote any specific answer to how to do the most good, and we’ll continue to prioritize this approach in the future.

Below, I list each program alongside examples that demonstrate its commitment to principles:

  • Events: We run conferences like EA Global and support community-organized EAGx conferences. We also run some bespoke events for subject matter experts (see more below).

    • EA Global and EAGx admissions weigh how well applicants understand EA principles and put those principles into practice. This means we end up accepting applicants from a range of causes[10], including non-standard EA causes if the applicant can make the case for their work’s impact.

    • We also platform a lot of cause-agnostic content, like cause-prioritization workshops and skill- or career-stage-based meetups.

  • Groups: We fund and advise hundreds of local effective altruism groups, ranging from university groups to national groups. We also run virtual introductory EA programs.

    • Our Groups program supports EA groups that engage with members who prioritize a variety of causes.

    • Our current training for facilitators of the intro program emphasizes framing EA as a question and not acting as if there is a clear answer.

  • Online: We build and moderate the EA Forum, an online hub for discussing the ideas of effective altruism. We also produce the Effective Altruism Newsletter.

    • We don’t approve or reject EA Forum posts based on cause prioritization, and we curate content on the EA Forum and in the EA Newsletter that is relevant to a variety of causes.

    • The EA Forum runs events like Career Conversations Week and the Donation Election, which encourage engagement with EA principles and don’t presuppose an answer.

  • Community Health: We aim to prevent and address interpersonal and community problems that can prevent community members and projects from doing their best work.

    • This work supports individuals, projects, and organizations across the EA space and across cause areas.

  • Communications: We work to communicate about EA principles, ideas, and work with a variety of audiences and stakeholders. This involves working with the media, advising and assisting communicators in the EA community, and supporting the creation of content about EA.

    • We support communications at organizations across cause areas and dedicate part of our content focus to highlighting EA principles.

Sometimes we’ll prioritize some causes over others

While we want CEA’s work to be principles-first, I don’t think it makes sense for CEA’s work to be principles only. Part of what makes EA special is that it goes beyond a group of people thinking about how to do good—it’s a group of people doing good. We want to encourage a journey from learning about EA principles to applying these principles to concrete problems. And insofar as we introduce concrete problems, it’s inevitable that we run into tricky questions about what causes we prioritize.

Moreover, I think there are ample reasons to want CEA to be an ally for people working directly on priority causes, even as we continue to have a principles-first focus. Both approaches emphasize solving problems that can save or improve the lives of people and animals. We have a lot to learn from cause-area experts, whether they explicitly engage with EA or not, about what interventions are most promising in their field, what talent and projects would help the most, and in what ways we could harm the field. And cause-area experts can benefit by conveying their ideas to a wider audience and attracting donations and talented people to work on important issues. I worry that in the past the EA community has been too insular and perhaps dismissive of non-EA expertise, and I’d be excited to see more humility in the future.

Cause prioritization examples

In light of CEA deciding not to focus solely on principles, I’ll give some examples of the cause-prioritization decisions CEA has made recently:

  • EAGs are an opportunity for attendees to learn more about specific causes. How should we distribute object-level sessions across cause areas?

    • For EA Globals in 2023, 33% of our content on the three main stages[11] ended up covering cross-cause issues (growing effective altruism, cause prioritization, skills, etc.). Of the cause-specific content, 64% was focused on existential risk reduction, 15% on animal welfare, and 21% on global health and development.

  • CEA has the events team with the most capacity in the EA ecosystem, which means we can enable high-value events that object-level experts either couldn’t run or couldn’t run nearly as well without us. If we think cause-specific events are more valuable than another ‘meta EA’ event, which cause-area specific events should we support?

    • Our Partner Events Team has supported events like two Summits on Existential Security and an Effective Giving Summit.

    • We also experimented with an EAG in the Bay Area focused on Global Catastrophic Risks[12].

  • After introducing EA principles in the EA intro program, we want to highlight concrete problems in the world to ground the application of EA principles and emphasize the value of actually doing things. Which areas should we spotlight?

    • In the EA intro program syllabus, the first three weeks explain differences in impact via global health and development examples and radical empathy via animal welfare readings. The next three weeks explain the “most important century” thesis, longtermism, and risks from AI. The final weeks emphasize the importance of thinking for yourself, less common causes, and putting these ideas into practice.

The examples above demonstrate that when CEA has prioritized between causes, AI safety has received more attention than other areas. While I’ve only been full-time in this role for a few months[13] and don’t yet have a clear perspective on what I think the “correct” balance of attention should be between specific causes going forward, I do expect AI safety to continue to receive the most attention (though I wouldn’t be surprised if the relative weighting of causes looked different). At the same time, I sometimes worry this can go too far, and I expect we’ll experiment with different approaches. It is important to me that people engaging across all core-EA causes can find value and feel like their work is valued when they engage with the EA community.

Factors that shape CEA’s cause prioritization

To understand how CEA prioritizes (and, for example, why AI safety currently receives more attention than other specific causes), here are some of the factors that weigh into our cause prioritization (insofar as we’re not just promoting cross-cause tools)[14].

  • The opinions of CEA staff: I want to actively encourage CEA staff to be thoughtful about their own cause prioritization and have opinions about how they can accomplish the most good with the time they’re spending on their career at CEA. CEA staff’s constant judgment calls influence CEA’s programs. An informal 2023 survey of CEA staff suggests that staff, on average, thought that there were around five key priorities, with mitigating existential risk selected the most, followed by AI existential security. We also shared a post about where CEA staff donated in 2023.

  • Our funders: The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies. While I don’t think it’s necessary for us to share the exact same priorities as our funders, I do feel there are some constraints based on donor intent, e.g. I would likely feel it is wrong for us to use the GCRCB team’s resources to focus on a conference that is purely about animal welfare. There are also practical constraints insofar as we need to demonstrate progress on the metrics our funders care about if we want to be able to successfully secure more funding in the future. I’m interested in doing more to support a broader array of causes (e.g. running more events targeted at animal welfare or global health and development), though I expect there to be some barriers in terms of different willingness-to-pay for community building from different funders, team bandwidth, and in some cases staff interest. Over time, I’d like to see CEA diversify its funding to better reflect a principles-first approach.

    • With that being said, there have been significant changes in staffing and funding practices at both Open Philanthropy and CEA, and I think it’s uncertain how Open Philanthropy will approach funding CEA in the future (e.g. if the funding continues to come from one grantmaking portfolio or if it will be spread out). We expect this to be an active topic of conversation before our next funding cycle.

  • The views of people who have thought a lot about cause prioritization: CEA has historically shown some deference to heavily-engaged people who serve key roles in organizations that embrace EA principles and relevant cross-cause or cause-specific experts[15]. I feel significant ambivalence about this approach. On the one hand, CEA doesn’t have deep in-house expertise in cause prioritization, and I think deferring to an aggregate of well-informed experts can represent an appropriate degree of humility. On the other hand, I worry that this creates an echo chamber. For example, people could point to surveys from the Meta Coordination Forum to justify focusing more on existential risk, which then means there’s a disproportionate emphasis placed on existential risk when inviting attendees to future Meta Coordination Forums, creating a self-reinforcing cycle. Ultimately, I don’t think resolving how much weight to put on this factor is essential, because both this point and the ones mentioned above suggest CEA will emphasize existential risks more than other causes.

    • Some argue that we should instead mirror back the cause prioritization of the community as a whole, e.g. based on community surveys. I think this is wrong. It presupposes an equal weighting of views between people who have not necessarily engaged equally with the question of which causes are worth prioritizing (and, as discussed above, the cause prioritization of people engaging with EA is liable to change). It also presupposes that CEA exists solely to serve the EA community. I view the community as CEA’s team, not its customers. While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members (though oftentimes the two are intertwined).

As the factors above demonstrate, we care about (and are incentivized to care about) prioritizing existential risk reduction work when we need to prioritize between causes[16]. But prioritizing between causes isn’t at the heart of CEA’s mission, which is to promote EA principles and nurture the community of people who take those ideas seriously. As a result, you can still expect the Forum, groups, and events to support EA principles across a range of causes.

The role of principles in my path through EA

On a personal note, I’m excited about CEA sticking to its principles-first approach. I may have never started working in effective altruism or related causes if there weren’t an “EA community” that spanned multiple causes and nurtured a spirit of truth-seeking.

A friend of mine spent years trying to get me involved in AI safety without mentioning anything about EA. I was confused why he seemed to care so much about it, and I wasn’t particularly compelled by the arguments I heard.

After years of trying and failing to have me focus on AI safety, my friend told me that Open Philanthropy was hiring. I had never heard of Open Philanthropy (or EA!). But I was drawn to the team’s dedication to finding new ways to maximize impact while scaling up its charitable giving, largely with a focus on global health and development. At the time, I was working at a for-profit start-up after a prior stint as a management consultant, and I wasn’t particularly interested in any specific part of global health and development (in fact, I explicitly told the recruiter I wasn’t interested in a role that would make me choose a sole subject to become an expert in). I was, however, compelled by the prospect of a career explicitly oriented around helping others as much as I could.

After getting the Open Philanthropy researcher role, I had the opportunity to explore a variety of causes. In addition to my time as a researcher, I managed grantmaking programs across both human-centered health and development and farm animal welfare. I also spent time on operations and communications work that cut across causes.

During this time at Open Philanthropy, I began engaging more with AI safety. I initially had significant reservations. I saw AI safety as a delusion of privileged tech bros in the Bay Area focusing on theoretical risks that felt close to home to them and made their work seem important, unlike more distant harms faced by the global poor or animals in cages. More recently, I’ve started to take AI safety very seriously (much to the delight of my friend who had been pushing me toward AI many years ago). But that took time and the existence of EA. What ultimately made the difference for me was spending many hours talking with a community of people who had a different perspective on cause prioritization from mine but with whom I shared a commitment to key principles for determining how to best help others. It mattered that those I disagreed with weren’t just engineers looking to get wealthy, and instead were people who shared my values and were often vegans who donated 10% of their income to the global poor.

I think my journey demonstrates how EA principles can resonate with some people who may not be interested in specific causes. There are people who might have the opposite experience—bouncing off abstract or philosophical arguments while finding themselves excited about specific causes—but I think building and nurturing a community around EA principles creates a compelling beacon for many people, as it did for me.

With my updated cause prioritization, I hope CEA’s work helps humanity navigate advanced AI, but I want to be clear that this is not the only reason I’m promoting EA principles. I still feel uncertain about how to compare causes, and I also continue to believe that people inspired by EA principles make valuable contributions to animal welfare, human health, and in other places where moral progress is needed. I’m excited to do what I can to ensure the EA community is a place where people doing impactful work across multiple causes feel like they can find value and their work is celebrated.

Serving as CEA’s new CEO is an exciting opportunity to continue to protect and advocate for principles that played such an important role in my life. I’m grateful to work alongside others who share my ethical commitments, and I look forward to developing and refining programs that will nurture the community of people engaging with these principles.

Acknowledgments

I want to give a particularly large thank you to Michel Justen, who played a very significant role in drafting, editing, and coordinating feedback for this post. I also want to thank Max Dalton, Will MacAskill, Eli Rose, James Snowden, Lewis Bollard, Emma Richter, and the many CEA staff who helped refine this post and its underlying ideas.

  1. ^
  2. ^

    This list of principles isn’t totally exhaustive. For example, CEA’s website lists a number of “other principles and tools” below these core four principles and “What is Effective Altruism?” lists principles like “collaborative spirit”, but many of them seem to be ancillary or downstream of the core principles. There are also other principles like integrity that seem both true and extremely important to me, but also seem to be less unique to EA compared to the four core principles (e.g. I think many other communities would also embrace integrity as a principle).

    Also, these principles are not unique to CEA. Others have used similar principles to describe EA, like Peter Wildeford here.

  3. ^

    However, not all EA-related work has to be motivated by work on one of these populations. In particular, some people working on GCRs believe their efforts can be justified based purely on the impact on present-day humans.

  4. ^

    There was a chance that CEA would make a pivot with a new CEO, but I’m excited about continuing to embrace a principles-first approach.

  5. ^

    For the sake of efficiency, I won’t argue for that claim in depth in this post. For now, I’ll simply say that this seems clear to me given that promoting these principles has long been CEA’s mission and that CEA already has programs and staff committed to it.

  6. ^

    For a public example, see here. I’m also aware of other conversations that have happened privately.

  7. ^

    Based on this 2019 EA survey data indicating 42% of survey respondents had prioritized a different cause compared to when they first joined EA, it seems likely that the number of people who have changed causes is at least in the thousands.

  8. ^

    Though if you think the EA community has issues but also has potential, its issues might actually be a reason to dedicate more resources to developing and improving the community.

  9. ^

    What programs we feature may change, but this is unlikely to happen in the near future (i.e., in 2024). You can see data on these programs on our dashboard.

  10. ^

    A notable exception to this was the 2024 EA Global focused on Global Catastrophic Risks in the Bay Area. Our reasons for running that, laid out here, were our funder’s priorities, excitement about experimenting with cause-specific events, and the historic attendee pool of Bay Area events. The 2025 Bay Area EAG doesn’t have a cause-specific theme.

  11. ^

    The amount of cross-cause content increases if you take into account non-main stage content, like meetups, workshops, and speed meetings.

  12. ^

    We recently reviewed this event here.

  13. ^

    I started in mid-February and took an extended period of pre-scheduled leave after joining.

  14. ^

    I roughly agree with most heuristics in CEA’s Approach to Moderation and Content Curation, which details our approach to tasks like curating content and splitting content for our introductory materials. But they were written before my time and it’s likely there will be some places where I diverge.

  15. ^

    We think that there are drawbacks to each of these groups (e.g. “cause prioritization experts” may be selected for preferring esoteric conclusions and arguments, highly-engaged community members have been selected to agree with current EA ideas), but they seem to converge to a significant degree.

  16. ^

    I think there’s an important question about how much attention to pay to different existential risks. I’m personally inclined to prioritize AI significantly more than other existential risks (and I also care about AI for reasons that are not purely motivated by existential risk), though I suspect many others would disagree with my weightings.