EA Culture and Causes: Less is More

Should there be a community around EA? Should EA aim to be one coherent movement?

I believe in the basic EA values and think they are important values to strive for. However, I think that EA’s current way of self-organizing – as a community and an umbrella for many causes – is not well suited to optimizing for these values.

In this post I will argue that there are substantial costs to being a community (as opposed to being “just” a movement or a collection of organizations). Separately, I will argue that EA has naturally grown in scope for the past ten years (without much pruning), and that now may be a good time to restructure.

In the following sections I will explore (potential) negative facets of EA as a community and as a large umbrella of causes:

  1. If the community aspect of EA becomes too dominant, then we will find ourselves with cult-like problems, such as the incentive to stay in the community becoming stronger than the incentive to be truth-seeking.

  2. Currently, EA’s goal is very broad: “do good better”. Originally it colloquially meant something fairly specific: when considering where to donate, keep in mind that some (traditional) charities save far more QALYs per dollar than others. Over the past ten years, however, the objects of charity EA covers have grown vastly in scope, e.g. to animals and future beings (also see a and b). We should beware of reaching a point where EA is so broad (in values) that the main thing two EAs have in common is some kind of vibe – ‘we have similar intellectual aesthetics’ and ‘we belong to the same group’ – rather than ‘we’re actually aiming for the same things’. EA shouldn’t be a giant fraternity with EA slogans as its mottos; it should be goal-oriented.

I think most of these issues would go away if we:

  • De-emphasize the community aspect

  • Narrow the scope of EA, for example into:

    • A movement focusing on doing traditional charities better; and an independent

    • Incubator of neglected but important causes

1. Too Much Emphasis on Community

In this section I will argue that a) EA is not good as a community and b) being a community is bad for EA. That is, there are high costs associated with self-organizing as a community. The arguments are independent, so the costs you associate with each argument should be added up to get a lower bound for the total cost of organizing as a community.

Problems with Ideological Communities in General

The EA community is bad in the sense that any X community is bad. EA in itself is good. Community in itself is good. However, fusing an idea to a community is often bad.

Groups of people can lie anywhere on a spectrum from purpose to people. At one extreme you have movements or organizations that have a purpose and people coordinating to make it happen. Think of a political movement with one narrow, urgent purpose. People in this movement form an alliance because they want the same outcome, but they don’t have to personally like each other.

At the other extreme you have villages, in which people support each other but don’t feel the urge to be on the same ideological page as their neighbors. (They may find the guy who cares a lot about X a weirdo, but they accept him as one of them.) For an unexpected example, consider the Esperanto community: it was founded on an idea, but today it sits very much at the community end of the spectrum rather than the ideological one.

Both extremes (a main focus on ideology/purpose or on community/people) can be healthy. However, combining ideology with community tends to lead to dysfunctional dynamics. The ideology component takes a hit because people sacrifice epistemics and goal-directedness for harmony. At the same time, the people take a hit because their main community is no longer just a place to find solidarity and refuge, but also the place where they apply for positions and where their competence and usefulness are measured. Both the ideology and the community’s wellbeing are compromised.

EA as a Community

An emphasis on community encourages EA members to make new connections with other EAs. However, if too much of someone’s social network lies within EA, they may become overly dependent on the community.

Currently, for some people many facets of their life are intertwined with EA. What happens when someone reaches a point where a critical number of their friends, colleagues, and romantic interests are EAs? For them, losing social standing in EA means risking a critical portion of their social network. Most communities are not work-related – a salsa dance community, for instance. Most communities that are related to a field or based on work are much looser and less homogeneous in values, e.g. the algebraic geometry (mathematics) community. The combination of work-centered and fairly tight-knit is less common.

Additionally, many individual EAs benefit financially from EA (through grants or salaries) and derive their sense of belonging from it. Some EAs are in careers that have good prospects within EA but bleak ones outside of it (e.g. independent AI safety researcher or wild animal welfare analyst). For these people there is a strong (subconscious) incentive to preserve their social connections within the community.

1. Human costs when many facets of life are intertwined with EA

Higher anxiety and a higher sensitivity to failures when observed by other EAs. When a lot of one’s needs are met within EA, one’s supply of many resources depends on one’s acceptance within the EA community. That is, individuals become reliant on their social standing within EA, and their supply of the good things in life becomes less robust. It would only be natural to become vigilant (i.e. anxious) about this aspect of life.

For example, if an individual’s EA-related job is going poorly, this may make them insecure about their position in the community and very anxious. By contrast, if their eggs were spread across more baskets, they would probably be less likely to catastrophize damage to their career. EAs do seem to have anxiety around rejection and failure, as pointed out by Damon.

Power dynamics and group status are amplified in an ideological community (as opposed to, for example, an ideology without a community). Julia has written about power dynamics she observes in EA. Many people would like to be high up in the social hierarchy of their primary community; if EA is the primary community of many, that has repercussions.

Career dependency. Choosing a career as a full-time community leader is well accepted within EA. However, people may find it difficult to find a stimulating job outside of EA if their main work experience is in community building.

Incentive to preserve social connections may override desire for truth-seeking. For example, I get the impression that there are subgroups in EA in which it’s especially cool to buy into AI risk arguments. There is a cynical view that one of the reasons mathsy, academically inclined people like arguments for x-risk from AI is that these arguments could make them and their friends heroes. For an in-depth explication of this phenomenon, consider the motivated reasoning critique of effective altruism.

If you or your friends just received a grant to work on x-risk from AI, then it would be quite inconvenient if you stopped believing x-risk from AI was a big problem.

2. Epistemic costs of intertwinement: Groupthink

Groupthink is hard to distinguish from convergence. When a group agrees on something and the outcome of their decision process is positive, we usually call this convergence. In the moment, it is hard for members to judge whether groupthink or convergence is happening; groupthink is usually only identified after a fiasco.

Quotes. Two anonymous quotes about groupthink in EA:

a. “Groupthink seems like a problem to me. I’ve noticed that if one really respected member of the community changes their mind on something, a lot of other people quickly do too. And there is some merit to that, if you think someone is really smart and shares your values — it does make sense to update somewhat. But I see it happening a lot more than it probably should.”

b. “Too many people think that there’s some group of people who have thought things through really carefully — and then go with those views. As opposed to acknowledging that things are often chaotic and unpredictable, and that while there might be some wisdom in these views, it’s probably only a little bit.”

Currents in the ocean. The EA community wants to find out what the most important cause is. But many things are important for orthogonal reasons, and perhaps there is no point in forming an explicit ranking of cause areas. However, EA as a community wants to be coherent, and it does try to form a ranking.

A decade ago people directed their careers towards earning to give. In 2015 Global Poverty was considered 1.5 times as important as AI Risk; by 2020 the two were rated almost equally important. In my experience, and in that of some people who have watched the community evolve over a long time (whom I’ve spoken to in private), EA experiences currents. And the current sways large numbers of people.

An indication that unnecessary convergence happens. On topics like x-risk, we may think that EAs agree because they’ve engaged more with the arguments. However, in the EA and rationality spheres I think there is homogeneity or convergence where you wouldn’t expect it by default: polyamory being much more common, a favorable view of cuddle piles, a preference for non-violent communication, a preference for explicit communication about preferences, waves of meditation being popular, waves of woo being unpopular, and so on. The reader can probably think of more things that are much more common in EA than outside of it, even though they are in principle unrelated to what EA is about.

This could be a result of some nebulous selection effect or could be due to group convergence.

When I know someone is an EA or a rationalist, my base rate for a lot of beliefs, preferences, and attributes instantly becomes different from my base rate for a well-educated Western person.

I think this is a combination of 1) correlation in traits that were acquired before encountering EA; 2) unintended selection of arbitrary traits by EA (for example due to there already existing a decent number of people with that trait); and 3) convergence or groupthink. I think we should try to avoid 2) and 3).

EA is hard to attack as an outsider. “Isn’t EA particularly good at combating groupthink, for example by inviting criticism?” you may ask. No, I do not think EA is particularly immune to it.

It is difficult to get the needed outside criticism, because EAs only appreciate criticism when it is written in the EA style, which is hard to acquire. For example, the large volume of existing texts one would have to be familiar with before being able to emulate the style is fairly prohibitive.

An existence proof? that EA-like groups may not be immune to groupthink. Some examples of (small) communities that have demographic overlap with EA and that highly value critical thinking are Leverage, CFAR and MIRI. To be clear, I think Leverage, CFAR and MIRI are all very different from EA as a community. However, these organizations do consist of people who particularly enjoy and (in some contexts) encourage critical thinking. They may nonetheless have suffered from groupthink, as expanded on in these blog posts by Jessicata and Zoe Curzi.

3. Special responsibilities are a cost of organizing as a community

A moral compass often incorporates the special responsibilities of individuals (EA or not) towards their children, the elderly in their care, family, friends, locals, and community.

Being a community gives EA such a special responsibility towards its members. However, if EA is too much of a community, it may have to spend more resources on keeping its members happy and healthy than it would if it were simply maximizing utility.

I think EA as a movement should only care about its ‘members’ insofar as the evidence suggests that this does in fact create the most utility per dollar. However, by being people’s community, EA takes on a special responsibility towards its members that goes beyond this.

To be clear, I do think that EA organizations should treat their employees in a humane fashion and care for them (just like most companies do). However, being a community goes beyond this. For example, it gives EA a special responsibility to (almost) anyone who identifies as an EA.

Advantages of a community

One factor that makes it so attractive to increase the community aspect of EA: people new to EA come across this super important idea, and none of their friends seem to care. They want to do something about it. They can instantly contribute to “EA”, for example by supporting people who are already contributing directly, or by doing EA community building.

Young people studying something like economics (which could be a very useful expertise for EA in ten years!) end up doing community building because they want to contribute now. Because EA feels important, people want to help and join others in their efforts, and this is how the community-starting mechanism gets bootstrapped. (This sensed importance makes joining EA different from joining a company.) However, because EA is an ideology rather than one narrow project, people are buying into an entire community rather than joining a project.

To play devil’s advocate I will highlight some advantages of having an EA community.

  • Many people would like a community. By offering one, EA may be able to attract more capable people to work on EA goals.

    • Counter-argument (not supported by data or other evidence, just a hunch): the kind of people who crave a community tend to come from a place of feeling isolated, feeling like a misfit, craving a sense of meaning, and being less emotionally stable. I don’t think it’s necessarily productive to have a community full of people with these attributes.

    • In fact, we may have been unable to attract people with the most needed skills because of the approach to community building that EA has been taking, which is what this post argues.

  • A community increases trust among its members, which avoids costly checks. For example, it avoids checking whether someone will screw you over, and people can defer some judgements to the community.

    • I think avoiding checks on whether someone will screw you over is just plain good.

    • Counter-argument to the ease of deferring judgements: I think this can lead to people assuming that a community judgement exists and that no further scrutiny is needed, when in fact more scrutiny is needed.

2. Too Broad a Scope

In this section I will argue that:

  1. Optimizing for utility leads to different goals depending on what you value and on your meta-preferences.

  2. Some important goals don’t naturally go together, i.e. are inconvenient to optimize for within the same movement.

  3. More concrete and actionable goals are easier to coordinate around.

1. Foundation of EA may be too large

To quote Tyler Alterman: ‘The foundation of EA is so general as to be nearly indisputable. One version: “Do the most good that you can.” [fill in own definitions of ‘good,’ ‘the most,’ etc]. The denial of this seems kind of dumb: “Be indifferent about doing the most good that you can” ?’

Many different (even contradictory!) actual goals can stem from trying to be effectively altruistic. For example, a negative utilitarian and a traditional utilitarian disagree on how to count utility. I think that the current umbrella of EA cause areas is too large. EA may agree on mottos and methods, but the ideology is too broad for agreement on what matters at the object level.

People who subscribe to the most general motto(s) of EA could still disagree on:

  1. What has utility (Should we care about animals? Should we care about future humans?)

  2. Meta-preferences such as risk-tolerance and time-horizons

a. What has utility? People have different values.

b. Meta-preferences such as risk-tolerance and time-horizons. Even if you have the same values, you may still have different ‘meta-preferences’, such as risk-tolerance, time-horizons, etc. For example, people need different amounts of evidence before they’re comfortable investing in a project. People could all be epistemically sound while having different thresholds for when they think evidence for a project is strong enough to pour resources into it. (Ajeya talked about this in a podcast.)

For example, one EA slogan is ‘be evidence-based’, but this could lead to different behavior depending on how risk-averse you are about the evidence pointing in the right direction. In global health and wellbeing you can trial a health intervention such as malaria nets or worm medication, measure some outcomes, and compare with the default treatment. In this case, you can have high demands for evidence.

Say you instead consider x-risks from natural disasters. In this case, you cannot do experiments or intervention studies. Say you consider evidence for a particular intervention to prevent x-risks from emerging technologies. In this case, the only evidence you can work with is fairly weak. You could extrapolate trends, or use analogies, but those are weaker still. So far people have also relied on first-principles reasoning.

People have different bars for conclusiveness of evidence. Many scientists would probably stay with interventional and observational studies. Philosophers may be content with first principles reasoning. People will care about different projects in part because they have different meta-preferences.

2. Friction between Goals

Spread the idea of QALYs independently of growing EA. It would be great if the meme ‘when donating, keep in mind that some charities are more effective than others’ became more widespread. Not everyone who incorporates this idea into their worldview has to ‘become an EA’. However, because EA is currently a community and an umbrella of many outgrowths, it’s difficult to expose people only to ‘vanilla’ EA ideas. We should find a way of separating this core idea from current EA and let it integrate into mainstream culture.
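
To make the core comparison concrete with purely hypothetical numbers (not estimates for any real charity): suppose Charity A averts one QALY per $100 donated and Charity B one QALY per $5,000. Then A delivers 1/100 = 0.01 QALYs per dollar and B delivers 1/5,000 = 0.0002 QALYs per dollar, so a donation to A buys roughly 50 times as much health as the same donation to B. The ‘vanilla’ idea is simply that such ratios differ enormously between charities and are worth checking before donating.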

The goal of normalizing doing traditional charities better and the goal of shining a light on new cause areas don’t go well together. Trying to convince people that charity evaluation should be more evidence-based doesn’t gel well with working on niche causes. People who are more convention-oriented (even if they’d be sympathetic to counting QALYs) may feel too stretched by charities that target new cause areas, for example ones in the far future or ones regarding (wild) animals.

Counterfactual effective altruistic people and projects. Even if we’re able to internally marry all the different causes, such that no individual in EA feels iffy about it, we have probably already deterred a lot of people from joining (projects within) the movement or from feeling sympathy towards it. It is of course hard to know how many counterfactual EAs we might have lost, or what exactly their preferences are, but we should keep them in mind.

Note that ‘counterfactual EAs’ may not be the best framing. We shouldn’t aim to expand EA as much as we can as a goal in itself. If we focus more on doing object-level projects than on how many people we can get to buy into EA ideas, we may end up with more people doing good effectively (rather than merely with more effective EAs).

Why should we? We should not try to get people who care a lot about animal welfare or decreasing global inequality to care about x-risk. Many goals are basically orthogonal. There’s no need to have one movement that ‘collects’ all of the most important goals. It’s fine for there to be a different movement for each goal.

As a side note, I do think there’s a place for a cause-area incubator, i.e. a group of people who work professionally to get completely new or overlooked cause areas off the ground. Current EA is much bigger than that, though. Current EA includes: active work in cause areas that have more than 200 professionals; directing people’s careers; philosophy; etc.

EA as a movement doesn’t have to encompass all the cool things in the world. There can be different movements for different cool things. To me the question is no longer ‘should it be split up?’ but ‘how should it be split up?’

3. EA is not focused enough

In my opinion EA as a community currently bears a lot of similarity to a service club such as Rotary International. Service clubs are substantially about people doing favors for other club members.

EA started small but became big quite quickly, and has mostly aimed to expand. This aim of keeping all the sheep in the same herd facilitates community-based activities (as opposed to goal-based activities). For example, improving the mental health of young EAs, supporting new EAs, and organizing EA socials all mostly support the community and only indirectly support EA goals. Additional examples of EA-focused activities are: organizing EA Global, organizing local groups, running a contest for criticisms, maintaining quality on the EA Forum, and so on. These activities may or may not prove to be worth their cost, but their worth to the world is indirect (via the effectiveness of EA) rather than a direct improvement of the world.

If a project or organization has very concrete goals (such as submitting papers to an AI safety workshop), then it’s usually clear how many resources should go into maintaining the team or organization and how many should go into directly trying to achieve the goal. Making sure you work with collaborators towards a positive goal is altruistic; generally participating in a service club is less so.

Advantages of an Umbrella

  • People can work on riskier endeavors if they feel there are people like them working on less risky projects. If you feel like you’re part of a bigger movement you may be happy to do whatever needs to be done in that movement that’s to your comparative advantage.

  • It may be annoying or messy to decide where to draw the lines.

  • There are a lot of ideas that flow out of Doing Good Better that are less obvious than ‘certain charities are more effective than others’, such as ‘earning to give’ or directing your career towards positive impact.

    • It may be possible to expose people to these ideas without their being a core part of the umbrella. For example, it’s also possible to reference Popper or Elon Musk even though they are not at the core of EA.

The way forward: less is more

Less community

Less EA identity. Currently a community aspect of EA is created by people ‘identifying’ as EAs, which is unnecessary in my opinion. So I’d advocate refraining from seeing oneself as ‘an EA’, and instead seeing oneself just as someone who is working on x for reason y.

Fewer community events. I’m in favor of project-based events, but am wary of non-specific networking events.

Less internal recruiting. EA is community-focused in that advertisements for opportunities are often broadcast in EA groups rather than at universities or on LinkedIn directly. Currently a common funnel is: an EA group advertisement is placed at a university; once people have joined the group, they see advertisements for scholarships, etc.

Instead I’d aim for removing the community mode of communicating about opportunities and advertising specific opportunities directly at universities. We shouldn’t make opportunities conditional on in-group status, so we should try to make them equally accessible to all. (Also try to avoid ‘secret’ signals, readable only by EAs, that an opportunity is very cool.)

Narrowing the scope

EA as it is could be split into separate movements, each narrower in scope and more focused.

Split EA into:

  • A movement focusing on doing traditional charities better;

  • A movement or organization focusing on becoming an incubator of neglected but important causes;

  • A couple of mature scientific fields (much like physics has split off from philosophy);

The EA movement and branding could split into 1) the original EA, namely doing traditional charities better by assessing QALYs per dollar; and 2) an incubator. This split would, for example, mean that EA Global would no longer exist; instead there could be completely independent conferences (e.g. not deliberately run in parallel or shortly after one another) with narrower focuses.

The incubator could be an organization that identifies ‘new’ causes, does basic research on them, and hands out grants to charities that work on those causes. Once a cause area becomes large enough to stand on its own, the metaphorical umbilical cord is cut. For example, AI risk would probably be cut off around now. (Note that there could be multiple organizations and/or research labs working in the newly split-off field.)

Two advantages of separating an incubator from traditional EA are:

  • The cause areas in the incubator would all be small and so would be more balanced in size. As a cause area becomes sizable it can be cut off.

  • The incubator could absorb all the weirdness points, and even if people don’t feel attracted to the incubator, they wouldn’t find the weirdness fishy, as an incubator ought to support innovative ideas.

If that seems useful, then in addition to 1) a movement doing traditional charities better by assessing QALYs per dollar and 2) an incubator, we could have 3) a movement centered around longtermism, or around public perception of and solutions to x-risk.


Overall Recommendation: EA should drop expansionism and loosen its grip.