Elements of EA: your (EA) identity can be bespoke

Lots of people have an angsty, complicated, or fraught relationship with the EA community. When I was thinking through some of my own complicated feelings, I realised that there are lots of elements of EA that I strongly believe in, identify with, and am part of… but lots of others that I’m sceptical about, alienated from, or excluded from.

This generates a feeling of internal conflict, where EA-identification doesn’t always feel right or fitting, but at the same time, something meaningful would clearly be lost if I “left” EA, or completely disavowed the community. I thought my reflections might be helpful to others who have similarly ambivalent feelings.

When we’re in a community but feel like we fit awkwardly, we can:

(1) ignore it (‘you can still be EA even if you don’t donate/aren’t utilitarian/don’t prioritise longtermism/etc.’)
(2) try to fix it (change the community to fit us better, ‘Doing EA better’)
(3) leave (‘It’s ok to leave EA’, ‘Don’t be bycatch’).

I want to suggest a fourth option: like the parts you like, dislike the parts you don’t, and be aware of it and own it. Not ‘keep your identity small’ or ‘hold your identity lightly’ — though those metaphors can be useful too — but make your identity bespoke, a tailor-made, unique garment designed to fit you, and only you, perfectly.

By way of epistemic status/caveat, know that I came up with this idea literally this morning, so I’m not yet taking it too seriously. It might help to read this as advice to myself.

Elements of EA

So, what are some of the threads, colours, cuts, and styles that might go into making your perfect EA-identity coat? I suggest:

Philosophy and theory

‘Doing the most good possible’ is almost tautologically simple as a principle, but obviously, EAs approach this goal using a host of specific philosophical and theoretical ideas and approaches. Some are held by most EAs; others are disputed. Things like heavy-tailedness, expected value, longtermism, randomised controlled trials, utilitarianism, population ethics, rationality, Bayes’ theorem, and hits-based giving fall into this category (to name just a few). You might agree with some of these but not others; or, you might disagree with most EA philosophy but still have some EA identification because of the other elements.

Moral obligation

Many EAs hold themselves to moral obligations: for example, to donate a proportion of their income, or to plan their career with positive impact in mind. You can clearly feel these moral obligations without subscribing to the rest of EA: lots of people tithe, and lots of people devote their lives to a cause. Maybe, then, these principles aren’t unique enough to ‘count’ as central EA elements. But if you add in a commitment to impartiality and effectiveness, I think this does give these moral obligations a distinct flavour; and, importantly, you can aspire to work toward the impartial good, effectively, without agreeing with (most) underlying EA theory, or with EA cause prioritisation.

The four central cause areas

EAs prioritise lots of causes, but four central areas are often used for the purposes of analysis: global health and development, x-risk prevention, animal welfare, and meta-EA. Obviously, you don’t need to subscribe to EA theory or EA’s ideas about moral obligation to work on nuclear risk prevention, corporate animal welfare campaigns, or curing malaria. Similarly, you might consider yourself EA but think that the most pressing cause does not fall into any of these categories, or (more commonly) is deprioritised within a category (for example, mental health, or wild animal welfare, which are ‘niche-r’ interests within the wider causes of global health and animal welfare respectively). Or, you might think that one major cause area is clearly the highest priority, and feel alienated that many EAs prioritise the others.

The professional community

This is the community of people who plan their careers according to EA principles, either working directly or earning to give. You can be part of the EA professional community without subscribing much to the philosophical side: for example, you might work with EA colleagues at an EA-influenced animal charity simply because you care about animals and think they’re doing good work, even if you don’t subscribe to utilitarianism or EA ideas about donating.

The social community

EA is a social community as well as a professional community. You can be part of the social community without being part of the professional community — for example, if you go to local group events and are close friends with EAs, but you’re not willing or able to get a highly impactful job. What’s more, EA attracts a certain type of person — kind, nerdy, takes ideas seriously, open-minded. If you have those traits, you might really enjoy the vibe of EA social spaces even if you disagree with pretty much everything about the philosophy.

All these elements are clearly related. There’s an idealised picture of becoming an EA in which all five elements fit seamlessly together, mutually reinforcing one another. You hear about EA philosophy, and through it you develop a sense of moral obligation to have a positive impact; or maybe you start with a sense of moral obligation, and that leads you to discover the philosophy. You join a local or university group, which plugs you into the social community. You read more EA content, talk to your new EA friends, and this helps you decide which cause to prioritise (likely one of the four central cause areas, though some will go for something more niche). You then plan your career with that cause in mind, joining the professional community.

I think a bunch of EAs had a journey like this, maybe with a few more twists and turns. But I hypothesise that for others, some of these elements are present while others are missing. This creates an angsty dynamic where they are both drawn to the community and, at the same time, alienated and repelled by it. I think this might be behind a lot of internal EA criticism: that is, criticism from EAs, EA-adjacents, or ‘post’-EAs (people who used to identify as EA but no longer do).

This ‘ambivalent identification’ dynamic might also be why so many people self-label as ‘EA-adjacent’ even when, by most metrics, they are pretty engaged in EA.

Warm vs cool EA

Another framework is to divide EA into ‘warm’ and ‘cool’ elements, like so:

| warm | cool |
| --- | --- |
| altruism | effectiveness |
| fuzzies[1] | utilons |
| got here through Giving What We Can | got here through LessWrong |
| London/Berlin | Bay Area |
| global poverty, animal welfare | AI safety, longtermism, meta |
| more feminine-coded/more women? | more masculine-coded/more men? |
| more common-sense ideas | weirder ideas |

I suspect the things in each column are correlated with each other, and that ‘warm’ EAs are most likely to be alienated by the ‘coolest’ poles of the movement, and vice versa. But obviously many people are a mix; for example, I’m mostly in the warm column, but I draw ‘weirder ideas’ from the cool column.

So, what to do with these frameworks? Handled well, I think these differences between us could be exciting and generative tensions, rather than things we need to split up or go to war over. When forming relationships, I’m looking for people who share a decent amount of common ground, but I’m not looking for carbon copies of myself. The same is true of intellectual comrades. EAs in whom different elements dominate can easily collaborate in ways that achieve both of their goals; they don’t need to become each other.

  1. ^

    I’m not saying ‘warm’ EAs are purely fuzzies-motivated — if that were true, they’d just be average altruists — but they are more likely to either be motivated by illegible emotional considerations, or to take fuzzy feelings more seriously, or to think that fuzzies are very important for the good life even if not a good proxy for effectiveness, or something to that effect.