80,000 Hours: Anonymous contributors on EA movement growth


The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it’s valuable to showcase the range of views on difficult topics where reasonable people might disagree.

This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.

What’s your current view on effective altruism (EA) movement growth? Should it grow faster, or slower? Should it be broader, or narrower?

Effective altruism should be broad

We should be aiming for 100% of everybody. Everyone who is a philanthropist should know what effective altruism is — but almost no philanthropists I talk to outside EA currently know what it is. They’ve never heard of 80,000 Hours.


I worry about EAs becoming uninterested in broadening the movement. I think it’s a mistake that EA Global is smaller today than it was four years ago. I think the effective altruism community needs to be more than a few thousand talented, passionate people in order to achieve its goals.

It should stay narrow

I think effective altruism should stay pretty narrow. It’s very hard to manage a very large movement, and I think EA is still partly figuring out what it is. So, I think we should be aiming for people at something like the 90th percentile for altruism and the 99th percentile for performance in their field or area of study – that’s quite a narrow segment of society.

I think it would be good if we were growing a bit faster than we currently are. I feel like the optimal rate is steady, exponential growth — where steady means not so slow that people start feeling bored or unexcited. And I worry that we’re currently maybe at the low end of the good range.

I definitely don’t think we should be going for an all-out broad, environmentalist-esque mass movement.

It should focus on having a great culture first

Rather than growth, the thing I’d most want to alter is effective altruist culture. I want anyone in EA to think “wow, this is such a great community to be part of — I think this is great!”, rather than feeling really ambivalent, or even stressed, or often finding themselves annoyed at other people.

I think there’s stuff that could change that would make EA feel more like that first description. One way would be cultivating an attitude of celebration and welcomingness. I’ve heard people talking about how earning to give was ‘denigrated’ in EA, that no one should be building career capital, things like that. And I think that’s evidence both of how ideas about what people ought to do spread in EA, and of how sensitive people are to them. So it gets very exaggerated.

Whereas if we could create a community atmosphere which is like “oh, you’re a school teacher who donates 10% of your income to AMF? We love you! Thank you for being in this world and doing so much good — I’m honoured to be in a community with you”.

I feel like that would be a healthier, less stressful community, where people made better decisions. Compared to the current situation, where people can feel that unless you’re working for a handful of organisations focused on short AI timelines, you are basically worth nothing.

We do select for people who are very scrupulous and anxious, but maybe that just means we have to work much harder to counter those tendencies. I mean, EA is just like “what’s the thing you can do that’s the optimal, most important thing?!” — it’s kind of a stressful idea. Perhaps we could do more to counteract that.


I think the ideal EA community would be a place where everyone really promising who finds it, likes it. I don’t think EA is that right now. It’s also a really high bar to hit, and it’s hard. But that’s what I care about more than how fast EA is growing.


I think that the bad aspects of EA might be putting people off these ideas by making them seem kind of cold, nerdy, and dismissive. So, broad vs. narrow isn’t how I’d think about this. I’d want to focus more on changing the tone of EA, putting out the ideas in a way that seemed more helpful, and then I’d be much more likely to think it was good if it grew faster.


I want us to have somewhat better infrastructure for making people who are joining feel slightly held, rather than lost.

I think the issue of “it’s hard to get jobs in EA” is a sore point, and I worry it could be exacerbated further if growth continues without improving that situation. But if we had better support in place there, then growth seems great.

It should take a higher variance approach to recruitment

I’d be doing more to recruit people with more diverse mindsets.

I think I might take a higher variance approach. There are all these EAs who say “as soon as I heard about EA, I was instantly an EA”. So that’s great: you can get those people quickly and easily. You just need to make sure they’ve heard the definition of effective altruism, and had the chance to get a copy of Famine, Affluence, and Morality.

And then there’s another group of people who are sympathetic to EA principles, but aren’t as naturally drawn. And they might have talents that seem really useful to the community.

So I might have a two-pronged strategy: i) focus on the easy cases, and ii) make sure we pick up the people who aren’t as naturally drawn, but who have skills we really need.

I think you could have an approach that was less focused on medium-level engagement, and more on those two ends.

If you have one EA person who has a skill that’s really in demand in the community — get that person to work on recruiting more people like them. For people who aren’t immediately drawn to EA, it’s really useful to get the message from someone who is similar to them.

I don’t think natural EA enthusiasts — young, nerdy, philosopher types — are very good at convincing people who aren’t like that.

You could eventually have this system of a person convincing someone who’s a little bit further away, and then that person convincing someone else who’s a little further away still — and emphasise to everyone along the way how valuable a role they can play in movement building and recruitment.

We should be wary of committing to one direction

I think we’ve got a problem here. The right thing to do given short AI timelines is not the right thing to do given longer AI timelines.

If AI timelines are short, then we probably shouldn’t be focused on becoming a mass movement. There are only so many things you can do at once, and some aspects of becoming a mass movement compete with acting on short timelines. In particular, if almost every EA leader is working extremely hard on a short AI timeline scenario, then most public outreach is going to end up being quite deceptive. It’s probably not going to say “hey, this is a movement of people who think the world as we know it won’t exist in 10 years”.

If AI timelines are not short, EA should be focusing on becoming a mass movement. I think movements that don’t put a lot of energy into successfully handing down ideas to younger people and transferring expertise end up ceasing to exist.

But because we’re unsure, we end up being ambivalent on this question. And I think that’s bad in some ways, although it might be better than committing to one direction or the other.


I’m really happy about the switch that happened from expanding as quickly as possible, to expanding carefully in high-fidelity ways. I’m now thinking that maybe we need to tweak slightly back in the other direction, but I’m not sure.

We should consider people who aren’t ready for a big commitment

I know some people have said, “all movements have a spectrum of involvement, and that’s going to happen with EA”. But it is possible to say “well, there are certain criteria that qualify you as an EA”. And then if you haven’t met those criteria (say you haven’t quite donated 10% of your income or free time effectively, or moved to an effective career), you could say you’re an aspiring EA.

I know some people think ‘effective altruism’ as a term is too presumptuous, and that everyone should call themselves “aspiring EAs”. I personally think that’s too modest, because there are a lot of people who are doing really good work.

But the term might work for someone who says “I’m on board with the ideas, but I’m only donating 1%. I haven’t quite made the shift”. What I say to them is: there’s this whole field called diffusion of innovations, and what it has found is that when people change their minds, they may not change their behaviour for a year or more. That’s normal. And we shouldn’t be frustrated with people who haven’t made this bigger shift yet.

This distinction would allow you to pull people in who might not be ready for a big commitment, without diluting the active EA community.

We should reach out to influential people

I think EAs should focus more on reaching out to people who already have influential positions.

We should want more exposure for both good and bad ideas

A big part of me intuitively thinks that, insofar as these ideas are correct, I want more people to encounter them. And even if they’re wrong, I want more people to encounter them — because that’s a good way of getting rid of bad ideas. I think it’s generally good to shine light on ideas. So that’s an argument for getting more exposure for the ideas, and then letting that affect growth one way or the other.
