On focusing resources more on particular fields vs. EA per se—considerations and takes

Epistemic status: This post is an edited version of an informal memo I wrote several months ago. I adapted it for the forum at the prompting of EA strategy fortnight. At the time of writing I conceived of its value as mostly in laying out considerations /​ trying to structure a conversation that felt a bit messy to me at the time, though I do give some of my personal takes too.

I went back and forth a decent amount about whether to post this—I’m not sure about a lot of it. But some people I showed it to thought it would be good to post, and it feels like it’s in the spirit of EA strategy fortnight to have a lower bar for posting, so I’m going for it.

Overall take

Some people argue that the effective altruism community should focus more of its resources on building cause-specific fields (such as AI safety, biosecurity, global health, and farmed animal welfare), and less on effective altruism community building per se. I take the latter to mean something like: community building around the basic ideas and principles, which invests in particular causes only with the more tentative attitude of "we're doing this only insofar as, and while, we're convinced it's actually the way to do the most good." (I'll call this "EA per se" for the rest of the post.)

I think there are reasons for some shift in this direction. But I also have some resistance to some of the arguments I think people have for it.

My guess is that:

  • allocating some resources from "EA per se" to field-specific development will be an overall good thing, but

  • only a modest reallocation is warranted (my best guess; I'm not confident), and

  • some of the reasons people have for reallocation are overrated.

In this post I’ll

  1. Articulate the reasons I think people have for favouring a shift of resources in this way (just below), and give my takes on them (this will doubtless miss some reasons).

  2. Explain some reasons in favour of continuing (substantial) support for EA per se.

Reasons I think people might have for a shift away from EA per se, and my quick takes on them

1. The reason: The EA brand is (maybe) heavily damaged post-FTX — making building EA per se less tractable and less valuable, because getting involved in EA per se now has bigger costs.

My take: I think how strong this reason is basically depends on how people perceive EA now, post-FTX, and I'm not convinced that the public feels as bad about it as some other people seem to think. I think it's hard to infer how people think about EA just by looking at headlines or Twitter coverage over the course of a few months. My impression is that lots of people are still learning about EA and finding it intuitively appealing, and I think it's unclear how much this has changed on net post-FTX.

Also, I think EA per se has a lot to contribute to the conversation about AI risk — and was talking about it before AI concern became mainstream — so it’s not clear it makes sense to pull back from the label and community now.

I’d want someone to look at and aggregate systematic measures like blog subscriptions, advising applications at 80,000 Hours, applications to EA Global, interest in joining local EA groups, etc. (As far as I know, as of quickly revising this in June, these systematic measures are actually holding up fairly well, but I haven't really tried to assess this. These survey responses seem like a mild positive update on public perceptions.)

Overall, I think this is probably some reason in favour of a shift but not a strong one.

2. The reason: maybe building EA per se is dangerous because it attracts/boosts actors like SBF. (See: Holden’s last bullet here.)

My take: My guess is that this is a weak-ish reason – though I’m unsure, and it’s probably still some reason.

In particular, I don’t think it’s going to be that much easier to avoid attracting/boosting SBF-like actors when building specific fields than when building EA in general (holding other governance/cultural changes fixed). And I’d expect the mitigation strategies we should take on this front to be much the same, and about as effective, under either option.

An argument against my take: being cause neutral means focusing more on ‘the good’ per se (instead of on your specific way of doing good), and that is associated with utilitarianism, which is SBF-like-actor-attracting. There’s probably something to this; I just wouldn’t be that surprised if SBF-like actors were attracted to roughly the same degree by a field whose mission was to save the world from a catastrophic pandemic, etc. Why? Something like: it’s the “big stakes and ambitions” of EA that have this effect, rather than the cause neutrality/focus on ‘the good’ per se. But this is speculation!

3. The reason: EA is too full of stakeholders. The community has too many competing interests, theories of impact, and stakeholders, and it’s too tiring, draining, resource-intensive, and complicated to work with.

My take: I sort of suspect this is motivating some people, maybe subconsciously. I think it’s probably a mostly weak reason.

I do feel a lot of sympathy with this. But individual field building will also generate stakeholders—at least if it’s doing something that really matters! Stakeholders can also be helpful, and it’s often ambiguous at the time whether they’re helping or hindering.

Though I do buy that, especially at first, there will probably be fewer stakeholders in specific fields, especially if they have less of a ‘community’ vibe. (Though if it’s the “community” aspect that’s generating stakeholders, the response might instead be to shift toward a less community-centered EA per se, rather than to zoom in on particular causes; that’s something I’ve also seen people argue for.)

4. The reason: people have greater confidence that some issues are much more pressing than others, which means there’s less value in cause neutrality, searching for ‘Cause X’, and thinking about things from the ground up, and so less value in the effective altruism community per se as compared to the specific causes.

My take: this is a strong reason to the extent that people’s confidence has justifiably increased.

I think it’s risky—“taking a bet”—but it could be worth it. It’s a risk even if we invest in building multiple fields, since we’ll be reducing investment in what has so far been an engine of new field creation.

4A: The reason: Some people feel more confident that we have less time until a potential AI catastrophe, which makes strategies that take a long time to yield returns (as building EA per se does, compared to building specific fields) less promising.

My take: This is a strong reason insofar as the update is justified, though it’s a reason only to invest more in AI-specific field building, rather than in field building across a wide variety of areas (except insofar as those areas intersect with AI — which maybe most of them do?).

5. The reason: There’s greater tractability for specific field building compared to before, because (1) AI safety is going mainstream, (2) pandemic risk has somewhat gone mainstream, and (3) there are community members who are now more able to act directly, due to having gained expertise and career capital.

My take: this is a strong reason.

6. The reason: EA ‘throws too many things together under one label’ and that is confusing or otherwise bad, so we should get away from that. E.g. from Holden:

> It throws together a lot of very different things (global health giving, global catastrophic risk reduction, longtermism) in a way that makes sense to me but seems highly confusing to many, and puts them all under a wrapper that seems self-righteous and, for lack of a better term, punchable?

And from Niel (80,000 Hours Slack, shared with permission):

> I worry that the throwing together of lots of things makes us more highlight the “we think our cause areas are more important than your cause areas” angle, which I think is part of what makes us so attack-able. If we were more like “we’re just here trying to make the development of powerful AI systems go well”, and “here’s another group trying to make bio go well” I think more people would be more able to sidestep the various debates around longtermism, cause prio, etc. that seem to wind people up so much.

> Additionally, throwing together lots of things makes it more likely that PR problems in one area spread to others.

My take: I think this is, on net, not a strong reason.

My guess is that the cause neutrality and ‘scout altruism’ aspect of EA — the urge to find out what are actually the best ways of doing good — is among its most attractive features. It draws more criticism and blowback, true, because it talks about prioritisation, which is inherently confrontational. But this also makes it substantive and interesting — not to mention more transparent. And in my experience articles describing EA favourably often seem fascinated by the “EA as a question” characterisation.[1]

Moreover, I think some of this could just be people losing their appetite for being “punchable” after taking a beating post-FTX. But sometimes being punchable is the price of standing up for what you believe!

I guess this is all to say that I agree that if we shifted more toward specific fields, “more people would be more able to sidestep the various debates around longtermism, cause prio, etc. that seem to wind people up so much”, but I suspect that could be a net loss. I think people find cause prioritisation super intriguing, and it’s good for people’s thinking to have to confront prioritisation questions.

(Though whether this matters enough depends a lot on how (un)confident you are that specific areas are way more pressing than others. E.g. if you’re confident that AI risk is more pressing than anything else, you arguably shouldn’t care much about any potential losses here (though even that’s unclear). EA per se seems to have a lot less value going forward in that world. So maybe point (4) is the actual crux here.)

7. The reason: a vague sense that building EA per se isn’t going that well, even putting aside the worry about EA per se being dangerous. I don’t know whether people really think this beyond what’s implied in reasons (1), (2), and (6), but I get the sense they do.

My take: I think this is false! If we put aside the ‘EA helped cause SBF’ point (above), building EA per se is, I think, going pretty well. EA has been a successful set of ideas, lots of super talented and well-meaning and generally great people have been attracted to it, and (a la point (5) above) it’s made progress in important areas. It’s (still) one of the most intellectually interesting and ethically serious games in town.

You could argue that even without SBF, building EA per se is going badly because EAs contributed to accelerating AI capabilities. That might be right. But it would not be a good reason to reallocate resources from EA per se to building specific fields, because if anything that would result in more AI safety field building with less critical pressure on it from EAs who are sceptical that it’s doing the most good, which seems like it would be more dangerous in this particular way.

Some reasons for continuing to devote considerable resources to EA per se community building:

1. The vibe and foundation: EA per se encourages more critical thinking, greater intellectual curiosity, more ethical seriousness, and more intellectual diversity than specific field-building does.

I think sometimes EA has accidentally encouraged groupthink instead. This is bad, but it seems like we have reason to think this phenomenon would be worse if we were focusing on building specific fields. It’s much harder to ask the question: “Is this whole field /​ project actually a good idea?” when you’re not self-consciously trying to aim at the good per se and the people around you aren’t either. The foundational ideas of EA seem much more encouraging of openness and questioning and intellectual/​ethical health.

I think the ‘lots of schools of thought within EA’/subcultures thing is also probably good rather than bad, though I agree there’s a limit past which it stops being net good, and I could imagine us crossing it. I think this is related to the stakeholders point.

2. EA enables greater field switching: a lot of people already seem to think they need to have been working in field X forever in order to contribute to field X, which is both bad and false. EA probably reduces this effect by making the social/professional/informational flows between a set of fields much tighter than they otherwise would be.

2a. EA enables more cross-pollination of ideas between fields, and more “finding the intersections”. For example, it seems probable that questions like “how does AI affect nuclear risk?” would be more neglected in a world with a smaller EA community and fewer general conferences, shared forums, discussions, etc.

3. There are weird, nascent problem areas like those on this list that will probably get less attention if/to the extent that we allocate resources away from EA per se and toward a handful of specific fields. I place enough importance on these that it would seem like a loss to me, though it could perhaps be worth it. Again, how much this matters goes in the ‘how confident are you that we’ve identified the most pressing areas’ bucket (point 4 above).

4. Depending on how it’s done, moving resources from building EA per se to building separate fields could end up reducing long-term investment in the highest-impact fields. Why? Because when people are part of EA per se they often try to stay ‘up for grabs’ in terms of what they end up working on — so the field with the best claim to being highest impact can still persuade them to help with it. Hence: more people in the community moving toward existential risk interventions over time, because they find the arguments that those issues are most pressing persuasive. If we build separate fields now, we make it harder for the most pressing problems to win resources from the less pressing ones as things go on — making the initial allocation much more important to get right.

  1. ^

    I should note, though, that I could be overweighting this / “typical-minding” here, because this is a big part of what attracted me to EA.