EA and Longtermism: not a crux for saving the world

This is partly based on my experiences working as a Program Officer leading Open Phil’s Longtermist EA Community Growth team, but it’s a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.

Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing; some parts are outdated, though for the most part I now hold the conclusions more strongly than I did when I originally wrote it.

Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.

A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I’m concerned that this is a reason we’re failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people doing full-time work on existential risk reduction in AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023].

This is in the vein of Neel Nanda’s “Simplify EA Pitches to ‘Holy Shit, X-Risk’” and Scott Alexander’s “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA.


EA and longtermism: not a crux for doing the most important work

Right now, my priority in my professional life is helping humanity navigate the imminent creation of potentially transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that’s likely the most important thing anyone can do these days. And I don’t think EA or longtermism is a crux for this prioritization anymore.

A lot of us (EAs who currently prioritize x-risk reduction) were “EA-first” — we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about the importance of the far future and the potential technologies and other changes that could influence it. Some of us were “longtermist-second”; we came to prioritize making the far future as good as possible regardless of whether we thought we were in an exceptional position to do so, and concluded that existential risk reduction would be one of the core activities for doing it.

For most of the last decade, I think that most of us have emphasized EA ideas when trying to discuss X-risk with people outside our circles. And locally, this worked pretty well; some people (a whole bunch, actually) found these ideas compelling and ended up prioritizing similarly. I think that’s great and means we have a wonderful set of dedicated and altruistic people focused on these priorities.

But I have concerns.

I’d summarize the EA frame as, roughly, “use reasoning and math and evidence to figure out how to help sentient beings have better subjective experiences as much as possible, and be open to the possibility that this mostly involves beings you don’t feel emotionally attached to, with problems you aren’t emotionally inspired by”, or, more softly, “try to do good, especially with money, in a kind of quantitative, cosmopolitan way”. I’d summarize the LT frame as “think about, and indeed care about, the fact that in expectation the vast majority of sentient beings live very far away in the future (and far away in space) and are very different from you and everything you know, and think about whether you can do good by taking actions that might allow you to positively influence these beings.”

Not everyone is into that stuff. Mainly, I’m worried we (again, EAs who currently prioritize x-risk reduction) are missing a lot of great people who aren’t into the EA and LT “frame” on things; e.g. they find it too utilitarian or philosophical (perhaps subconsciously), and/or there are subtle ways it doesn’t line up with their aesthetics, lifestyle preferences, and interests. I sometimes see hints that this is happening. Both frames ask for a lot of thinking and willingness to go against what many people are emotionally driven by. EA has connotations of trying to be a do-gooder, which is often positive but doesn’t resonate with everyone. People usually want to work on things that are close to them in time and space; longtermism asks them to think much further ahead, for reasons that are philosophically sophisticated and abstract. It also connotes sustainability and far-off concerns in a way that’s pretty misleading if we’re worried about imminent transformative tech.

Things have changed

Now, many EA-first and longtermist-first people are, in practice, primarily concerned about imminent x-risk and transformative technology, have been that way for a while, and (I think) anticipate staying that way.

And I’m skeptical that the story above, if it were an explicit normative claim about how best to recruit people to existential risk reduction causes, passes the reversal test if we were starting anew. I’d guess that if most of us woke up without our memories here in 2022 [now 2023], and the arguments about potentially imminent existential risks were called to our attention, we probably wouldn’t re-derive EA and philosophical longtermism as the main and best onramp to getting other people to work on that problem. In fact, I think that idea would sound overly complicated and conjunctive, and by default we wouldn’t expect the optimal strategy to use a frame that’s both quite different from the one we ultimately want people to take, and demanding in some ways that that one isn’t. Instead, I think it would seem more plausible for people who believe those arguments to directly try to convince others that existential risks are large and imminent, and that once someone buys those empirical claims, they wouldn’t need to care about EA or longtermism to be motivated to address them.

An alternative frame

By contrast, the core message of an “x-risk first” frame would be “if existential risks are plausible and imminent, this is very bad and should be changed; you and your loved ones might literally die, and the things you value and worked on throughout your life might be destroyed, because of a small group of people doing some very reckless things with technology. It’s good and noble to try to make this not happen”. I see this as true, more intuitive, more obviously connected to the problems we’re currently prioritizing, and more consistent with commonsense morality (as evinced by e.g. the fact that many of the most popular fictional stories are about saving the world from GCRs or existential risks).

I don’t think the status quo evolved randomly. In the past, I think x-risks seemed less likely to arise soon, or at all, so EA + LT views were more likely to be cruxes for prioritizing them. I still think it would have been worth trying the things I’m suggesting ten years ago, but the case would have looked a lot weaker. Specifically, there are some changes that make an x-risk first (or similar) recruiting onramp more likely to succeed, looking forward:

  • AI capabilities have continued to advance. Compared to the status quo a decade ago in 2012, AIs outperform humans in many more areas, AI progress is far more apparent, the pace of change is faster, and all of this is much more widely known. [This seems much more true in 2023 than 2022, when I originally wrote this line, and now seems to me like a stronger consideration than the rest.]

  • The arguments for concern about AI alignment have been made more strongly and persuasively, by a larger number of credible people.

  • COVID-19 happened and made concern about anthropogenic biorisk seem more credible.

  • COVID-19 happened and a lot of respected institutions handled it less well than a lot of people expected, engendering a greater sense of things not being under control and there not being a deep bench of reasonable, powerful experts one can depend on.

  • [maybe] Brexit, Trump’s presidency, crackdowns in China, Russia’s war on Ukraine, etc., have normalized ideas about big societal changes and dangers that affect a huge number of people happening relatively frequently and suddenly.

Who cares?

I think there should be a lot more experimentation with recruiting efforts that aren’t “EA-first” or “longtermist-first”, to see if we can engage people who are less into those frames. The people I’d be excited about in this category probably wouldn’t be the kind of people who totally reject EA and LT; they might nod along to the ideas, but wind up doing something else that feels more exciting or compelling to them. More broadly, I think we should be running lots of experiments (communicating a wide range of messages in a wide range of styles) to increase our “surface area”.

Some other reasons to be skeptical of the status quo:

  • It might not be sustainable; if timelines start to seem very short, especially if there are warning shots and more high-profile people attempting to sound various alarms, I think the “EA-first” onramp will look increasingly convoluted and out of place; it won’t just leave value on the table, it might seem actively uncompelling and out of touch.

  • I think leading with EA causes more people to feel surprised and disappointed, because something that seems to be, and on occasion represents itself as, an accessible way to try to be a good person is in fact sometimes elitist/elite-focused, inaccessible, and mostly pretty alienated from its roots, generating general bad feelings and lower morale. I think existential risk reduction, by virtue of the greater transparency of the label, is less likely to disappoint.

  • Relatedly, I think EA is quite broad and so reliably generates conflicting-access-needs problems (e.g. between people working on really unusual topics like wild animal suffering, who want to freely discuss e.g. insect sentience, and people working on a policy problem in the US government, who more highly prioritize respectability) and infighting between people who prioritize different cause areas; on the current margin, more specialization seems good.

  • At least some EAs focused on global health and wellbeing, and on animal welfare, feel that we are making their lives harder, worsening their reputations, and occupying niches they value with LT/x-risk stuff (like making EAG disproportionately x-risk/LT-focused). Insofar as that’s true, I think we should try hard to be cooperative, and more specialization and brand separation might help.

  • Something about honesty; it feels a bit dicey to me to introduce people to EA first if we want and expect them to end up in a more specific place with relatively high confidence, even though we do it via EA reasoning we think is correct.

  • Much of the value in global health and farm animal welfare, as causes, is produced by people uninterested in EA. On priors, I’d expect that people in that category (“uninterested in EA”) can also contribute a lot of value to x-risk reduction.

  • Claim from Buck Shlegeris: thinking of oneself and one’s work as part of a group that also includes near-term priorities makes it socially awkward, and potentially uncooperative to the group, to argue aggressively that longtermist priorities are much more important, if you believe that; and having a multi-cause group makes it harder to establish a norm of aggressively “going for the throat” on what you think is the most important work and urging others to do the same.

I suspect it would be a useful psychological exercise for many of us to personally try “shaking free” of EA- or LT-centric frames or identities for a while, to a much greater extent than we have so far, for our own clarity of thought about these questions.

I think readers of this post are, in expectation, overvaluing the EA and longtermism frames

Because:

  • They are “incumbent” frames, so they benefit from status quo bias, and a lot of defaults are built around them and people are in the habit of referring to them

  • We (mostly) took this onramp, so it’s salient to us

  • Typical mind fallacy; I think people tend to assume others have minds more similar to their own than is actually the case, so they project that what convinces them will also convince others.

  • They probably attract people similar to us, who we enjoy being around and communicate with more easily. But, damn it, we need to win on these problems, not hang out with the people we admire the most and vibe with the best.

  • Most of us have friends, allies, and employees who are relatively more committed to EA/LT and less committed to the x-risk reduction frame, and so it’s socially costly to move away from EA/LT.

  • Given that we decided to join the EA/​LT community, this implies that the EA and LT frames suggested priorities and activities that were a good fit for us and let us achieve status — and this could bias us toward preferring those frames. (For example, if an x-risk frame puts less emphasis on philosophical reasoning, people who’ve thrived in EA through their interest in philosophy may be unconsciously reluctant to use it.)

Concrete things I think are good

  • Recruiting + pipeline efforts that don’t form natural monopolies in tension with existing EA infrastructure, focused on existential risk reduction, the most important century, AI safety, etc. Like:

    • Organizations and groups

    • Courses, blogs, articles, videos, books

    • Events and retreats

    • 1:1 conversations with these emphases

Concrete things I’m uncertain about

  • Trying to build lots of new community infrastructure of the kind that creates natural monopolies or has strong network effects around an x-risk frame (e.g. an “Existential Risk Forum”)

Counterarguments:

  • In my view, a surprisingly large fraction of the people now doing valuable x-risk work originally came in via EA (though a lot have also come in via the rationality community), more than I would have expected even given the historically strong emphasis on EA recruiting.

  • We’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important.

    • However, it seems unlikely that we’ll end up shifting our views such that “transformative tech soon” and “the most important century” stop seeming like plausible ideas that justify a strong focus on existential risk.

  • EA offers a lot of likable ideas and more accessible success stories, because of its broad emphasis on positive attributes like altruism and causes like helping the global poor; this makes existential risk reduction seem less weird and connects it to things with a stronger track record

    • However, I think the PR gap between EA and x-risk reduction has closed a lot over the last year, and maybe is totally gone

    • And as noted above, I think there are versions of this that can be uncooperative with people who prioritize causes differently, e.g. worsening their reputations

  • Transformative tech / the most important century / x-risk reduction isn’t a very natural frame either; we should be more cause-specific (e.g. recruiting into TAI safety or bio work directly).

    • I think we should do some of this too, but I suspect a broader label for introducing background concepts like the difference between x-risk and GCRs, and the idea of transformative technology, is still helpful.

  • Some people brought up that they particularly want people with cosmopolitan, altruistic values around transformative tech.


Anti-claims

(I.e. claims I am not trying to make and actively disagree with)

  • No one should be doing EA-qua-EA talent pipeline work

    • I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of great people. However, my guess is that a medium-sized reallocation away from it would be good to try for a few years.

  • The terms EA and longtermism aren’t useful and we should stop using them

    • I think they are useful for the specific things they refer to, and we should keep using them in situations where they are relevant and ~the best terms to use (many such situations exist). I just think we are over-extending them to a moderate degree.

  • It’s implausible that existential risk reduction will come apart from EA/LT goals

    • E.g. it might come to seem (I don’t know if it will, but it at least is imaginable) that attending to the wellbeing of digital minds is more important from an EA perspective than reducing misalignment risk, and that those things are indeed in tension with one another.

    • This seems like a reason that, all else equal, people who aren’t EAs and just prioritize existential risk reduction are less helpful from an EA perspective than people who also share EA values, and like something to watch out for, but I don’t think it outweighs the arguments in favor of more existential risk-centric outreach work.

Thanks to lots of folks who weighed in on this, especially Aaron Gertler, who was a major help in polishing and clarifying this piece.