Update from Open Philanthropy’s Longtermist EA Movement-Building team


  • Open Philanthropy’s Longtermist EA Movement-Building team aims to grow and support the pool of people who are well-positioned to work on longtermist priority projects.

    • This post outlines our recent work and strategic updates as a team, and isn’t meant to represent the work or views of other teams at Open Phil.

  • We think this is a very promising space, and we’re hiring for several roles so that we can move faster and deploy more funding.

  • Over time, we have become more confident in the value of the grants we’ve already made, since our grantees mostly seem to be bringing in promising people to work on longtermist projects at a good rate.

  • This has led us to begin spending our time differently:

    • Less time evaluating opportunities (since we’ve come to think that most of the things we want to fund will probably be above our “bar” for impact)

    • More time trying to generate additional opportunities (e.g. by creating different programs where people can apply for funding, like our scholarships or course development grants).

    • More time trying to better understand the field and share our findings.

  • We’ve also come to prioritize “time-effectiveness” over “cost-effectiveness” in most cases (that is, aiming to achieve our goals while conserving EA time/labor, even if that means spending more money).

  • I think we should have made those changes faster than we did, and I see it as a mistake that I didn’t (a) hire more quickly and (b) advocate more forcefully for certain opportunities that were promising but difficult to evaluate.

  • For our team’s future grantmaking, I’m concerned about avoiding measurability bias (prioritizing grants that come with impressive numbers/credentials attached) and certain forms of motivated reasoning.

  • There are many kinds of projects we hope to fund in the future that could allow us to sharply scale up our total grantmaking.

For much more detail on all of this, see the rest of the post.


This post is a report and update on the Open Philanthropy Longtermist Effective Altruism Movement-Building team’s thinking and goals. It’s written by me, Claire, and mostly represents my perspective. I’m writing this in my role as an Open Phil staff member, but I take sole responsibility for the angsty commentary near the bottom.

Our team currently consists of me, Asya Bergal, Bastian Stern, and Eli Rose. We are supported by Open Phil’s “longtermist budget” (funding to support projects motivated by the longtermist view), but unlike the other longtermist cause areas, we aren’t aiming to make progress on longtermist priorities directly. Instead, our goal is to grow and support the pool of people motivated and well-positioned to work on longtermist priority projects (e.g. reducing existential risk and aiming to improve the far future). [1]

I think our team’s grantmaking has high expected value, because (1) in my experience, most of the relevant object-level longtermist projects are bottlenecked by the dearth of aligned people who are good fits (so our goals are aimed at a core problem), (2) there’s a lot of funding to direct (which is bottlenecked by the number of grantmakers working to direct it), and (3) we have a reasonably high number of potential grantmaking projects we are working as fast as we can to implement (i.e. there’s a feeling of traction) and could implement faster if we had more capacity on the team. Not coincidentally, we are currently hiring for several roles.

What’s happened so far

I took over this area from Nick Beckstead (who was working on it part-time) in early 2019. Bastian started working with me and Eli joined in 2020, and Asya joined in 2021.

I think a lot has changed since then, and a lot of important changes are ongoing. We committed funds equaling ~$17M in the area in 2019, ~$26M in 2020, and ~$60M in 2021. So far in 2022, we’ve already committed >$65M (though some key aspects of the relevant grants are still TBD[2]), so we are on track to continue to vastly increase our giving. I hope and believe that if we hire more strong grantmakers, we can double giving in this area several more times (i.e., there’s sufficient funder interest, and there are or will be worthy opportunities).

As I’m going to discuss a bit below, I’ve shifted further away from focusing on “money moved” figures, and I think they can be misleading proxies for impact. Even among funding that is “above the bar” from a financial perspective[3], I think the top decile is at least an order of magnitude more impactful (per dollar) than the bottom decile of the grantmaking we’re doing. In other words, seeing that more money has been moved doesn’t tell you much unless you have a sense of where it falls along the cost-effectiveness spectrum (or time-effectiveness spectrum).

Right now, the time of aligned longtermists working on high-priority projects seems to be the scarcer resource and perhaps the more useful metric to focus on. Still, “money moved” is one of the easier figures to report, and I reported it above because I think it mostly tracks the more meaningful but less measurable growth in projects and funding opportunities my team is working with.

Over the last few years, my thinking about my role and the role of other longtermist grantmakers has shifted significantly. In the past, I spent a lot more time working on the question: “How can I tell if a funding opportunity in the longtermist meta space meets the bar?” Nowadays, we spend less time on evaluation and more time creating new funding opportunities to achieve our main goal (growing and supporting people who help with longtermist priority projects).[4]

There were a few reasons for this change:

  • Over time, I got to know and understand many of the relevant grantees more, and developed a better sense of how they were working, what their core competencies were, and what their own research and metrics were suggesting about their impact. The fifth time you evaluate a grantee’s work is likely to teach you much less than the first time.

  • Research (e.g. this and some unpublished analysis) we did suggested that our previous grants were mostly successfully recruiting people we thought were promising at reasonable rates.

    • There’s a lot of nuance here, but basically: enough people (mostly doing object-level work in longtermist priority areas that our advisors think is promising) reported, including when we didn’t prompt them by mentioning specific projects, that the projects we are funding helped them significantly on their path to their current work.

    • It also suggested to me that high-quality object-level work can be as effective at achieving “meta” goals as meta work, for a variety of reasons. Such work often opens up exciting “surface area” for people who are good fits (e.g. one conceptual insight can create opportunities for valuable research on several better-scoped sub-questions, and a new org aimed at a key goal can often create roles for people who are ready to contribute but aren’t ready to be founders). It also demonstrates at a gut level that progress is plausible, and it showcases that there are talented teams doing this important work (and that working with them might be fun and a valuable learning experience).

      • Tangentially, I think a lot of community-builders in EA-land underestimate how much engaging with object-level EA causes themselves improves their ability to do successful outreach to people who are good fits for working in those causes. In my experience, it’s hard to convince someone to go into a cause area when you don’t really understand the cause area or why it’s important (and you risk putting them off by giving clumsy or inaccurate explanations). On the other hand, trying to understand a few different causes, and the different dynamics they face in trying to solve core problems, is really helpful for one’s own intellectual development.

  • The amount of longtermist-motivated funding available rose substantially

    • As Ben Todd at 80,000 Hours noted, EA-motivated funding available has risen vastly in the last few years. (Note that only some of this funding will go to longtermist projects, though I expect it to be a substantial fraction).

    • Also, I strongly suspect, based on conversations with some other funders and their representatives, that we could mobilize more funding than is currently clearly aimed at longtermist goals if there were clearer funding gaps than the ones that exist and there wasn’t an apparent funding overhang.

Metrics of impact

The above points led me to think that, going forward, grants of the kind we were making would likely be substantially “above the (new) bar”.

That updated my views in several ways:

  • It seemed much less likely that additional time spent evaluating those opportunities more thoroughly would lead to changes in our decision to fund them.

  • It seemed more likely that less cost-effective-seeming opportunities we hadn’t previously been exploring would “meet the bar”.

  • And if more opportunities were “above the bar” and there were more funding, it seemed more plausible that the bottleneck for distributing funding as well as possible would be grantmaker time available for opening up opportunities.

  • It generally seemed more helpful to spend time doing various forms of research and thinking that would help people identify and aim at impactful and neglected activities that might be strong fits for them.

So, we’re shifting to prioritize our and our grantees’ time more highly. And we’ve been creating various open calls[4] through which people can apply for fairly short-term support[5] (in contrast to grants where we support existing organizations). For these, we’ve started thinking in terms of how much quality-weighted longtermist output we think they’ll produce per hour we put into them, rather than focusing primarily on output per dollar. (In an ideal world, we’d have a conversion factor we trusted between longtermist time and dollars and be able to get an aggregate longtermist resource cost estimate; this is more of a heuristic about which factor will tend to dominate for the kinds of decisions we’re making, given the situation we find ourselves in.)
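As a toy illustration of the heuristic above (all of the numbers and the time-to-dollars conversion factor below are invented for the sake of the sketch, not real figures from our grantmaking):

```python
# Toy sketch (all numbers invented) of cost- vs. time-effectiveness.
# Each grant: (impact units, dollars spent, hours of EA labor consumed).
grants = {
    "labor_heavy_program": (100, 200_000, 2_000),
    "money_heavy_scholarship": (80, 400_000, 200),
}

# Hypothetical conversion between EA hours and dollars; in practice
# we don't have a trusted value for this, which is why we lean on the
# per-hour heuristic instead of a single aggregate number.
DOLLARS_PER_EA_HOUR = 500

for name, (impact, dollars, hours) in grants.items():
    per_dollar = impact / dollars
    per_hour = impact / hours
    # Aggregate "longtermist resource cost" under the assumed conversion.
    per_resource_dollar = impact / (dollars + hours * DOLLARS_PER_EA_HOUR)
    print(f"{name}: {per_dollar=:.6f} {per_hour=:.3f} {per_resource_dollar=:.6f}")
```

On these made-up numbers, the scholarship looks worse per dollar but far better per hour, and once EA labor is priced in at the assumed rate it comes out ahead on the aggregate measure too; which metric dominates depends entirely on that conversion factor.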

On the one hand, I think giving in this category (the short-term support, including for less EA-engaged individuals) tends to be less impactful per dollar than many other outreach activities aimed at less EA-engaged people.[6] But I think it can have more positive impact per hour of EA (grantmaker and grantee) labor used.

For example, when we fund, e.g., 80,000 Hours, we (amongst other activities) support their full-time advisors, who advise interested people about how to have more impactful careers. With our scholarship programs, we’re also trying to get people to spend more time on more impactful activities. But rather than doing this via the 80k advisors, our scholarship programs use money “directly” (without much intermediating EA labor) to try to make impactful careers more accessible and attractive. In general, we think we get less impact per dollar from interventions that consume money “directly” like this. But since EA labor is the scarcer resource in many contexts, these types of interventions can make sense for grantmakers to prioritize.

I think it’s good for people starting projects of various kinds to think through not just monetary costs, but also the amount of aligned EA labor required to make a project work well. However, I expect most important longtermist projects to consume a ton of EA labor (including high-opportunity-cost labor), and I’m worried many newer EAs are already too hesitant to ask for support and advice because of personal and professional underconfidence, so the overall message here is admittedly a confusing one.

Other changes that seem important to me

I’m not going to try to justify these now, just share my impressions sans evidence or explanation.

  • It seems like, relative to a few years ago, the rate of new meta projects spinning up has increased substantially. I’m extremely interested to see whether early indicators of EA/longtermist community growth start to pick up again in the next year or two in response (and I’ll be a lot more pessimistic about the value of much of our work if that doesn’t happen).

  • My sense is that, relative to when this post was written, it’s substantially easier to get an EA job, and in fact there tends to be substantial competition over the most promising-seeming hires (though there are still many more applicants than jobs). This is probably healthier for the pool of people who want to work at EA organizations, though it’s also a potentially worrying indicator of the number of projects (and resulting labor needs) growing faster than the pool of people who want to join those projects (despite many of the jobs now offering meaningfully higher salaries and better benefits, and the field writ large having more roles and thus higher effective job security).

Our mistakes

By which I mostly mean “my mistakes”, given how recently my teammates joined and got up to speed, and my responsibility for final calls about team strategy and direction.

I think:

  • I should have hired more people, more quickly. And, had a slightly lower bar for hiring in terms of my confidence that someone would be a good fit for the work, with corresponding greater readiness to part ways if it wasn’t a good fit.

  • I should have been faster to reorient towards valuing time and creating opportunities, relative to evaluating existing grants and their impact per dollar, and more intense about communicating to grantees and others about this change. Relatedly, I probably asked for too much information from grantees for too long, and I wish I’d been more comfortable advocating forcefully for high-EV bets even when evidence was very sparse.

  • I generally tend towards spreading myself and my team too thin and taking on too many projects, rather than either critically evaluating which projects are most important and focusing on them (including, potentially, focusing on just one to the temporary exclusion of all else), or implementing meta-level changes that let us take on more projects without becoming overstretched (like hiring or streamlining our processes, which we’ve also been doing). When I’ve tried to make these meta-level changes, the time has generally felt very well spent. I’ve often found people on my team unable to put focused full-time effort into a project that might have deserved it because of other ongoing responsibilities, and I think that’s caused me to avoid considering particularly time-consuming projects.

On a meta level, I think most of my mistakes revolve around being unnecessarily slow to reorient around a change. I’m trying to address that pattern: when I notice myself thinking that we might be erring slightly in some direction, I now try to more quickly evaluate the hypothesis that we’re actually erring substantially and that fixing it should be a top priority.

The other, weaker pattern I noticed was being bottlenecked on emotional pain tolerance and reputational concerns (e.g. related to advocating for very uncertain grants for which I have little evidence when they have a reasonable probability of going poorly, or making riskier hiring decisions which might end in mutual unhappiness).

Mistakes I’m worried we will make

  • Our area is rife with opportunities to fall prey to measurability bias, either directly or mediated by status gradients where we’re socially rewarded for reporting measurable, impressive results. I think that if we aren’t careful, that could cause us to focus on ambitious grantmaking projects that spend a lot of money and/or involve impressive-sounding figures (e.g. funding very large numbers of people, sponsoring very popular content, or working with prestigious/high-profile people). That could come at the cost of the kind of work that seems most promising to us when we’re at our most reflective (which tends slightly more towards helping exceptionally promising people become more involved and resolve key cruxes in a more targeted way), or lead us to do things that end up being net negative. It’s complicated, though, because I think all of the kinds of projects above have the potential to be really impactful.

    • The above concern seems especially relevant as we hire more people; I think I’ll have to rely more on trust and metrics relative to really developing my own inside view about particular projects.

  • I find myself flinching away from pessimistic models of how transformative artificial intelligence (TAI) could unfold, e.g. along the lines of what I hear from people at MIRI about very high odds of unaligned TAI being developed by default and most or all AI alignment agendas having little hope of success, with catastrophic results. Large parts of those models tend to make sense to me when I try to evaluate the arguments for myself, but they also frighten me. It seems like in those worlds, it’s harder for people in my position to have a big positive impact, which is demotivating. I’m worried about motivated reasoning causing me to underestimate how likely this kind of situation is.

    • I think it’s often useful for longtermists to act as though they are in plausible-seeming worlds where they can have an unusually big impact, and to ignore worlds they can’t affect very much, because the expected value of one’s actions is likely dominated by the worlds where one is well-positioned to have a big impact. But I’m still concerned about (a) missing hard-but-possible routes to substantial positive impact in these worlds and (b) accidentally having substantially negative impact in those worlds, if it’s easy to have negative impact but hard to have positive impact.

  • I (and probably other grantmakers) can end up spending lots of time on the most borderline fund vs. not-fund decisions. Deciding whether to recommend funding is often the most clear and salient decision a grantmaker can make, and grants that are near the bar are the most challenging to decide on. But, those decisions generally have pretty low stakes, somewhat by definition (if you’re correct that the EV of the grant is right around the bar for funding, the decision to fund or not will lead to only small gains or losses in expectation[7]). I think it’s better to think about options besides funding or not funding — like creating new programs, seeing whether you can somehow help a promising grantee, or sharing information and insights with other funders or grantees — but it’s a bit less intuitive to do so.
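To make the “low stakes near the bar” point concrete, here’s a minimal sketch with invented numbers: the expected gain or loss from a fund/don’t-fund call scales with the gap between a grant’s per-dollar EV and the bar (the EV of the marginal alternative use of the money), so borderline grants have small stakes however long you deliberate over them.

```python
# Minimal sketch (invented numbers): stakes of a fund/don't-fund call.
BAR = 1.0  # expected impact per dollar of the marginal alternative use of funds

def decision_stakes(ev_per_dollar: float, grant_size: float) -> float:
    """Expected impact gained (or lost) by funding rather than not funding."""
    return (ev_per_dollar - BAR) * grant_size

borderline = decision_stakes(1.05, 100_000)  # grant EV barely above the bar
clear_win = decision_stakes(3.0, 100_000)    # grant EV far above the bar
print(borderline, clear_win)
```

On these numbers, getting the borderline decision wrong costs a small fraction of what getting the clear case wrong would, which is the argument for spending marginal deliberation time on the non-borderline options (new programs, helping grantees, sharing information) instead.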

Looking forward

Over the next few years, I expect us to spend more time on projects engaging with high-school students (largely for the reasons listed here) as well as working more directly with community-building efforts aimed at undergraduates.

If we found the right people (we’re hiring!), I could also imagine us spending tens of millions of dollars more on the following projects, which could easily end up seeming similarly cost-effective to our previous grantmaking:

  • AI Safety-focused meta work, i.e. aiming specifically at causing more people who are good fits for AI safety research to work on it (via projects like EA Cambridge’s AGI Safety Fundamentals or supporting AI safety-focused groups at universities).

  • Supporting the production of more excellent content on EA, longtermism, and transformative technology (e.g. books, web content, YouTube videos).

  • Rationality-and-epistemics-focused community-building. Right now, I think the EA community is growing much faster than the rationalist community, even though a lot of the people I think are most impactful report being really helped by some rationalist-sphere materials and projects. Also, it seems like there are a lot of projects aimed at sharing EA-related content with newer EAs, but much less in the way of support and encouragement for practicing the thinking tools I believe are useful for maximizing one’s impact (e.g. making good expected-value and back-of-the-envelope calculations, gaining facility for probabilistic reasoning and fast Bayesian updating, identifying and mitigating one’s personal tendencies towards motivated or biased reasoning). I’m worried about a glut of newer EAs adopting EA beliefs but not being able to effectively evaluate and critique them, nor to push the boundaries of EA thinking in truth-tracking directions.

  • Trying to make EA ideas and discussion opportunities more accessible outside current EA hubs, especially outside the Anglophone West (e.g. via translating content and supporting groups at relevant universities). I think that in the English-speaking Western world, there are or soon will be somewhat diminishing returns to additional recruiting efforts in the most recruiting-saturated contexts; this doesn’t seem as true for other locations.

  • Supporting marketing and advertising for high-quality content that discusses ideas important to EA or longtermist projects. I think this could be good because, while writing strong original content generally takes deep understanding of the relevant ideas (which is in short supply), spreading existing content doesn’t depend on that understanding nearly as much (it relies more on funding and marketing skill, which are less scarce).

  1. ^

    Sometimes, our team supports projects that aren’t directly aimed at these priorities, often because we think their value from a movement-building perspective is sufficiently high that it justifies supporting them (i.e. in those cases we might have different motives for supporting a project than the people who work on it have for working on it.)

  2. ^

    Two caveats about this though:

    1. A relatively small fraction of the funding is for different regranting programs, and so has not in fact yet “bottomed out”, and will absorb more grantmaker labor before that happens (and might be reported by another entity as part of their money moved in the future).

    2. This is somewhat driven by unusually large outliers. However, grants that were previously outliers in terms of size are becoming more common. I’m left pretty uncertain about how much we should expect to give this year.

  3. ^

    I.e. in expectation a better use of funding than the longtermist last dollar. See here for a discussion about last dollars in the global health and wellbeing space.

  4. ^

    So far, this includes our RFP for outreach projects, our course development program, our early-career funding, and our undergraduate scholarship, with more to come. The FTX Foundation Future Fund also currently has an open round.

  5. ^

    I’d love to have a better name for this category; suggestions welcome.

  6. ^

    There are also programs that support highly EA-engaged individuals, which I think can be really impactful per dollar and hour, but there’s a limited number of such people and so only so much financial support to provide.

  7. ^

    Occasionally, spending more time can lead one to realize that a grant is actually really promising or really net negative, but I think that’s pretty rare.