We’re no longer “pausing most new longtermist funding commitments”

In November, I wrote about Open Philanthropy’s soft pause of new longtermist funding commitments:

We will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/​or we might want to provide smaller amounts of money to projects we previously would have supported more generously …

Open Philanthropy also need[s] to raise its bar in light of general market movements (particularly the fall in META stock) and other factors … the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar …

It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few …

Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so …

It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants).

Since then, we’ve done some work to assess where our new funding bar should be, and we have created enough internal guidance that the pause no longer applies. (The pause stopped applying about a month ago, but it took some additional time to write publicly about it.)

What did we do to come up with new guidance on where the bar is?

What we did:

  • Ranking past grants: We created[1] a rough ranking of nearly all the grants we’d made over the last 18 months; we also included a number of grants that now-defunct FTX-associated funders had made.

    • The basic idea of the ranking is essentially: “For any two grants, the grant we would make if we could only make one should rank higher.”[2] It is based on a combination of rough quantitative impact estimates, grantmaker intuitions, etc.

    • With this ranking, we are now able to take any given total Open Phil longtermist budget for the time period in question, and identify which grants would and would not have made the cut under that budget.

    • In some cases, we gave separate rankings to separate “tranches” of a grant (e.g., the first $1 million of a grant might be ranked much higher than the next $1 million, because of diminishing returns to adding more funding to a given project).

  • Annual spending guidelines: We estimated how much annual spending would exhaust the capital we have available for longtermist grantmaking over the next 20-50 years (the difference between 20 years and 50 years is not huge[3] in per-year spending terms), to get some anchors on what our annual budget for longtermist grantmaking should be.

    • We planned conservatively here, assuming that longtermist work would end up with 30-50% of all funding available to Open Philanthropy.[4] OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.

  • Setting the bar: We divided our ranked list of grants into tiers (tier 1 being the best-ranked grants, tier 2 the next-best-ranked, etc.), and considered how much we would spend if we funded everything at tier 2 and better, tier 3 and better, etc. After considering a few possibilities, we reached broad all-things-considered agreement (with a heavy dose of intuition) that we should fund everything at tier 4 and better, as well as funding tier-5 grants under various conditions (e.g., not having huge time costs of grant investigation, and recognizing that we might later stop funding tier-5 grants if our set of giving opportunities continues to grow).

  • I disseminated guidance along these lines to grant investigators.
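
The mechanics of the ranking step can be sketched in a few lines. This is purely illustrative: all grant names, amounts, and tiers below are invented, and the real rankings are internal.

```python
# Illustrative sketch of the ranking/tier exercise described above.
# Grants (and separate tranches of one grant) are ranked best-first and
# assigned a tier; all data here is made up.

ranked_tranches = [
    # (name, amount in $ millions, tier)
    ("grant_A_first_1M", 1.0, 1),  # tranches of one grant can rank separately
    ("grant_B",          0.5, 2),
    ("grant_A_next_1M",  1.0, 4),  # diminishing returns: next tranche ranks lower
    ("grant_C",          2.0, 5),
]

def spend_at_tier(tranches, max_tier):
    """Total spending if we fund everything at max_tier or better."""
    return sum(amount for _, amount, tier in tranches if tier <= max_tier)

def cut_under_budget(tranches, budget):
    """Given a total budget, which tranches make the cut (best-ranked first)?"""
    funded, remaining = [], budget
    for name, amount, _ in tranches:
        if amount <= remaining:
            funded.append(name)
            remaining -= amount
    return funded

print(spend_at_tier(ranked_tranches, 4))       # 2.5 ($M spent at tier 4 and better)
print(cut_under_budget(ranked_tranches, 2.0))  # ['grant_A_first_1M', 'grant_B']
```

“Fund everything at tier 4 and better” is then just a filter over the ranked list, and the same list answers the converse question of which grants a given total budget would have covered.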

This was a very messy, pragmatic exercise intended to get out some quick guidance that would result in a sustainable spending level with some room for growth. We’re making plans to improve on it in a number of ways, and this may lead to further adjustments to our bar and how that bar is operationalized in the relevant program areas.

What is the bar now?

I’m not able to give much public detail on where the bar is, because the bar is defined with reference to specific grants (e.g., “fund everything at tier 4 and above” means “fund everything that we think is at least as good value-for-money as low-end tier-4 grants,” and there’s an assumption that grant investigators will have enough familiarity with some specific tier-4 grants to have a sense for what this means). But hopefully these numbers will be somewhat informative:

  • About 40% of our longtermist grantmaking over the last 18 months (by dollars) would have qualified for tier 4 or better (which, under the new guidance, means it would be funded). Note that this figure refers only to our longtermist grantmaking, and does not include grants by other funders (we included some of the latter in our exercise, but I’m reporting a figure based on Open Philanthropy alone because I think it will be easier to interpret).

  • About 70% would have qualified for tier 5 or better (which, under the new guidance, means it would be funded under some conditions—low time costs to investigate, hesitance to make implicit very long-term commitments since we might raise our bar in the future).

  • So about 40-70% of the grantmaking we did over the last 18 months would’ve qualified for funding under the new bar. I think 55% would be a reasonable point estimate.

    • This doesn’t mean we think that 45% of our past grants were “bad.” Our bar has just gotten much higher, due to the decline in other funding available and the growth in the longtermist community and other relevant communities (e.g., AI alignment), noted in the blockquote at the beginning of this post.

    • In spite of the higher bar, we expect our overall longtermist funding to be flat or up in the coming years, because there are now so many more good otherwise-unfunded giving opportunities.

Sometimes, we see strong grant applicants underestimate the strength of their applications. Though we’ve raised the bar, we still encourage people to err on the side of applying for funding.

A note on budget-based vs. value-based bar-setting

In theory, the ideal way to set the bar for our longtermist giving would be to estimate the value-per-dollar of each grant (in terms of the long-run future, perhaps roughly proxied by reduced probability of existential catastrophe), estimate the value-per-dollar of unknown future grants, and make any grant whose value-per-dollar exceeds that of the lowest-expected-value[5] grant we would ever otherwise make (and correspondingly refrain from making that lowest-value grant).

Instead, we set the bar based on something more like: “If we use this policy, we have a reasonable spending rate relative to our capital (specifically, our spending can roughly double before it would be on pace to spend down the capital within ~20 years).” You could think of this as corresponding to an assumption like: “Over the next few years, giving opportunities will grow faster than available capital grows; after that, we aren’t sure and just assume they’ll grow at the same rate for a while; in the long run, we want to be on pace to spend down capital within 20 years or so, and hope that other funders come in by then to the extent this whole operation still makes sense.” Of course, we can adjust our spending rate over time (and as noted above, the difference between “spend down over 20 years” and “spend down over 50 years” is not huge in per-year spending terms); this is just the rough sort of picture that we have in mind at the moment.
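
The spend-down arithmetic here (and in footnote 3) is easy to check with a short simulation. The sketch below assumes a constant real investment return and constant real spending, expressed as a fraction of initial capital:

```python
# Check the spend-down figures: constant real spending `spend` (as a fraction
# of initial capital) against a constant annual real return `ret`.

def years_to_exhaustion(spend, ret=0.05, max_years=200):
    """Years until the capital is gone, or None if it is never exhausted."""
    capital = 1.0
    for year in range(1, max_years + 1):
        capital = capital * (1 + ret) - spend
        if capital <= 0:
            return year
    return None

print(years_to_exhaustion(0.08))   # ~20 years at 8% spending
print(years_to_exhaustion(0.055))  # ~50 years at 5.5% spending
print(years_to_exhaustion(0.049))  # None: a bit under 5% never runs out
```

At a 5% real return, this reproduces the footnote’s figures: roughly 20 years at 8% spending, roughly 50 years at 5.5%, and no exhaustion at all a bit under 5%.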

The first approach would be better if we could (without too much time cost) produce informative numbers. In practice, I’ve tried and seen a number of attempts to do this, and they haven’t produced action recommendations that I believe/​understand enough to deviate from the action recommendations I come up with using more informal methods (like what we’ve done here). This isn’t to say that I think the more formal approach is hopeless, just that I think doing it well enough to affect our actions will require a lot more time investment than has been put in so far, and we’d rather spend our time on other things (such as sourcing and evaluating grants) for the time being.


  1. Roughly speaking, each team lead ranked grants made by their team, then I merged them into a master ranking that mostly deferred to team leads’ judgments, but incorporated my own as well.

  2. This is a fairly similar idea to ranking by “impact per dollar (in terms of the long-run future, roughly proxied by reduced probability of existential catastrophe),” but not exactly the same: e.g., in this framework I’d prefer to make a large grant with very high impact per dollar (unusually high even by our standards) over a very small grant with slightly higher impact per dollar (since I expect the money saved by making the smaller grant to end up effectively spent at much lower per-dollar value).

    This ranking can depend on the ranker’s opinion of how good future giving opportunities will be (and on lots of other hard-to-estimate things). In practice, I doubt that important variation in opinions on future giving opportunities affected the rankings much. Having discussed this further with Bastian Stern, I think “impact per dollar” is actually better. The instructions for rankers were pretty vague on this point and I suspect they were mostly using “impact per dollar” anyway.

  3. E.g., at an annual real investment return of 5%, spending ~8% of (initial) capital each year would spend down the capital in ~20 years; spending ~5.5% would spend down the capital in ~50 years; spending a bit under 5% would never exhaust the capital.

  4. Longtermist giving has accounted for ~30% of the funding we’ve allocated through the end of 2022.

  5. According to estimates made at the time. Another way of thinking about this is as the “last grant” we’d ever make, in the sense that we’d prioritize all others above it.