LTFF and EAIF are unusually funding-constrained right now
UPDATE 2023/09/13:
Including only money that has already landed in our bank account and extremely credible donor promises of funding, LTFF has raised ~$1.1M and EAIF has raised ~$500k. After Open Phil matching, this means LTFF now has ~$3.3M in additional funding and EAIF has ~$1.5M in additional funding.
From my (Linch's) perspective, this means neither LTFF nor EAIF is very funding-constrained anymore for the time period we wanted to raise money for (the next ~6 months); however, both funds are still funding-constrained and can productively make good grants with additional funding.
See this comment for more details.
Summary
EA Funds aims to empower thoughtful individuals and small groups to carry out altruistically impactful projects—in particular, enabling and accelerating small/medium-sized projects (with grants <$300K). We are looking to increase our level of independence from other actors within the EA and longtermist funding landscape and are seeking to raise ~$2.7M for the Long-Term Future Fund and ~$1.7M for the EA Infrastructure Fund (~$4.4M total) over the next six months.
Why donate to EA Funds? EA Funds is the largest funder of small projects in the longtermist and EA infrastructure spaces, and has had a solid operational track record of giving out hundreds of high-quality grants a year to individuals and small projects. We believe that we’re well-placed to fill the role of a significant independent grantmaker, because of a combination of our track record, our historical role in this position, and the quality of our fund managers.
Why now? We think now is an unusually good time to donate to us, as a) we have an unexpectedly large funding shortage, b) there are great projects on the margin that we can’t currently fund, and c) more stabilized funding now can give us time to try to find large individual and institutional donors to cover future funding needs.
Importantly, Open Philanthropy is no longer providing a guaranteed amount of funding to us and instead will move over to a (temporary) model of matching our funds 2:1 ($2 from them for every $1 from you, up to $3.5M from them per fund).
Where to donate: If you’re interested, you can donate to either Long-Term Future Fund (LTFF) or EA Infrastructure Fund (EAIF) here.[1]
Some relevant quotes from fund managers:
Oliver Habryka
I think the next $1.3M in donations to the LTFF ($430k pre-matching) are among the best historical grant opportunities in the time that I have been active as a grantmaker. If you are undecided between donating to us right now vs. December, my sense is now is substantially better, since I expect more and larger funders to step in by then, while we have a substantial number of time-sensitive opportunities right now that will likely go unfunded.
I myself have a bunch of reservations about the LTFF and am unsure about its future trajectory, and so haven’t been fundraising publicly, and I am honestly unsure about the value of more than ~$2M, but my sense is that we have a bunch of grants in the pipeline right now that are blocked on lack of funding that I can evaluate pretty directly, and that those seem like quite solid funding opportunities to me (some of this is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality).
Lawrence Chan
“My main takeaway from [evaluating a batch of AI safety applications on LTFF] is [LTFF] could sure use an extra $2-3m in funding, I want to fund like, 1/3-1/2 of the projects I looked at.” (At the current level of funding, we’re on track to fund a much lower proportion).
Related links
EA Funds organizational update: Open Philanthropy matching and distancing
Asya Bergal’s Reflections on my time on the Long-Term Future Fund
Linch Zhang’s Select examples of adverse selection in longtermist grantmaking
Our Vision
We think there is a significant shortage of independent funders in the current longtermist and EA infrastructure landscape, resulting in fewer outstanding projects receiving funding than is good for the world. Currently, the primary source of funding for these projects is Open Philanthropy, and whilst we share a lot of common ground, we think we add value in the following ways:
Increasing the total grantmaking capacity within key cause areas.
Causing great projects to counterfactually happen in the world, or saving time and effort for people doing great projects who would otherwise spend significant time fundraising or waiting for grants to come in.
Supporting a set of worldviews that we find plausible and that are not currently well represented among grantmakers (though we have substantial overlap with Open Philanthropy’s worldview and there is a range of views on how much we should be directly optimizing for diversification away from their perspectives).
Emphasizing contact with reality: most of our grantmakers spend most of their time trying to directly solve problems of importance within their cause area, rather than engaging in “meta” activities like grantmaking. We think this is important, as grantmaking often has very poor feedback loops (particularly longtermist grantmaking).
Providing early-stage funding to allow applicants to test their fit for work and “get ready” to seek funding from other funders that specialize in larger grant sizes.
Improving the epistemic environment within EA by making it easier for smaller projects to disagree with Open Philanthropy without worrying that this will significantly reduce their chance of being funded in the future.
Helping to identify harmful projects whilst being aware of factors such as the unilateralist curse and information cascades.
Increasing the resilience, robustness and diversity of funders within EA and longtermism.
Alongside the above, EA Funds has ambitions to pursue new ways of generating value by:
Creating an expert-led active grant-making program to create counterfactual impactful projects (starting with longtermist information security).
Modeling and shaping community norms of transparency, integrity, and criticism to improve the epistemic environment within EA and associated communities.
Our Ask
We are looking to raise ~$4.4M from the general public to support our work over the next 6 months:
~$2.7M for the Long-Term Future Fund.
This is ~$2M above our expected ~$720k in donations over the next 6 months.
~$1.7M for the EA Infrastructure Fund.
This is ~$1.3M above our expected ~$400k in donations over the next 6 months.
This will be matched by Open Phil at a 2:1 rate ($2 from Open Phil per $1 donated to a fund), with a ceiling of a $3.5M contribution from Open Phil (per fund). You can read more about the matching here.
The EAIF and LTFF have received very generous donations from many individuals in the EA community. However, donations to the EAIF and LTFF have recently been quite low, especially relative to the quality and quantity of applications we’ve had in the last year. While much of this is likely due to the FTX crash and the subsequent increase in funding gaps at other longtermist organizations, our guess is that it is also partially due to tech stocks and crypto doing poorly in the last year (though we hope that recent market trends will bring back some donors).
Calculation for LTFF funding gap
The LTFF has an estimated ideal disbursement rate of $1M/month, based on our post-November 2022 funding bar, which Asya estimated[2] by looking at the funding gaps and marginal resources within the longtermist ecosystem overall. This is $6M over the next 6 months.
I also think LTFF donors should pay $200k over the next 6 months ($400k annualized) as their “fair share” of EA Funds operational costs. So in total, LTFF would like to spend $6.2M over the next 6 months.
Caleb estimated ~$700k in expected donations from individuals by default in the next 6 months, based solely on extrapolation from past trends. With Open Phil donation matching, this comes out to a total of $2.1M in expected incoming funds, or a shortfall of $4.1M.
To cover the remaining $4.1M, we would like individual donors to contribute an additional $2M, where Open Phil will provide $2.1M of matching for the first $1.05M.
To get a sense of what projects your marginal dollars can buy, you might find it helpful to look at the $5M tier of the LTFF Funding Thresholds Post.
Calculation for EAIF funding gap
The EAIF has an estimated ideal disbursement rate of $800k/month, based on the proportion of our historical spend rate that we believe is above Open Phil’s bar for EA community building projects (though note that this was based on fairly brief input from Open Phil, and I didn’t check with them about whether they agree with this claim). This is $4.8M over the next 6 months.
I also think EAIF donors should pay $200k over the next 6 months ($400k annualized) as their “fair share” of EA Funds operational costs. So in total, EAIF would like to spend $5M over the next 6 months.
Caleb estimated $400k in expected donations from individuals by default in the next 6 months, based solely on extrapolation from past trends. With Open Phil donation matching, this comes out to a total of $1.2M in expected incoming funds, or a shortfall of $3.8M.
To cover the remaining $3.8M, we would like individual donors to contribute an additional $1.3M, where Open Phil will provide $2.5M in donation matching.
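For readers who want to check the arithmetic, here’s a minimal sketch in Python reproducing both funding-gap calculations above (this is a reconstruction from the figures quoted in this post, not our actual planning spreadsheet):

```python
# Rough sketch of the funding-gap arithmetic above. All figures in $M over 6 months.

def funding_gap(ideal_per_month, ops, expected_donations,
                match_rate=2.0, match_cap=3.5):
    target = ideal_per_month * 6 + ops                 # desired total spend
    expected_match = min(expected_donations * match_rate, match_cap)
    shortfall = target - (expected_donations + expected_match)
    remaining_match = match_cap - expected_match       # OP matching still available
    # Matched donor dollars move $(1 + match_rate) each; unmatched move $1.
    if shortfall <= (remaining_match / match_rate) * (1 + match_rate):
        donors = shortfall / (1 + match_rate)          # matching cap doesn't bind
        match = donors * match_rate
    else:
        match = remaining_match                        # matching cap binds (LTFF case)
        donors = shortfall - match
    return shortfall, donors, match

for name, args in [("LTFF", (1.0, 0.2, 0.7)), ("EAIF", (0.8, 0.2, 0.4))]:
    s, d, m = funding_gap(*args)
    print(f"{name}: shortfall ${s:.1f}M = ${d:.2f}M from donors + ${m:.2f}M OP match")
# LTFF: shortfall $4.1M = $2.00M from donors + $2.10M OP match
# EAIF: shortfall $3.8M = $1.27M from donors + $2.53M OP match
#   (the post rounds the EAIF split to $1.3M / $2.5M)
```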
Potential change for operational expenses payment
Going forward, we would also like to move towards a model where donors directly pay for our operational expenses (currently we fundraise for operational expenses separately, so 100% of donations from public donors go to our grantees). We believe that the newer model is more transparent, as it lets all donors more clearly see the true costs and cost-benefit ratio of their donations. However, making the change is still pending internal discussions, community feedback, and logistical details. We will make a separate announcement if and when we switch to a model where a percentage of public donations goes to cover our operational expenses. See Appendix A for a calculation of operational expenses.
Why give to EA Funds?
We think EA Funds is well-positioned to be a significant independent grantmaker for the following reasons.
We have knowledgeable part-time fund managers who do direct work in their day jobs: we have built several grantmaking teams with a broad range of expertise. These managers usually dedicate the majority of their time to hands-on efforts addressing critical issues. We believe this direct experience enhances their judgment as grantmakers, enabling them to identify important projects with high accuracy.
Specialization in early-stage grants: we made over 300 grants of under $300k in 2022. To our knowledge, that’s more grants of this size than any other EA-associated funder.
We are the largest open application funding source (that we are aware of) within our cause areas. Our application form is always open, anyone can apply, and grantees can apply for a wide variety of projects relevant to our funds’ purposes (as opposed to e.g. needing to cater to narrow requests for proposals). We believe this is critical to us having access to grant opportunities that other funders do not have access to, allowing us to rely on formal channels rather than informal networks.
Our operational track record. In 2022, EA Funds paid out ~$35M across its four funds, with $12M to the Long-Term Future Fund, $13M to the EA Infrastructure Fund, $6.4M to the Animal Welfare Fund, and $4.8M to the Global Health and Development Fund. This requires (among other things) clearing nontrivial logistical hurdles in following nonprofit law across multiple countries, maintaining consistent operational capacity, and keeping a careful eye towards downside risk mitigation.
We believe our grants are highly cost-effective. Our current best guess is that we have successfully identified and given out grants of similar ex-ante quality to (e.g.) Open Phil’s AI safety and community building grants, some of which Open Phil would counterfactually not have funded.[3] This gives donors an opportunity to provide considerable value.
We are investigating new value streams. We would like to pursue ‘DARPA-style’ active grantmaking in priority areas (starting with information security). We are also actively considering setting up an AI Safety-specific fund, encouraging donors interested in AI safety (but not EA or longtermism) to donate to projects that mitigate large-scale globally catastrophic AI risks.
We are one of the main public longtermist donation options available for individual donors to support. We believe that we are a relatively transparent funder, and we are currently thinking about how we can increase our transparency further whilst moving more quickly and maintaining our current standard of decision-making.
We are primarily looking for funding to support the Long-Term Future Fund and the EA Infrastructure Fund’s grantmaking.
The Long-Term Future Fund is primarily focused on reducing catastrophic risks from advanced artificial intelligence and biotechnology, as well as building and equipping a community of people focused on safeguarding humanity’s future potential. The EA Infrastructure Fund is focused on increasing the impact of projects that use the principles of effective altruism, in particular amplifying the efforts of people who aim to do an ambitious amount of good from an impartial welfarist and scope-sensitive perspective. We have included some examples of grants each fund has made in the highlighted grants section.
Our Fund Managers
We lean heavily on the experience and judgement of our fund managers. We have around five fund managers on each fund at any given time.[4] Our current fund managers include:
Linchuan Zhang (LTFF): Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
Oliver Habryka (LTFF): Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than other comparable communities.
Peter Wildeford (EAIF): co-executive director and co-founder of Rethink Priorities, a think tank dedicated to figuring out the best ways to make the world a better place.
Guest Fund Managers
Daniel Eth (LTFF): Daniel’s research has spanned several areas relevant to longtermism, and he’s currently focused primarily on AI governance. He was previously a Senior Research Scholar at the Future of Humanity Institute, and he has a PhD in Materials Science and Engineering from UCLA. He is currently self-employed.
Lauro Langosco (LTFF): Lauro is a PhD student with David Krueger at the University of Cambridge. His work focuses broadly on AI safety, in particular on demonstrations of alignment failures, forecasting AI capabilities, and scalable AI oversight.
Lawrence Chan (LTFF): Lawrence is a researcher at ARC Evals, working on safety standards for AI companies. Before joining ARC Evals, he worked at Redwood Research and as a PhD Student at the Center for Human Compatible AI at UC Berkeley.
Thomas Larsen (LTFF): Thomas was an alignment research contractor at MIRI, and he is currently running the Center for AI Policy, where he works on AI governance research and advocacy.
Clara Collier (LTFF): Clara is the managing editor of Asterisk, a quarterly journal focused on communicating insights on important issues. Before that, she worked as an independent researcher on existential risks. She has a Master’s in Modern Languages from Oxford.
Michael Aird (EAIF): Michael Aird is a Senior Research Manager in Rethink Priorities’ AI Governance and Strategy team. He also serves as an advisor to organizations such as Training for Good and is an affiliate of the Centre for the Governance of AI. His prior work includes positions at the Center on Long-Term Risk and the Future of Humanity Institute.
Huw Thomas (EAIF): Huw is currently working part-time on various projects (including a contractor role at 80,000 Hours). Prior to this, he worked as a media associate at Longview Philanthropy and a groups associate at the Centre for Effective Altruism, and was a recipient of a CEA Community Building Grant for his work at Effective Altruism Oxford.
You can find a full list of our fund managers here.[5]
If you have more questions, feel free to leave a comment here. Caleb Parikh and the fund managers are also happy to talk to donors potentially willing to give >$30k. Linch Zhang, in particular, has volunteered to talk about the LTFF.
Highlighted Grants
EA Funds has identified a variety of high-impact projects, at least some of which we think are unlikely to have been funded elsewhere. (However, for any specific grant listed below, we think there’s a fairly high probability it would otherwise have been funded in some form or another; figuring out counterfactuals is often hard.)
From the Long-Term Future Fund:
David Krueger - $200,000
Computing resources and researcher stipends at a new deep learning + AI alignment research group at the University of Cambridge.
Alignment Research Center - $72,000
A research & networking retreat for winners of the Eliciting Latent Knowledge contest with the aim of fostering promising research collaborations between junior researchers.
SERI MATS program - $316,000
8-week scholars program to pair promising alignment researchers with renowned mentors. This program has now grown into a more established program producing multiple people working full-time on alignment in established research organizations (with a smaller number of people pursuing independent research or starting new organizations).
Manifold Markets - $200,000
Stipend and expenses for 4 months for 3 FTE to build a forecasting platform made available to the public, based on user-created play-money prediction markets.
Daniel Filan - $23,544
We recommended a grant of $23,544 to pay Daniel Filan for his time making 12 additional episodes of the AI X-risk Research Podcast (AXRP), as well as the costs of hosting, editing, and transcription.
From the EA Infrastructure Fund:
Shauna Kravec & Nova DasSarma - $50,000
Compute infrastructure and dedicated support for AI safety researchers to run technical AI experiments. This later became Hofvarpnir Studios, which for a time provided compute for Jacob Steinhardt’s lab at UC Berkeley and the Center for Human-Compatible Artificial Intelligence (CHAI).
Finlay Moorhouse and Luca Righetti - $38,200
Ongoing support for “Hear This Idea”, a podcast showcasing new thinking in effective altruism.
Laura Gonzalez Salmerón, Sandra Malagón - $43,308
12-month stipend to coordinate and grow the EA Spanish speakers community and its projects.
Czech Association for Effective Altruism - $8,300
Expenses and stipend to create a short Czech book (~130 pages) and brochure (~20 pages) providing a good introduction to EA, in digital and print formats.
See a complete list of our public grants at this link. You can also read the most recent payout report by LTFF here.
Planned actions over the next six months
To achieve our goals of empowering thoughtful people to pursue impactful projects, we’ll attempt to do the following:
Asya Bergal will step down as chair of LTFF (Max Daniel has already stepped down as chair of the EAIF). Max and Asya both work for Open Phil, and we want to increase our separation from Open Phil.[6]
Open Phil also wanted to reduce entanglements between the two organizations, in part to mitigate downside reputational risks.
We are looking to find new fund chairs for both LTFF and EAIF.
We plan to onboard more fund managers to grow each fund substantially (aiming to double the staffing of each fund).
In recent months, LTFF has onboarded Lauro Langosco and Lawrence Chan, who will primarily focus on technical alignment grantmaking, as well as Clara Collier for her expertise in communications and general longtermism. The EAIF is in the process of onboarding new fund managers.
Open Phil has agreed to give us a 2:1 match for up to $7M total (up to $3.5M to each of EAIF and LTFF) for a 6-month period. While our ultimate goal is to develop our own robust funding base, in 2022 Open Philanthropy provided 40% of the funding for the Long-Term Future Fund and 84% for the EA Infrastructure Fund.[7] We see donation matching as a realistic intermediate step that enables us to pursue more intellectual independence.
This model replaces fixed grants from Open Philanthropy, and it reduces the fungibility of your donations: previously, an extra $1 to EA Funds in fundraising could result in a $1 reduction in Open Philanthropy’s grants to us, diverting those funds to their other projects. The newer approach allows funders to donate to EA Funds and support the specific value proposition that we, as opposed to Open Philanthropy, present.[8]
We are considering hiring or contracting out more non-grantmaking duties (e.g., website, project management, fundraising, communications) at EA Funds. Right now, Caleb is the only full-time employee of EA Funds, and plausibly having 0.5-1.5 more FTEs would both help existing projects go more smoothly and unlock new ambitious opportunities.
We are working with external investigators to do retroactive evaluations of past EAIF and LTFF grants, with the hopes that we can then have a clearer picture of a) how well the impact of our past grants compares to e.g., Open Phil’s, b) which of our broader categories of historical grants have been the most impactful, and c) other qualitative insights to help us improve further.
We aim to improve the operations of our passive grantmaking (funding of open grant applications) program, with a focus on improving the grantee experience by providing more support to grantees and getting back to grantees much more quickly.[9]
We are trying to reconceptualize and reframe the value proposition and strategic direction of EAIF in the coming months. While much of this will be contingent on the vision of the incoming fund chair, we’d like EAIF to have a more coherent and targeted vision, strategy, and value proposition for donors going forward.
We plan to create a new AI Safety-specific program for donors outside of EA/longtermism who want to decrease catastrophic risks from AI. We hope that such a program can inspire new donors to give to AI safety projects.
EA Funds is pursuing active grant-making programs, where we’ll actively seek out promising projects to fund. We’ll initially focus on Information Security field building. The current plan is for this program to initially be funded by Open Philanthropy, though if you are interested in contributing to this program in particular, please let us know.
Potential negatives to be aware of
Here are some reasons you might not want to donate to EA Funds:
Potential downside risks of LTFF or EAIF
Inability to fully screen for or prevent unilateral downside risks: EA Funds has much less control over and offers less guidance to our grantees than, e.g., the executive directors of a moderately-sized EA organization. So compared to larger organizations, we may be less able to prevent unilateral downside risks like the sharing of information hazards, or actions that pose reputational risks to effective altruism at large, or to specific EA subfields.
Centralization of funds: At the same time, we are implicitly asking for the centralization of funds from private donors into a single grantmaking entity. To the extent that you believe your counterfactual donation option is better and/or that more centralization is bad, you may wish to donate directly rather than pool your funds with other LTFF or EAIF donors.
Waste/Inefficient usage of human capital: Giving money to EA Funds rather than larger organizations implicitly subsidizes a culture and community of grantseekers who are supported by small grants. To the extent that you believe this is a less efficient usage of human capital than plausible counterfactuals for talented people (e.g. getting a job in tech, policy, or academia), you might want to shift away from EA grantmakers that give relatively small individual grants.
Note that we consider these issues to be structural and do not realistically expect resolutions to these downside risks going forward.
Areas of improvement for the LTFF and EAIF
Historically, we’ve had the following (hopefully fixable) problems:
Slower-than-ideal response times: in the past year, our median response time has been around 4 weeks, with high variance; we’d like to get this down to closer to 2 weeks, with 95% of applications responded to within 4 weeks.
Limited feedback/advice given to grantees: we generally don’t give feedback to rejected applicants. We currently give some feedback to promising grantees but much less than we’d give if we had more grantmaking capacity.
Insufficient active grantmaking: We spend some time trying to improve our grantees’ projects, but we have invested fairly little in active grantmaking (actively identifying promising projects and creating/supporting them).
Missing areas of subject matter expertise: The scopes of both funds are quite expansive. This means sometimes all of the existing grantmakers lack sufficient direct technical subject matter expertise to evaluate grants in certain areas, and thus have to rely on external experts. For example, the LTFF does not currently have a technical expert in biosecurity.
For more, you can read Asya’s reflections on her time as chair of LTFF.
EAIF vs LTFF
Some donors are interested in giving to both the EAIF and LTFF and would like advice on which fund is a better fit for them.
We think that the EAIF is a better fit for donors who:
Are interested in supporting a portfolio of meta projects covering a range of plausible worldviews (both longtermist and non-longtermist).
Are interested in building EA and adjacent communities.
Believe that EA (and EA community building) has historically been very good for the world.
Believe in multiplier-effect arguments (donating $100 to an EA group could plausibly create far more than $100 in donations to high-impact charities by encouraging more people to donate).
Expect the EAIF and LTFF to have similar diminishing marginal returns curves and want to donate to the fund with lower funding. (EAIF and LTFF each receive about 1000 grant applications per year, but EAIF has less funding currently committed)
We think that the LTFF is a better fit for donors who:
Are more compelled by longtermist cause areas than by other EA cause areas.
Are particularly interested in AI safety.
Are more interested in direct work than in “meta” work that has a longer chain of impact/reasoning.
Are more excited about the $5M tier of marginal LTFF grants than about what they consider to be the marginal EAIF grant.
Closing thoughts
This post was written by Caleb Parikh and Linch Zhang. Feel free to ask questions or give us feedback in the comments below.
If you are interested in donating to either LTFF or EAIF, you can do so here.
Appendix A: Operational expenses calculations and transparency
In the last year, EA Funds has disbursed $35M and spent ~$700k in operational expenses. The vast majority of the operational expenses were spent on LTFF and EAIF, as the Global Health and Development Fund and Animal Welfare Fund are operationally much simpler.
Historically, ~60-80% of the operational expenses have been paid to EV Ops, for grant disbursement, tech, legal, and other ops.
The remaining 20-40% is used for:
The salary of Caleb, who leads EA Funds (~$100k/year plus benefits).
Payments to grantmakers at $60/hour, though many volunteer their time for free.
Contractors who work on various projects, earning between $35 and $100/hour.
I (Linch) ballparked expected expenditures going forward (assuming no cutbacks) at ~$800k annually. I estimated the increase based on a) inflation and b) our wanting to take on more projects, with some savings from slowing down the rate of disbursements a little. But this estimate is not exact.
Since LTFF and EAIF incur the highest expenses, I suggest donors to each fund contribute around $400k yearly, or $200k every six months.
As for where we might cut or increase spending:
Reducing EV Ops costs would be challenging and may require moving EA Funds out of EV and building our own grant ops team.
Reducing Caleb’s working hours would be challenging.
I think my own hours at EA Funds are somewhat contingent on operational funding. In the last month, I’ve been spending more than half of my working hours on EA Funds (EA Funds is buying out my time at RP), mostly helping Caleb with communications and strategic direction. I would like to continue doing this until I believe EA Funds is in a good state (or we decide to discontinue or sunset the projects I’m involved in). Obviously, whether there is enough budget to pay for my time is a crux for whether I should continue here.
Assuming we can pay for my time, other plausible uses of marginal operational funding include: a) paying external investigators for extensive rather than just shallow retroactive evaluations, b) attempting to launch new programs, and c) hiring professional designers for the new infosec, AI safety, and other project websites. My personal view is that marginal spending on EA Funds’ operational expenses is quite impactful relative to other possible donations, but I understand if donors do not feel the same way and would prefer a higher percentage of donations to go directly to our grantees (currently it’s 100%, but proposed changes may move this to ~94-97%).
- ^
The Long-Term Future Fund and the EA Infrastructure Fund are part of EA Funds, which is a fiscally sponsored project of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA Inc. (“EV US”). Donations to LTFF and EAIF are donations to EV US or EV UK. Effective Ventures Foundation (UK) (EV UK) is a charity in England and Wales (with registered charity number 1149828, registered company number 07962181, and is also a Netherlands registered tax-deductible entity ANBI 825776867). Effective Ventures Foundation USA Inc. (EV US) is a section 501(c)(3) organization in the USA (EIN 47-1988398). Please see important state disclosures here.
- ^
Note that our current funding bar is higher as a result of anticipated funding/liquidity shortages.
- ^
This is a pretty loose statement, partially because impact evaluation is quite hard in the fields we work in, and partially due to insufficient time investment in our evaluations. We are working with external investigators to establish better metrics and to have external retrospective evaluations. Potential cruxes for the value of our work (relative to larger entities like Open Phil) include the value of independent researchers and small projects, and the value of having a wider range of longtermist worldviews.
- ^
This is generally a mix of experienced fund managers and less experienced assistant fund managers.
- ^
Though note that the current list is out of date.
- ^
I think that it’s useful to note that I don’t expect substantive worldview shifts from making this change, relative to our previous grantmaking. However, I think we will be a bit less likely to suffer from sources of error correlated with Open Phil’s.
- ^
The donations to these funds only totaled $7.4M and $10M respectively, less than the total amount of grants disbursed that year.
- ^
Open Philanthropy is also on board with the aim of wanting the funding landscape to be more independent and for funders to be able to more legibly donate in non-fungible ways.
- ^
We aim to get back to 90% of grantees within three weeks; currently, our median decision response time is 28 days.
UPDATE 2023/09/13:
Including only money that has already landed in our bank account and extremely credible donor promises of funding, LTFF has raised ~$1.1M and EAIF has raised ~$500k. After Open Phil matching, this means LTFF now has ~$3.3M in additional funding and EAIF has ~$1.5M in additional funding.
We are also aware that other large donors, including both individuals and non-OP institutional donors, are considering donating to us. In addition, while some recurring donors have likely moved up their donations to us because of our recent unusually urgent needs, it is likely that we will still accumulate some recurring donations in the coming months as well. Thus, I think at least some of the less-certain sources of funding will come through. However, I decided to conservatively not include them in the estimate above.
From my (Linch's) perspective, this means neither LTFF nor EAIF is very funding-constrained anymore for the time period we wanted to raise money for (the next ~6 months); however, both funds are still funding-constrained and can productively make good grants with additional funding.
To be more precise, we estimated a good target spend rate for LTFF as ~$1M/month, and a good target spend rate for EAIF as ~$800k/month. The current funds will allow LTFF to spend ~$550k/month and EAIF to spend ~$250k/month, leaving gaps of roughly $450k/month and $550k/month, respectively. More funding is definitely helpful here, as more money will allow both funds to productively make good grants.[1]
Open Phil’s matching is up to $3.5M from OP (or $1.75M from you) for each fund. This means LTFF would need ~$650k more before maxing out on OP matching, and EAIF would need ~$1.25M more. Given my rough estimate of funding needs above, which is ~$6.2M/6 months for LTFF and ~$5M/6 months for EAIF, this means LTFF would ideally like to receive ~$1M above the OP matching.
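For concreteness, here’s the same arithmetic as a short sketch (a reconstruction from the update’s figures, not an official calculation; all amounts in $M):

```python
# Rough check of the update's numbers; all figures in $M.
raised = {"LTFF": 1.1, "EAIF": 0.5}            # donations received so far
target_per_month = {"LTFF": 1.0, "EAIF": 0.8}  # estimated good spend rates
donor_side_cap = 3.5 / 2                       # $1.75M of donations exhausts OP's $3.5M match

for fund in raised:
    monthly = raised[fund] * 3 / 6             # each $1 raised becomes $3 post-match, over 6 months
    gap = target_per_month[fund] - monthly
    to_max = donor_side_cap - raised[fund]
    print(f"{fund}: ~${monthly:.2f}M/month available, gap ~${gap:.2f}M/month, "
          f"~${to_max:.2f}M more donations until matching is exhausted")
# LTFF: ~$0.55M/month available, gap ~$0.45M/month, ~$0.65M more donations until matching is exhausted
# EAIF: ~$0.25M/month available, gap ~$0.55M/month, ~$1.25M more donations until matching is exhausted
```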
I appreciate donors’ generosity and commitment to improving the world. I hope the money will be used wisely and cost-effectively.
I plan to write a high-level update and reflections post[2] on the EA Forum (crossposted to LessWrong) after LTFF either a) reaches our estimated funding target or b) decides to deprioritize fundraising, whichever comes earlier.
For LTFF, the current level of funding is enough to fund all projects at the $1M tier, and ~50% of projects at the $5M tier. We don’t have very good public information about marginal EAIF grants out just yet, but I hope to co-write/co-publish another post about them in the next few weeks.
I’m a bit confused about the ideal frequency of high-level updates. On the one hand I think informing donors regularly is quite valuable, especially as our funding needs change. On the other hand I don’t want the EA forum to be clogged with like 5 fundraising posts by the same org in a month.
sapphire on LessWrong raised this interesting objection:
Some fund managers (including myself) weighed in. For other readers similarly concerned, feel free to read the comments there.
How right now is “right now”? Like would giving $100 literally this moment be worth $105 given in a week? A month?
Just looking for something super approximate, especially a rough time horizon where $1 now ≈ $1 then
It’s very hard/confusing for me to think of an exact number, in part because the very existence of this public announcement and public comments probably changes the relevant numbers.
Suppose the counterfactual for this post is that we wait until November to make a “normal” end-of-year fundraising post, and during that time we make do with an income stream similar to donations to us in the past few months (~$100k/month). If we are honest about our funding needs in November (likely still very high), I expect, say, ~$1-2M of donations to us from people’s end-of-year donation budgets ($3-5.5M including Open Phil matching). In that world, because of sharply diminishing returns, I’d likely prefer $10k additional now ($30k including Open Phil matching) to $20k additional in December ($60k including OP matching).
But the very existence of this post means we aren’t living in that world, as (hopefully) donors with far lower opportunity cost of money will generously donate to us now to ameliorate such gaps. So the whole thing leaves me pretty confused.
Anyway, I would not encourage giving money to us now if the urgency imposes significant hardship on your end (beyond the level you reflectively endorse for donations in general).
If you are a large (>$50k?) donor faced very concretely with an option of giving us $X now vs. $X * Y later (I gave the example of tax reasons below), feel free to ping Caleb or me. We can discuss together what makes the most sense, and (if necessary; I’d also need to check with ops) EA Funds can borrow against such promises and/or make conditional grants to grantees.
If you expect to take in $3-6M by the end of this year, borrowing say $300k against that already seems totally reasonable.
Not sure if this is possible, but I for one would be happy to donate to LTFF today in exchange for a 120% regrant to the Animal Welfare Fund in December[1]
This would seem to be an abuse of the Open Phil matching, but perhaps that chunk can be exempt
Thanks so much for your offer! That’d be a great option to have on the table! Hopefully enough donors will ameliorate our gaps in the next month, but I might check in with you again later this month if a) we have some more firm commitments for donations by end-of-year[1], b) we’re still quite severely funding constrained as of Sept 20th, and c) we can’t find lower bids.
One issue with the straightforward “expect to get $3-6M by the end of this year” logic is that a model that spits out that sentence would also predict that this fundraising post and the associated public and private comms should work as fundraising for us; if we turn out to receive neither donations in the near future nor promises after our posts now, I should also strongly update against my original estimate of getting $3-6M by EOY.
I’m not confident and would encourage other fund managers to weigh in here.
I’d guess that $100 now is similarly useful to us as $140 in 3 months, and something like $350 in six months’ time, after the OP matching runs out. These numbers aren’t very resilient and are mostly my gut impression.
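One rough way to annualize these figures (a sketch under assumed conventions; the linear version reproduces the “160%” in the reply below):

```python
# Annualizing "$100 now ~ $140 in 3 months" under two conventions.
quarterly = 140 / 100 - 1                             # 40% premium per quarter
print(f"linear:     {quarterly * 4:.0%}")             # 160% -- the figure quoted below
print(f"compounded: {(1 + quarterly) ** 4 - 1:.0%}")  # ~284%
# The 6-month figure mostly reflects OP matching expiring: $100 matched 2:1
# moves $300 now, so $350 unmatched later is only a ~17% premium on money moved.
print(f"six-month:  {350 / (100 * 3) - 1:.0%}")       # ~17%
```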
IDK 160% annualized sounds a bit implausible. Surely in that world someone would be acting differently (e.g. recurring donors would roll some budget forward or take out a loan)?
I would be curious to hear from someone on the recipient side who would genuinely prefer $10k in hand to $14k in three months’ time.
Maybe it’s a bit high but it doesn’t seem crazy to me.
We seem to have a lot of unusually good applications right now and unusually little funding. I also expect to hear back from some large donors later in the year and I expect our donations to increase around giving season (December).
A quick scan of the marginal grants list tells me that many (most?)[1] of these take the form of a salary or stipend over the course of 6-12 months. I don’t understand how the time-value of money could be so out of whack in this case—surely you could grant say half of the requested amount, then do another round in three months once the large donors come around?[2]
As for the rest, I don’t see anything on the list that wouldn’t exist in three months.
Daniel’s comment says “there are a whole host of issues” with this approach. I’d be curious to know what those are, and how they aren’t worth unlocking 40% additional value.
GPT-4 gave some reasons here.
In addition:
Being an independent researcher on a 12-month grant is already quite rough; moving to a 3-month system is a pretty big ask, and I expect us to lose some people to academia or corporate counterfactuals as a result.
Most of the people we’re funding have fairly valuable counterfactuals (especially monetarily); if we fund them with 3-month grants under high researcher uncertainty and potential for discontinuity, I just expect many of our grantees to spend a large fraction of the time job-searching.
For people who are not independent, a 3-month contract makes it very hard to navigate other explicit and implicit commitments (e.g., project leads will find it hard to find contractors/employees, and I’m not sure it’s even possible to fund a graduate student for a fraction of a semester).
Giving us $X now is guaranteed, and we can make grants or plan around it. Maybe-giving us $1.4X in the future is more of a hypothetical, and not something that we can plan around by default.
If a large donor is actually in this position, please talk to us so we can either discuss options together and/or secure an explicit commitment that is easier for us to work around.
So these are all reasons that funding upfront is strictly better than in chunks, and I certainly agree. I’m just saying that as a donor, I would have a strong preference for funding 14 researchers in this suboptimal manner vs 10 of similar value paid upfront, and I’m surprised that LTFF doesn’t agree.
Perhaps there are some cases where funding in chunks would be untenable, but that doesn’t seem to be true for most on the list. Again, I’m not saying there is no cost to doing this, but if the space is really funding-constrained as you say 40% of value is an awful lot to give up. Is there not every chance that your next batch of applicants will be just as good, and money will again be tight?
To be clear, I’m not sure I agree with the numbers Caleb gave, and I think they’re somewhat less likely given that we live in a world where we communicated our funding needs. But I also want to emphasize that the comparison-in-practice I’m imagining is (say) $100k real dollars that we’re aware of now vs $140k hypothetical dollars that a donor is thinking of giving to us later but not actually communicating to us; which means from our perspective we’re planning as if that money isn’t real. If people are actually faced with that choice I encourage actually communicating that to us; if nothing else we can probably borrow against that promise and/or make plans as if that money is real (at some discount).
There’s some chance, sure, but it’s not every chance. Or at least that’s my assumption. If I think averaging 100k/month (or less) is more likely than not to become the “new normal” of LTFF, I think we need to seriously think about scaling down our operations or shutting down.
I don’t think this is very likely given my current understanding of donor preferences and the marginal value of LTFF grants vs other longtermist donation opportunities[1], but of course it’s possible.
I think there is a chicken-and-egg problem with the fund right now, where to do great work we need
a) great applications
b) great grantmakers/staff/organizational capacity (especially a fund chair), and
c) money
Hiring good grantmakers has never been easy, but I expect it to be much harder to find a fund chair to replace Asya if we can’t promise them with moderately high probability that we are moving enough money to be worth their time working on the fund, compared to other very high-value work that they could be doing (and, more prosaically, many potential hires like having a guaranteed salary).
I also expect great applications to start drying up eventually if there continues to be so much funding uncertainty, though we still have some goodwill to burn through, and I think problems like that are only going to be significant over the timescale of many months, rather than a few.
The main exception in my mind comes from some other grantmaker rising up in the space and being great, or an existing grantmaker expanding into the space that we currently work in.
That is very different from the question that Caleb was answering—I can totally understand your preference for real vs hypothetical dollars.
Presumably the first step towards someone acting differently would be the LTFF/EAIF (perhaps somewhat desperately) alerting potential donors about the situation, which is exactly what’s happening now, with this post and a few others that have recently been posted.
FWIW, (with rare exceptions) it’s not that more funding would allow us to give the same recipients larger grants, but instead that more funding would allow us to fund more grants, and marginal grants now are (according to Caleb’s math) ~40% more valuable per dollar than what he expects from the marginal grant in a few months. In principle, grantees could be given the promise of (larger) delayed payment for grants instead of payment up front, but I think there are a whole host of problems with heading down that path.
Several cruxes:
how much should you value an OP longtermist $ vs an LTFF or EAIF $
There’s both a question of simple EV and how cooperative or epistemically deferential you should be.
If Alice values her $s at 10x Bob’s, and Bob values his dollars at 10x Alice’s, but they know each other really well and cooperate in other settings, they should probably come to a better equilibrium than the first-order calculation.
whether we’re likely to get an inflow of institutional and large individual donations going forward, now that we’re “actually trying” to fundraise.
The more optimistic you are about our future donations, the more it makes sense to donate now.
whether we’ll get some inflow soonish, given our pretty public and unambiguous ask.
obviously there’s some weird game theory thing going on here where if (say) other people are willing to cover half of our funding gap until if/when larger donors roll in, the marginal value of you covering our funds is much lower. But the more other people are waiting, the more valuable it is for you to give now.
whether it makes sense for LTFF to keep going if we don’t raise much money.
obviously most of us have fairly high-value counterfactuals, so working on LTFF when it’s not doing much is pretty costly in terms of other work we could be doing.
whether some other org will fill the vacuum if we stop going.
If it takes X months for a different org to fill the vacuum while we plan out a graceful exit, funding us now rather than later is pretty valuable. If, on the other hand, we neither get more funding nor find anybody else willing to step in to do this work, then the marginal value of donations to us would become a lot flatter.
whether the projects we’re currently excited about will get funding elsewhere if we don’t fund them now (so us not funding them only incurs delay and switching costs from their end) vs just won’t happen, or at least won’t happen for X months.
whether money promised in the future is a very concrete and specific promise (“for tax law reasons I can either give you $28k in January or $20k now; I’m very willing to publicly commit in writing to doing the former if y’all think it’s a better idea”), vs. a pretty wishy-washy “Oh, I might give $1.40x in around 3 months instead of $1x now,” vs. you mentally thinking of giving us money later but not telling us anything about it.
If enough people are giving us very credible and concrete commitments, then it’s at least possible for us (though a bit costly in terms of work, and probably money as well) to borrow against such commitments to fund projects today, at a much-lower-than-implied interest rate.
If we can’t plan around a hypothetical future windfall, we should probably triage as if that windfall isn’t there (though with some contingency plans to absorb that if necessary).
whether the current funding landscape for smallish independent projects is a temporary lull vs “new normal”
The more it’s like a “new normal”, the less critical funds are now as opposed to later.
Shouldn’t this depend on how OP will use its matching funds otherwise? Would they just sit on them longer, possibly waiting for better opportunities meeting a higher bar, or grant them to something else, and how good would that be?
By “useful to us” I meant useful to the EAIF or LTFF (as opposed to making a comment on what the best thing to do is). I don’t have a great sense of what the counterfactuals for the funding at Open Phil are. Some evidence that Open Phil thinks they are worse than donations to us is that Open Phil has historically given us large grants and has decided to offer donation matching to help incentivise donations from the public.
But also Open Phil wants to fund you less going forward and the matching is for this transition, right? Or was it primarily EA Funds pushing for that?
Of course, the reason seems to be to reduce your reliance on Open Phil, but that should be weighed against the difference in value of grants you’d make with more of their funding instead of them. And they might want to reduce your reliance on them because they think they can do better themselves and/or because the need for extra grant advisor capacity in the space has been reduced with the reduction in funding after the collapse of FTX.
One possible interpretation is that this matching and decreased future support is like them spinning off their criminal justice reform work with one last large exit grant, because they concluded it didn’t meet the bar anymore. Furthermore, Open Phil has recently raised its bar for longtermist work, with about half of previous longtermist grants no longer meeting the bar. https://forum.effectivealtruism.org/posts/FHJMKSwrwdTogYLGF/we-re-no-longer-pausing-most-new-longtermist-funding
It’s pretty plausible to me (with what limited knowledge I have of the specifics, and I would hope they’d let you know) that you no longer meet their new bar. And even if you would going forward, Open Phil might just prefer to make grants themselves, because they have the capacity and decide themselves what meets their own bar ex ante, whereas they’d have to trust you. Furthermore, by having the EA Fund managers who also work at Open Phil resign from EA Funds, they have more time to focus on Open Phil grantmaking, so they’ve effectively increased their capacity (maybe only marginally, I suppose, if EA Funds was only a small time commitment).
Sure, but the difference in value is key here, right? If you value marginal OP longtermist $s at 90% of LTFF $s, then 2:1 counterfactual matching “only” 1.2x’s your donations, whereas if you value OP longtermist $s at 10% of LTFF $s, then the matching is equivalent to a 1.8x:1 match from an unaligned donor, or like a 2.8x donation to us overall.
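A minimal sketch of that calculation (the 90% and 10% valuations are just the illustrative figures from this comment):

```python
# Value moved per $1 donated, in LTFF-dollar units, if each of the two matched
# OP dollars would counterfactually have been spent at `op_dollar_value`.
def effective_multiplier(op_dollar_value, match_rate=2.0):
    return 1 + match_rate * (1 - op_dollar_value)

print(f"{effective_multiplier(0.9):.1f}")  # 1.2 -> matching "only" 1.2x's your donation
print(f"{effective_multiplier(0.1):.1f}")  # 2.8 -> close to the full 3x of a 2:1 match
```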
EA donors generally care about “useful to the world”, but you mean the more narrow “useful to LTFF and EAIF”, right?
Great thanks, I’ve set up a recurring donation!
EDIT: apparently they’re very time-constrained, so I’ll give $13.3k as a lump sum instead.
I’m strongly in favour of there being an AI safety specific fund, whether that be run by LTFF or spearheaded by another group.
Longtermism has taken a beating in the press recently, so it’s important to remove any unnecessary barriers that might turn off donors.
[Speaking in my personal capacity, not on behalf of the LTFF] I am also strongly in favor of there being an AI safety specific fund, but this is mostly unrelated to recent negative press for longtermism. My reasons for support are (primarily): a) people who aren’t EAs (and might not even know about longtermism) are starting to care a lot more about AI safety, and many of them might donate to such a fund; and b) EAs (who may or may not be longtermists) may prioritize AI safety over other longtermist causes (eg biosafety), so an AI safety specific fund may fit better with their preferences.
It’s true that the correlation between longtermism and framings of the problem that socially overlap with longtermism could be made spurious! There are a lot of bells and whistles on longtermism that don’t need to be there, especially for the 99% of what needs to be done in which fingerprints never come up.
UPDATE 2023/12/21: Open Phil’s $3.5M donation matching for the Long-Term Future Fund has now been filled.[1] So your donations to LTFF will no longer be matched. That said, this fundraising post was written 4 months ago, and we’d like to continue fundraising (especially given that December is an unusually good time to fundraise).
Open Phil’s donation matching for the EA Infrastructure Fund has not been filled (currently $1.3M/$3.5M), and my current projection is that by default it won’t be filled by the deadline (end of Jan 2024). So to the extent that you’re fairly indifferent between LTFF and EAIF, and believe that either fund is a better use of marginal resources than OP’s marginal dollar, it might make more sense to donate to EAIF than LTFF.[2]
The dashboard currently says $3.34M/$3.5M filled but my understanding is that a few donations that aren’t up on the dashboard yet would be enough to cross over the line.
Of course, if you think one fund is much more impactful than another, you should donate to that fund instead.
Thanks for this very thorough write up. I appreciate this level of transparency on what’s needed for two of our community’s biggest grantmaking orgs!
Thanks for sharing this! I’m especially excited about the retrospective evaluation.
I notice you mention there being five fund managers, but only name three. Are the other two secret? Also, this post lists Peter but not Asya as Manager, while the website lists Asya but not Peter.
Peter is a fund manager of the EAIF, not the LTFF. Current permanent LTFF fund managers are (I think) Caleb, Asya, Linch and me.
This is my understanding as well. I didn’t list Asya because she’s planning to step down.
Got it, thanks! Sorry for misunderstanding.