Announcing the Longtermism Fund
Longview Philanthropy and Giving What We Can would like to announce a new fund for donors looking to support longtermist work: the Longtermism Fund.
In this post, we outline the motivation behind the fund, reasons you may (or may not) choose to donate using it, and some questions we expect donors may have.
What work will the Longtermism Fund support?
The fund supports work that:
Reduces existential and catastrophic risks, such as those coming from misaligned artificial intelligence, pandemics, and nuclear war.
Promotes, improves, and implements key longtermist ideas.
The Longtermism Fund aims to be a strong donation option for a wide range of donors interested in longtermism. The fund focuses on organisations that:
Have a compelling and transparent case in favour of their cost-effectiveness that most donors interested in longtermism will understand; and/or
May benefit from being funded by a large number of donors (rather than one specific organisation or donor). For example, organisations promoting longtermist ideas to the broader public may be more effective if they have been democratically funded.
There are other funders supporting longtermist work in this space, such as Open Philanthropy. The Longtermism Fund's grantmaking is managed by Longview Philanthropy, which works closely with these other organisations and is well positioned to coordinate with them to efficiently direct funding to the most cost-effective organisations.
The fund will make grants approximately once each quarter. To give donors a sense of the kind of work within the fund's scope, here are some examples of organisations the fund would likely give grants to if funds were disbursed today:
The Johns Hopkins Center for Health Security (CHS) – CHS is an independent research organisation working to improve the organisations, systems, and tools used to prevent and respond to public health crises, including pandemics.
Council on Strategic Risks (CSR) – CSR analyses and addresses core systemic risks to security. In its nuclear weapons policy work, CSR focuses on identifying the nuclear systems and policies with the greatest potential to cause escalation into nuclear war (for example, nuclear-armed cruise missiles) and seeks to address them by working with key decision-makers.
Centre for Human-Compatible Artificial Intelligence (CHAI) – CHAI is a research organisation aiming to shift the development of AI away from potentially dangerous systems we could lose control over, and towards provably safe systems that act in accordance with human interests even as they become increasingly powerful.
Centre for the Governance of AI (GovAI) – GovAI is a policy research organisation that aims to build "a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI."
The vision behind the Longtermism Fund
We think that longtermism as an idea and movement is likely to become significantly more mainstream, especially with Will MacAskill's soon-to-be-released book, What We Owe the Future, and popular creators becoming more involved in promoting longtermist ideas. But what's the call to action?
For many who want to contribute to longtermism, focusing on their careers (perhaps by pursuing one of 80,000 Hours' high-impact career paths) will be their best option. But for many others, and perhaps for most people, the most straightforward and accessible way to contribute is through donations.
Our aim is for the Longtermism Fund to make it easier for people to support highly effective organisations working to improve the long-term future. Not only do we think the money this fund will move will have a significant impact, but we also think the fund will provide another avenue for the broader community to engage with and implement these ideas. In turn, this makes it more likely that the value of future generations features in discussions with friends, voting choices, and careers.
And we think it's worth being ambitious. GiveWell now moves hundreds of millions of dollars each year, with over a hundred thousand individual donors having contributed. In the best case, this fund can follow a similar trajectory, becoming a significant part of the longtermist funding ecosystem.
Why donate to the Longtermism Fund?
We think there are three main reasons to support this fund:
You want to reduce the chance of catastrophic and existential risks, thereby safeguarding the long-term future of humanity.
The fund is managed by expert grantmakers, informed by years of research, who can help maximise the impact of your donation.
By supporting a fund, you are not only donating as part of a community; it's also highly efficient: grantmakers can coordinate with organisations to ensure they receive the funding they can effectively use.
We discuss the above considerations in more depth on the Longtermism Fund page.
What's the difference between the Longtermism Fund and the Long-Term Future Fund?
We think the Long-Term Future Fund (LTFF) from EA Funds is an excellent donation opportunity for donors with a lot of context on effective altruism and longtermism, but being accessible or legible to the broader public is not integral to the fund's grantmaking – intentionally so. Instead, the LTFF has primarily worked within the niche of providing small to medium grants to individuals or early organisations. Often, this involves supporting researchers early in their careers, or highly targeted outreach efforts promoting longtermism.
While we think this is extremely impactful, we expect many donors (especially those who are newer to the longtermist community) will prefer to support larger organisations whose work requires less context to understand. The Longtermism Fund aims to support those donors. We think there's room for a new fund that takes into account the legibility of its grants and puts greater emphasis on ensuring the reasoning behind each grant is explained in a way that will make sense to people with varying levels of context. Both funds will be supported by the Giving What We Can donation platform (formerly run by EA Funds).
Along with the other EA Funds, the LTFF has shown the "fund" model can be highly successful: the LTFF is the most popular longtermist donation option among Giving What We Can members. We hope that the Longtermism Fund can continue this success, and potentially reach an even wider pool of donors.
Won't all the fund's grants be highly fungible?
Fungibility and donor coordination are complicated topics. In many cases, major funders will react to Longtermism Fund grants by making smaller donations to the recipient organisations – this makes the donations "fungible" (in effect, a dollar given to the fund may free up a dollar another funder would have given). We don't see this as a major issue, for the following reasons:
If grants given by the Longtermism Fund end up freeing up resources of other funders working in this space, we see that as a good thing. However, we think it's important to flag to donors that if their values are not aligned with these other funders (e.g., Longview's other work and Open Philanthropy), they may not want to donate to the fund.
While the fund's grants are likely to be fungible with other funders' work in the early stages, this may change over time. As the amount of money the fund disburses grows, so does the amount of research and grantmaking effort it makes sense to allocate to the fund. It's possible that, in the medium or long run, this fund will build the capacity to do its own grantmaking, thereby finding new opportunities that, but for the fund, may not otherwise have received funding.
While thinking at the margin is a powerful tool, so is coordination. We expect many grantees would prefer being funded by a large pool of individual donors rather than by a single philanthropic foundation. We think that in an optimal funding ecosystem, individual donors would support those kinds of organisations, while other funders could focus their efforts on more niche areas where they are a better fit as a funder. We hope the Longtermism Fund can help push the funding ecosystem further in that direction.
There is, in fact, a substantial amount of highly impactful work being done that doesn't meet the current cost-effectiveness bar for funding. For example, only 4% of the applications to the Future Fund's 2022 application round were accepted. When more funding is available, that bar can be lowered, thereby funding even more work. So to the extent this fund increases the total amount of funding available, it will also genuinely be funding projects that otherwise may not have been funded.
Funds are an excellent way for individual donors to coordinate via expert grantmakers to maximise their personal counterfactual impact. We discuss some of the advantages to the fund model on the Longtermism Fund page.
So overall, we don't think the concerns around fungibility significantly undermine the cost-effectiveness of donating to this fund. And we think that even with the large amount of funding currently available, small donations still have a significant impact from a longtermist perspective.
Calls to action
We anticipate donors may have some questions about the Longtermism Fund – if there are any we miss, please ask in the comments, or reach out to michael[dot]townsend[at]givingwhatwecan.org. More information is also available on Giving What We Can's website.
If you want to support the fund, donate and share it with others you think would be interested!
Tl;dr: the Longtermism Fund aims to be a widely accessible call to action to accompany longtermism becoming more mainstream.
Great TL;DR! (I love comments like this <3 )
I'm happy this exists and I like the logo!
Credit goes to Alex Savard :)
Also, do they/you intend to release writeups, in the style of EA Funds?
We'll release payout reports when we disburse funds (likely bi-annually). The exact format/style hasn't yet been determined, but we're aiming to explain the reasoning behind each grant to donors.
Love to see this type of collaboration ❤️
Not sure how I feel about this. Seems like this might make longtermism more scalable, at the cost of screening off some opportunities. Do you expect the best opportunities to be above or below your bar for legibility? Do other people (e.g., from the LTFF or OpenPhil) agree with your view here? Personally, I have some intuitions that it might be below.
The cost is lower than it naively looks because if the grantmakers are skilled, they should be able to understand what makes for a great-but-potentially-illegible grant, and forward it to other grantmakers.
Good point!
I do agree with GWWC here and have been involved in some of the strategic decision-making that led to launching this new fund. I'm excited to have a donation option that is less weird than the LTFF for longtermists, but still (like GWWC) see a lot of value in both donation opportunities existing.
I think that excellent but illegible projects already have (in my probably biased opinion) good funding options through both the LTFF and the FTX regranting program.
Thanks for your questions!
As Linch suggests, opportunities that seem promising but aren't sufficiently legible can be referred to other funders to investigate.
We reached out to staff at Open Philanthropy about setting up this fund, and received positive feedback. The EA Funds team (with input from LTFF grant managers at the time) had also previously considered setting up a "Legible Longtermism Fund" – my understanding is that the reason they didn't was a lack of capacity, but they were in favour of the idea.
Whether the best opportunities are sufficiently legible is an interesting question:
It may depend on whether you look at it in terms of cost-effectiveness, or total benefit:
In pure cost-effectiveness terms:
I think I may share your intuitions that some of the smaller grants the Long-Term Future Fund makes might be more cost-effective than the typical grant I expect the Longtermism Fund to make (though it's difficult to evaluate this in advance of the Longtermism Fund making grants!).
That said, we anticipate the Longtermism Fund's requirement for legibility might, in some cases, be beneficial to cost-effectiveness. For example, we anticipate some organisations will prefer receiving grants from the Longtermism Fund (as it's democratically funded and highly legible) rather than from other funders. Per his comment, Caleb (from EA Funds) and a reviewer from OP share this view.
In total benefit terms:
My intuition, informed by just double-checking Open Phil's and FTX FF's respective grants databases, is that a significant amount of longtermist grantmaking goes to work that would be sufficiently legible for this fund to support.
There therefore seems to me to be plenty of sufficiently legible work to support.
My bottom-line view is that the effect of the fund will be to:
Increase the total amount of funding going to longtermist work. This may be especially important if longtermism manages to scale up significantly and funding requirements increase (e.g., successful megaprojects).
Change the proportion of funding to legible/illegible opportunities provided by individual donors/large funders (i.e., the proportion of funding going to legible work that is provided by individual donors will increase).
Provide a funder that may be favourable to grantees who want to be funded by something democratically supported/highly legible.
I don't think the fund's "screening off" of opportunities that don't meet its legibility requirement will make it more difficult for those organisations to receive funding.
Worth noting that I'm speaking as a Researcher at GWWC, whereas Longview is primarily responsible for grantmaking.
FWIW this is the most exciting ToC to me. In general (and speaking very coarsely) I think grantmakers should be optimizing to identify new vehicles to allow more great grants to be given, rather than e.g. better evaluations or improvements of existing opportunities, or fundraising.
Thanks Michael!
Can you say a bit more about them and about the rationale behind their selection? They have blurbs on the page, but they are pretty short. Not sure if this is correct, but none seem to have much of a background in AI/biosecurity. I have the impression that AI grants in particular can be gnarly to evaluate.
As mentioned on the page, the fund's grantmaking will be informed by all of Longview's work, and therefore everyone on their team plays a role. The fund managers listed on the page are especially likely to contribute. For work outside their focus areas, such as AI and bio, the grants will be heavily informed by others with expertise in those areas (including the work of other organisations, like Open Philanthropy and FTX FF).
You published a grant report about a small 2022 grants round; do you know when you plan to do the next round?
Yes, we are aiming to publish this next week, and it should include an explanation of the delay. (Also, thanks for checking in on this – the accountability is helpful.)
Will you be taking open applications from organizations looking for funding?
At this stage, we won't be taking applications from organizations looking to apply for funding. I'll add this question and response to the FAQ – thanks for asking! This is something we plan to review within the first year.