What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund
The Long-Term Future Fund (LTFF) makes small, targeted grants with the aim of improving the long-term trajectory of humanity. We are currently fundraising to cover our grantmaking budget for the next 6 months. We would like to give donors more insight into how we prioritize different projects, so they have a better sense of how we plan to spend their marginal dollar. Below, we’ve compiled fictional but representative grants to illustrate what sort of projects we might fund depending on how much we raise for the next 6 months, assuming we receive grant applications at a similar rate and quality to the recent past.
Our motivations for presenting this information are a) to provide transparency about how the LTFF works, and b) to move the EA and longtermist donor communities towards a more accurate understanding of what their donations are used for. Sometimes, when people donate to charities (EA or otherwise), they may wrongly assume that their donations go towards funding the average, or even more optimistically, the best work of those charities. However, it is usually more useful to consider the marginal impact for the world that additional dollars would buy. By offering illustrative examples of the sort of projects we might fund at different levels of funding, we hope to give potential donors a better sense of what their donations might buy, depending on how much funding has already been committed. We hope that this post will help improve the quality of thinking and discussions about charities in the EA and longtermist communities.
For donors who believe that the current marginal LTFF grants are better than marginal funding of all other organizations, please consider donating! Compared to the last 3 years, we now have both a) unusually high quality and quantity of applications and b) unusually low amount of donations, which means we’ll have to raise our bar substantially if we do not receive additional donations. This is an especially good time to donate, as donations are matched 2:1 by Open Philanthropy (OP donates $2 for every $1 you donate). That said, if you instead believe that marginal funding of another organization is (between 1x and 3x, depending on how you view marginal OP money) better than current marginal LTFF grants, then please do not donate to us, and instead donate to them and/or save the money for later.
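To make the "between 1x and 3x" range concrete, here is a minimal sketch of the matching arithmetic. It is purely illustrative (not an official calculator), and the assumption is that everything is expressed in units of "value per marginal dollar of LTFF grants":

```python
# Hypothetical illustration of the 2:1 match arithmetic (not an official calculator).
# If you value marginal Open Philanthropy (OP) money at zero, your $1 unlocks $3 of
# LTFF grants, so an alternative must be >3x better per dollar than marginal LTFF
# grants for donating elsewhere to win. If you think OP's $2 would have been spent
# just as well anyway, only your own $1 counts, and the break-even multiple is 1x.

def breakeven_multiple(op_counterfactual_value: float, match_ratio: float = 2.0) -> float:
    """How many times better (per dollar) an alternative must be than marginal
    LTFF grants for donating to the alternative to beat donating to the LTFF.

    op_counterfactual_value: value you assign to a marginal OP dollar, as a
        fraction of a marginal LTFF-grant dollar (0.0 to 1.0).
    """
    total_to_ltff = 1.0 + match_ratio                 # your $1 plus OP's $2
    op_cost = match_ratio * op_counterfactual_value   # value OP's $2 would have had anyway
    return total_to_ltff - op_cost

print(breakeven_multiple(0.0))  # 3.0: OP match treated as "free" money
print(breakeven_multiple(1.0))  # 1.0: OP match treated as fully counterfactual
```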
Background on the LTFF
We are committed to improving the long-term trajectory of civilization, with a particular focus on reducing global catastrophic risks.
We specialize in funding early stage projects rather than established organizations.
From March 2022 to March 2023, we received 878 applications and funded 263 as grants, worth ~$9.1M total (average $34.6k/grant). To our knowledge, we have made more small grants in this time period than any other longtermist- or EA-motivated funder.
Other funders in this space include Open Philanthropy, Survival and Flourishing Fund, and recently Lightspeed Grants and Manifund.
Historically, ~40% of our funding has come from Open Phil. However, we are trying to become more independent of Open Phil. As a temporary stopgap measure, Open Phil is matching donations to LTFF 2:1 instead of granting to us directly.
100% of money we fundraise for LTFF qua LTFF goes to grantees; we fundraise separately and privately for operational costs.
We try to be very willing to fund weird things that the grantmakers’ inside views believe are really impactful for the long-term future.
You can read more about our work at our website here, or in our accompanying payout report here.
Methodology for this analysis
At the LTFF, we assign each grant application to a Principal Investigator (PI) who assesses its potential benefits, drawbacks, and financial cost. The PI scores the application from −5 to +5. Subsequently, other fund managers may also score it. The grant gets approved if its average score surpasses the funding threshold, which has historically varied from 2.0 to 2.5 but is currently 2.9.
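As a minimal sketch of this decision rule (the simple average and the names below are our illustrative assumptions; in practice fund managers also weigh context that a single number does not capture):

```python
# Illustrative sketch of the scoring rule described above; names are hypothetical.
from statistics import mean

FUNDING_THRESHOLD = 2.9  # current bar; historically between 2.0 and 2.5

def is_approved(scores: list[float], threshold: float = FUNDING_THRESHOLD) -> bool:
    """Each fund manager scores the application from -5 to +5;
    the grant is approved if the average score exceeds the threshold."""
    return mean(scores) > threshold

print(is_approved([3.5, 2.0, 4.0]))  # mean ~3.17 -> approved at the current 2.9 bar
print(is_approved([3.0, 2.5]))       # mean 2.75 -> below the current bar
```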
Here’s how we created the following list of fictional grants:
Caleb ranked all LTFF grant applications from the past six months according to their average scores.
Caleb calculated the total cost of funding all above-threshold grants as a function of the funding threshold, starting with the highest scoring grant and adding costs as the threshold decreases (a minimal code sketch of this and the grouping step follows the list below).
Caleb grouped the applications based on the cumulative budget required for them to surpass the threshold.
Caleb and Linch randomly selected grants from each group.
Linch modified and blended grants to form representative fictitious grants based on brief descriptions.
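For concreteness, here is a minimal sketch of the cost-accumulation and grouping steps above, using made-up applications and costs (the field names, scores, and mechanical grouping are illustrative assumptions; the real process was more qualitative):

```python
# Hypothetical sketch: sort applications by average score, walk down the ranking
# accumulating cost, and assign each application to the smallest cumulative budget
# ("tier") at which it would be funded. All data below is made up.
applications = [
    {"name": "A", "avg_score": 4.1, "cost": 40_000},
    {"name": "B", "avg_score": 3.6, "cost": 55_000},
    {"name": "C", "avg_score": 2.8, "cost": 120_000},
    {"name": "D", "avg_score": 2.2, "cost": 85_000},
]
tiers = [100_000, 1_000_000, 5_000_000, 7_500_000, 10_000_000]

cumulative = 0
for app in sorted(applications, key=lambda a: a["avg_score"], reverse=True):
    cumulative += app["cost"]
    # The tier is the smallest fundraising total covering everything scored at or
    # above this application.
    app["tier"] = next(t for t in tiers if cumulative <= t)
    print(app["name"], app["avg_score"], f"cumulative=${cumulative:,}", f"tier=${app['tier']:,}")
```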
This process is highly qualitative and is intended to demonstrate the types of projects we’d fund at various donation levels. The final ranking likely does not represent the views of any individual fund manager very well.
This analysis has weaknesses, including that:
Our current grant scoring system lacks precision except at levels close to the funding threshold. When scoring applications, we generally aim to determine the probability a grant reaches the threshold, not to track explicit expected cost-effectiveness. If a grant is clearly above or below the threshold, fund managers won’t score it as precisely.
In this post, we offer limited information on each hypothetical applicant’s suitability for each hypothetical project, but in reality, both applicant quality and applicant-project fit significantly influence our application assessment.
For the analysis, we conservatively assume that the quality of applications won’t improve even if funding surpasses expectations.
Caveat for grantseekers
This article is primarily aimed at donors, not grantees. We believe that the compatibility between an applicant and their proposed project, including personal interest and enthusiasm, plays a crucial role in the project’s success. Therefore, we discourage tailoring your applications to match the higher tiers of this list; we do not expect this to increase either your probability of getting funded or the project’s eventual impact conditional upon funding.
Grant tiers
Our primary aim in awarding grants is to optimize the trajectory of the long-term future. To that end, grantmakers try to evaluate each grant according to their subjective worldviews of whether spending $X on the grant is a sufficiently good use of limited resources given that we only have $Y total to spend for our longtermist goals.
In the tiers below, we illustrate the types of projects (and corresponding grant costs[1] in brackets) we’d potentially finance if our fundraising over the next six months reaches that tier. For each tier, we list only projects we likely wouldn’t finance if our fundraising only met the preceding tier’s total. For example, if we raised $1.2 million, we would likely fund everything in the $100,000 and $1M tiers, but only a small subset (up to $200,000) of projects in the $5M tier, and nothing in the $10M tier.
To put it differently, as the funding amount for the LTFF increases, the threshold for applications we would consider funding decreases, as there is more funding to go around.
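As a rough illustration of how a given fundraising total maps onto the tiers below, here is a simplified greedy fill that reproduces the $1.2 million example above (real allocation would follow grant scores rather than this mechanical rule):

```python
# Illustrative sketch: a raised amount fills the cumulative tier budgets in order.
# Tier totals are the cumulative amounts used in this post.
TIER_TOTALS = [100_000, 1_000_000, 5_000_000, 7_500_000, 10_000_000]  # cumulative

def allocation_by_tier(raised: int) -> dict[int, int]:
    """Return how much of `raised` is spent within each tier's slice of the ranking."""
    allocation, previous = {}, 0
    for total in TIER_TOTALS:
        slice_size = total - previous  # marginal budget this tier adds
        allocation[total] = max(0, min(raised - previous, slice_size))
        previous = total
    return allocation

print(allocation_by_tier(1_200_000))
# {100000: 100000, 1000000: 900000, 5000000: 200000, 7500000: 0, 10000000: 0}
```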
If LTFF raises $100,000
These are some fictional projects that we might fund if we had roughly $100,000 of funding over the next 6 months. Note that this is not a very realistic hypothetical: in worlds where we actually only have ~$100,000 of funding over 6 months, a) many LTFF grantmakers would likely quit, and b) the remaining staff and volunteers would likely think that referring grants to other grantmakers was a more important part of our job than allocating the remaining $100k. Still, these are projects that would meet our bar even if our funding was severely constrained in this way.
Funding to cover a four-month fellowship for a vital senior contractor at a leading AI Safety research institution ($40k).
A year-long stipend for a highly-recommended Theoretical Computer Science PhD graduate in a low cost-of-living area to independently investigate new failure modes of state-of-the-art narrowly superhuman models ($55k).
Financial support for a skilled expert in a relevant subfield to conduct three weeks of joint work with a team at an impactful biosecurity research organization, covering travel and accommodation expenses ($4k).[2]
$1M
Below are some hypothetical projects we might additionally fund if we had roughly $1M of funding over the next 6 months (roughly 1/5 to 1/6 of our past spending rate). This is roughly how much money we would have if we account only for our current reserves and the explicit promises of additional funding we’ve received.
Funding for a six-month fellowship to support a technology policy consultant in advising on a new existential risk division for a US think tank ($50k).
A four-month stipend, including physical therapy allowance, for a former research manager from an effective altruism research organization, enabling a professional sabbatical to address ongoing issues with repetitive strain injury and explore a career in independent research, possibly in digital sentience ($32k).
A four-month stipend to help a physics PhD student to transition to AI safety work ($7.5k).
One-year stipend for a researcher with a Machine Learning PhD to develop a novel approach to interpretability using LLMs ($120k).
$5M
$5M over 6 months is our current target, and roughly how much we want to raise to cover our grantmaking budget going forward. Note that our current threshold (2.9) is in between the $1M and $5M bars.
Should we secure roughly $5M in funding for the next six months, corresponding to our funding threshold from November 2022 to July 2023 (2.5), we might additionally fund the following hypothetical grants:
Nine months of financial support for a gifted recent Master’s graduate in Machine Learning to support a research collaboration on ML safety evaluations with a senior researcher ($75k).
Funding for an AI governance researcher with legal expertise to dedicate two weeks of work to a paper exploring the practical effects of antitrust laws on AI safety coordination ($2.5k).
Support for a team of translators and editors to translate the Cold Takes “Most Important Century” blog series into Spanish ($15k).
Hourly funding for a cybersecurity PhD graduate to expand their skills in AI safety and conduct original research on important cybersecurity questions in AI governance ($50/hr).
Funding to enable 10-20 mid-career materials engineering professionals to participate in a professional conference focused on reducing biorisk ($15k).
Aside from Linch:
To add more color to these examples, I’d like to discuss the sort of applications that are relatively close to the current LTFF funding bar – that is, the kind of applications we’ll neither obviously accept nor obviously reject. Hopefully, this will both demystify some of the inner workings of the LTFF and help donors make more informed decisions.
Some grant applications to the LTFF look like the following: a late undergraduate or recent graduate from an Ivy League university or a comparable institution requests a grant to conduct independent research or comparable work in a high-impact field, but we don’t find the specific proposal particularly compelling. For example, the mentee of a fairly prominent AI safety or biosecurity researcher may request 6-12 months’ stipend to explore a particular research project that their mentor(s) are excited about, but LTFF fund managers and some of our advisors are unexcited about. Alternatively, they may want to take an AGISF course, or to read and think enough to form a detailed world model about which global catastrophic risks are the most pressing, in the hopes of then transitioning their career towards combating existential risk.
In these cases, the applicant often shows some evidence of interest and focus (e.g., participation in EA local groups/EA Global or existential risk reading groups) and some indications of above-average competence or related experience, but nothing exceptional. Factors that would positively influence my impression include additional signs of dedication, a more substantial track record in relevant areas, indications of exceptional talent, or other signs of potential for a notably successful early-career investment. Conversely, evidence of deceitfulness, problematic unilateral actions or inclinations, rumors or indications of sketchiness not quite severe enough to be investigated by Community Health, or other signs or evidence of possibly becoming a high-downside grant would negatively influence my assessment.
I think the median grant application of this kind (without extenuating evidence) would be a bit below our November 2022 to July 2023 funding bar (2.5), and just above our pre-November 2022 bar (2.0).
$7.5M
If we accumulate $7.5M in funds over the next six months, we might additionally support the following hypothetical grants. This aligns with our pre-November 2022 grantmaking threshold (2.0). However, we never actually spent as much as $7.5M in any six-month period before November 2022: the quantity and quality of applications has increased this year, and previously there were not enough applications above the old bar to fund $7.5M worth of projects.
A workshop led by professional facilitators to train employees of longtermist organizations in specific project management techniques ($10k).
A junior longtermist researcher hiring a professional communications firm to teach communication best practices to professionals in longtermist organizations ($58k).
A one-year grant for a Computer Science PhD graduate, previously funded for Machine Learning safety upskilling without substantial results, to shift into agent foundations research ($85k).
Additional funds for human evaluations in studies comparing methods for fine-tuning language models ($11k).
$10M
Below are some hypothetical grants that we might additionally fund if we had $10M to spend over the next 6 months. This would correspond to a lower grantmaking bar than at any point in LTFF’s history. That said, should we actually receive such a substantial influx, we might instead opt to carry out proactive grantmaking projects we deem more impactful, and/or reconsider our general policy against saving funds.
We will always refrain from funding projects we believe are net harmful in expectation, regardless of the funds raised.
A six-month stipend for a junior software engineer to continue AI alignment research at a notable research institution, including a budget for upskilling ($40k).
Extra funding for a student who wrote a bachelor’s thesis on existential risk to pursue a master’s degree in philosophy at a renowned university ($12k).
Travel funding for a Machine Learning PhD student to present an alignment paper at a prestigious conference, despite the fund managers’ internal belief that the research itself isn’t particularly impactful ($2k).
Financial support for a Master’s degree in conflict and security, with a concentration on AI and geopolitical studies ($75k).
If you’ve read this far, please don’t hesitate to comment if you have additional questions, clarifications, or feedback!
If you think grants above the $1M tier are valuable, please consider donating to us! If we do not receive more money soon, we will have to raise our bar again, resulting in what is, by my lights, a quite suboptimal allocation of longtermist resources.
Acknowledgements
This post was written by Linch Zhang and Caleb Parikh, with considerable help from Daniel Eth. Thanks to Lizka Vaintrob, Nuño Sempere, Amber Dawn and GPT-4 for helpful feedback and suggestions.
Appendix A: Donation smoothing/saving
The LTFF saves money/smooths donations on the timescale of months (e.g. if we have unexpectedly high donations in August, we might want to ‘smooth out’ our grantmaking so that we award similar amounts in September, October, etc). However, we generally do not attempt to smooth donations on the timescale of years. That is, if we receive an unexpectedly high windfall in 2023, we would not by default plan to “save up” donations for future years. Instead, we may aim both to more aggressively solicit grant applications, and also to lower the bar for funding. Similarly, if we receive unexpectedly little in donations, we will likely raise the bar for funding and/or refer grant applicants to other donors.
This is in contrast to Open Philanthropy, which tries to optimize for making the best grants over the timescale of decades, and the Patient Philanthropy Fund, which tries to optimize for making the best grants over the timescale of centuries.
There are several considerations in favor of not attempting to do too much donation smoothing:
We are not experts on the question of how longtermist funding should be allocated over time, and aren’t trying to be. This question is arguably best decided by the surrounding community and ecosystem: donors can choose to donate to us when they believe (roughly) that we are a better use of marginal funds than other options, at whatever level of spending they desire.
We don’t and probably can’t make aggressive investment choices. Holding unused donation money in our bank accounts is likely costly compared to individuals and large foundations holding it, since they can make better and riskier investment choices than we’re able to.
The donor community has in the past been opposed to funds not disbursing money. E.g. “slow grant disbursement” was a recurring problem on CEA’s Mistakes page.
However, this policy is not set in stone. If donors or the community have strong opinions, we welcome engagement here!
[1] See this appendix in the payout report for how we set grant and stipend amounts.
[2] Note that this grant would be controversial within the fund at a $100k funding bar, as some fund managers and advisers would say we shouldn’t fund any biosecurity grants at that level of funding.