LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?
The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn’t represent what we think is the ideal split of total EA funding between cause-areas.
In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our applications are for speculative or early-stage projects. Given this, if you’re reading this and are interested in applying to the LTFF but haven’t seen us fund projects in your area before—don’t let that put you off. We’re open to funding things in a very broad range of areas provided there’s a compelling long-termist case.
Because cause prioritization isn’t actually that decision-relevant for most of our applications, I haven’t thought especially deeply about it. In general, I’d say the fund is comparably excited about marginal work in reducing long-term risks from AI, biosafety, and general longtermist macrostrategy and capacity building. I don’t currently see promising interventions in climate change, which already attracts significant funding from other sources, although we’d be open to funding something that seemed neglected, especially if it focused on mitigating or predicting extreme risks.
One area where there’s active debate is the degree to which we should support general governance improvements. For example, we made a $50,000 grant to the Center for Election Science (CES) in our September 2020 round. CES has significant room for more funding, so the main thing holding us back was uncertainty regarding the long-termist case for impact compared to more targeted interventions.