Historically I think the LTFF’s biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren’t funding interventions on climate change. We’ve received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it’s important that donors have clear expectations regarding how their money will be used.
We’ve edited the fund page to make our focus areas more explicit, and EA Funds also added the Founders Pledge Climate Change Fund for donors who want to focus on that area (and Jonas emailed donors who made this complaint, encouraging them to switch their donations to the climate change fund). I hope this will help clarify things, but we’ll have to stay attentive to donor feedback, both via things like this AMA and via our donor survey, so that we can proactively correct any misconceptions.
Another issue I think we have is that we currently lack the capacity to be more proactively engaged with our grantees. I’d like us to do this for around 10% of our grant applications, particularly those where our grant makes up a large proportion of an organisation’s budget. In these cases it’s particularly important that we hold the organisation accountable and provide strategic advice. In around a third of these cases, we’ve chosen not to make the grant because we feel unexcited about the organisation’s current direction, even though we think it could be a good donation opportunity for a more proactive philanthropist. We’re looking to grow our capacity, so we can hopefully pursue more active philanthropy in the future.
I agree that both of these are among our biggest mistakes.
Historically I think the LTFF’s biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren’t funding interventions on climate change. We’ve received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it’s important that donors have clear expectations regarding how their money will be used.
We’ve edited the fund page to make our focus areas more explicit
I agree unclear messaging has been a big problem for the LTFF, and I’m glad to see the EA Funds team being responsive to feedback around this. However, the updated messaging on the fund page still looks extremely unclear and I’m surprised you think it will clear up the misunderstandings donors have.
It would probably clear up most of the confusion if donors saw the clear articulation of the LTFF’s historical and forward-looking priorities that is already on the fund page (emphasis added):
“While the Long-Term Future Fund is open to funding organizations that seek to reduce any type of global catastrophic risk — including risks from extreme climate change, nuclear war, and pandemics — grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term.”
The problem is that this text is buried in the 6th subsection of the 6th section of the page. So people have to read through ~1500 words, the equivalent of three single-spaced typed pages, to get an accurate description of how the fund is managed. This information should be in the first paragraph (and I believe that was the case at one point).
Compounding this problem, aside from that one sentence the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding. (Aside: it’s frustrating that there’s not an easy way to see all grants categorized in a spreadsheet so that I could pull the actual numbers without going through each grant report and hand entering and classifying each grant.)
In addition to clearly communicating that the fund prioritizes AI, I would like to see the fund page (and other communications) explain why that’s the case. What are the main arguments informing the decision? Did the fund managers decide this? Did whoever selected the fund managers (almost all of whom have AI backgrounds) decide this? Under what conditions would the LTFF team expect this prioritization to change? The LTFF has done a fantastic job providing transparency into the rationale behind specific grants, and I hope going forward there will be similar transparency around higher-level prioritization decisions.
The very first sentence on that page reads (emphasis mine):
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics.
I personally think that’s quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn’t mention pandemics in that sentence? Perhaps you think “especially” is not strong enough?
An important reason why we don’t make more grants to prevent pandemics is that we receive only a few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margins, and I personally mostly agree with him on this.
Here’s a spreadsheet with all EA Funds grants (though without categorization). I agree a proper grants database would be good to set up at some point; I have now added this to my list of things we might work on in 2021.
We prioritize AI roughly for the reasons that have been elaborated on at length by others in the EA community (see, e.g., Open Phil’s report), plus additional considerations regarding our comparative advantage. I agree it would be good to provide more transparency regarding high-level prioritization decisions; I personally would find it a good idea if each Fund communicated its overall strategy for the next two years, though this takes a lot of time. I hope we will have the resources to do this sometime soon.
I personally think that’s quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn’t mention pandemics in that sentence? Perhaps you think “especially” is not strong enough?
I don’t think it’s appropriate to discuss pandemics in that first sentence. You’re saying the fund makes grants that “especially” address pandemics, and that doesn’t seem accurate. I looked at your spreadsheet (thank you!) and tried to do a quick classification. As best I can tell, AI has gotten over half the money the LTFF has granted, ~19x the amount granted to pandemics (5 grants for $114,000). Forecasting projects have received 2.5x as much money as pandemics, and rationality training has received >4x as much money. So historically, pandemics aren’t even that high among non-AI priorities.
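In case anyone wants to reproduce or sanity-check this kind of tally, here’s a minimal sketch of the quick classification I’m describing. The filename, column names, and category labels are hypothetical placeholders for a hand-classified export of the grants spreadsheet, not the spreadsheet’s actual format:

```python
import csv
from collections import defaultdict

# Hypothetical export of the EA Funds grants spreadsheet with a hand-added
# "category" column (e.g. "AI", "Pandemics", "Forecasting", "Rationality").
GRANTS_CSV = "ltff_grants_classified.csv"  # placeholder filename

totals = defaultdict(float)
with open(GRANTS_CSV, newline="") as f:
    for row in csv.DictReader(f):
        if row["fund"] != "Long-Term Future Fund":  # placeholder column names
            continue
        totals[row["category"]] += float(row["amount_usd"])

grand_total = sum(totals.values())
for category, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    share = amount / grand_total
    print(f"{category:<15} ${amount:>12,.0f}  ({share:.0%} of total)")

# The ~19x figure above is just the ratio of two of these totals.
if totals.get("AI") and totals.get("Pandemics"):
    print(f"AI : Pandemics = {totals['AI'] / totals['Pandemics']:.1f} : 1")
```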
If pandemics will be on equal footing with AI going forward, then that first sentence would be okay. But if that’s the plan, why is the management team skillset so heavily tilted toward AI?
An important reason why we don’t make more grants to prevent pandemics is that we receive only a few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margins, and I personally mostly agree with him on this.
I’m glad there’s interest in funding more biosecurity work going forward. I’m pretty skeptical that relying on applications is an effective way to source biosecurity proposals though, since relatively few EAs work in that area (at least compared to AI) and big biosecurity funding opportunities (like Open Phil grantees Johns Hopkins Center for Health Security and Blue Ribbon Study Panel on Biodefense) probably aren’t going to be applying for LTFF grants.
Regarding the page’s dual purpose, I’d say informing donors is much more important than informing applicants: it’s a bad look to misinform people who are investing money based on your information.
We prioritize AI roughly for the reasons that have been elaborated on at length by others in the EA community (see, e.g., Open Phil’s report), plus additional considerations regarding our comparative advantage. I agree it would be good to provide more transparency regarding high-level prioritization decisions; I personally would find it a good idea if each Fund communicated its overall strategy for the next two years, though this takes a lot of time. I hope we will have the resources to do this sometime soon.
There’s been plenty of discussion (including that Open Phil report) on why AI is a priority, but there’s been very little explicit discussion of why AI should be prioritized relative to other causes like biosecurity.
Open Phil prioritizes both AI and biosecurity. For every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI. If the LTFF had a similar proportion, I’d say the fund page’s messaging would be fine. But for every dollar LTFF has spent on biosecurity, it’s spent ~$19 on AI. That degree of concentration warrants an explicit explanation, and shouldn’t be obscured by the fund’s messaging.
Thanks, I appreciate the detailed response, and agree with many of the points you made. I don’t have the time to engage much more (and can’t share everything), but we’re working on improving several of these things.
Thanks Jonas, glad to hear there are some related improvements in the works. For whatever it’s worth, here’s an example of messaging that I think accurately captures what the fund has done, what it’s likely to do in the near term, and what it would ideally like to do:
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks or promote the adoption of longtermist thinking. While many grants so far have prioritized projects addressing risks posed by artificial intelligence (and the grantmakers expect to continue this at least in the short term), the Fund is open to funding, and welcomes applications from, a broader range of activities related to the long-term future.
Thanks!
I personally think that’s quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn’t mention pandemics in that sentence? Perhaps you think “especially” is not strong enough?
I agree with you that that’s pretty clear. Perhaps you could just have another sentence explaining that most grants historically have been AI-related because that’s where you receive most of your applications?
On another note, I can’t help but feel that “Global Catastrophic Risk Fund” would be a better name than “Long-term Future Fund”. This is because there are other ways to improve the long-term trajectory of civilisation than by mitigating global catastrophic risks. Also, if you were to make this change, it may help distinguish the fund from the long-term investment fund that Founders Pledge may set up.
Some of the LTFF grants (forecasting, long-term institutions, etc.) are broader than GCRs, and my guess is that at least some Fund managers are pretty excited about trajectory changes, so I personally think the current name is more accurate.
Ah OK. The description below does make it sound like it’s only global catastrophic risks.
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics.
Perhaps include the word ‘predominantly’ before the word “making”?
The second sentence on that page (i.e. the sentence right after this one) reads:
In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
“Predominantly” would seem redundant with “in addition”, so I’d prefer leaving it as-is.
OK sorry this is just me not doing my homework! That all seems reasonable.
Which of these two sentences, both from the fund page, do you think describes the fund more accurately?
1. The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. (First sentence of the fund page.)
2. Grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term. (Located ~1500 words into the fund page.)
I’d say 2 is clearly more accurate, and I think the feedback you’ve received about donors being surprised at how many AI grants were made suggests I’m not alone.
Could you operationalize “more accurately” a bit more? Both sentences match my impression of the fund. The first is more informative as to what our aims are; the second is more informative as to the details of our historical (and immediate future) grant composition.
My sense is that the first will give people an accurate predictive model of the LTFF in a wider range of scenarios. For example, if next round we happen to receive an amazing application for a new biosecurity org, the majority of the round’s funding could go on that. The first sentence would predict this, the second not.
But the second will give most people better predictions in a “business as usual” case, where our applications in future rounds are similar to those of current rounds.
My hunch is that knowing what our aims are is more important for most donors. In particular, many people reading this for the first time will be choosing between the LTFF and one of the other EA Funds, which focus on completely different cause areas. The high-level motivation seems more salient than our current grant composition for this purpose.
Ideally, of course, we’d communicate both. I’ll think about whether we should add some kind of high-level summary of the percentage of grants going to different areas under the “Grantmaking and Impact” section, which appears earlier on the page. My main worry is that this kind of thing is hard to keep up to date, and, as described above, could end up misleading donors in the other direction if our application pool suddenly changes.
Compounding this problem, aside from that one sentence the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding
Adam has mentioned elsewhere in this AMA that he would prefer to make more biosecurity grants. An interesting question here is how much the messaging should be descriptive of past donations versus aspirational about where the fund managers want to donate more in the future.
Good point! I’d say ideally the messaging should describe both forward- and backward-looking donations, and, if they differ, why. I don’t think this needs to be particularly lengthy; a few sentences could do it.