tl;dr:

1. I think the level of rigorous analysis for LTFF grants is not comparable to GiveWell’s standards. I’m sorry if I ever gave that impression, and am happy to correct that impression wherever I can.
2. The average LTFF grant size is around $40,000, while the average GiveWell grant is over $5 million, indicating a substantial difference in effort put into each grant.
3. Reasoning about existential risks and the long-term future is very difficult due to a lack of RCTs, sign confusions, and the rapidly changing landscape.
4. LTFF primarily aims to provide seed funding for potentially high-impact, long-tail projects, particularly in AI safety, with the hope that larger funders will support the projects if and when they are ready to scale.
5. For those interested in funding more (relatively) rigorous projects in the longtermist or global catastrophic risk space, you may wish to directly support established organizations like the Nuclear Threat Initiative or Johns Hopkins Center for Health Security. But please be aware that they’re still much, much more speculative than GiveWell’s recommendations.
____

Longer comment:
I work for, and make grants for, the Long-Term Future Fund. I was a fund manager at the time this grant was made, but I was not a primary investigator on this grant, and I believe I did not vote on it.
Thank you for the post!
I think Caleb and Ozzie have already made some points I wanted to make, so I just wanted to give some context on a few things that are interesting to me.
> Donors contribute to these funds expecting rigorous analysis comparable to GiveWell’s standards, even for more speculative areas that rely on hypotheticals, hoping their money is not wasted, so they entrust that responsibility to EA fund managers, whom they assume make better and more informed decisions with their contributions.
I’m sorry if we gave the impression that we arrive at our grants with a level of rigorous analysis comparable to GiveWell’s standards. I think that impression is false, and I’m happy to dispel it wherever I can.
From the outside view, my impression is that the amount of work (and money) that goes into each grant at the Long-Term Future Fund is much lower than the amount that goes into each of GiveWell’s charities. For context, our median grant is about $33,000 and our average grant is about $40,000[1]. In comparison, if I’m reading this Airtable correctly, the average GiveWell grant/recommendation is for over $5 million.
This means there is more than a 100x difference between the size of the average GiveWell grant and the size of the average LTFF grant. I’m not sure how directly the difference in dollar amount translates to a difference in effort, but if anything I would guess that the difference in effort is noticeably higher than 100x, not lower.

So unless you think we’re over 100x more efficient than GiveWell (we’re not), you should not think of our analysis as similarly rigorous to GiveWell’s, just from an outside-view look at the data.
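The ratio above can be checked with quick back-of-the-envelope arithmetic, using the approximate figures quoted earlier (a sketch with rounded numbers, not exact grant data):

```python
# Back-of-the-envelope check of the ~100x ratio, using the
# approximate figures quoted above (not exact grant data).
avg_ltff_grant = 40_000         # average LTFF grant, approximate
avg_givewell_grant = 5_000_000  # average GiveWell grant/recommendation, approximate

ratio = avg_givewell_grant / avg_ltff_grant
print(f"GiveWell average is ~{ratio:.0f}x the LTFF average")  # ~125x
```

So even with these rough numbers, the gap comfortably clears 100x.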
From an inside view, I think it’s very difficult to reason correctly about existential risks or the long-term future. Doing this type of reasoning is extremely important, but also very tricky. There is a profound lack of RCTs, sign confusions are abundant, and the space is moving so quickly that safeguards are very much not keeping up. So I think it’s not possible to be as rigorous as GiveWell, even if we wanted to be.
Which brings me to my next point: We also mostly don’t view ourselves as “trying to be as rigorous as GiveWell, but worse, and for longtermism.” Instead, we view our job primarily as making grants that are more like seed funding for long-tail, potentially highly impactful projects, particularly in AI safety. The implicit theory of change here is that other, larger funders (Open Phil, other philanthropic foundations, corporate labs, maybe governments one day) can pick up the work if and when the projects make sense to scale.
If you’re very interested in funding (relatively) rigorous projects in the longtermist or GCR space, a better option than LTFF might be to directly fund larger organizations with a more established track record, like the Nuclear Threat Initiative or Johns Hopkins Center for Health Security. To a lesser extent, you could also consider organizations that are significant but have a shorter track record, like SecureBio and the Center for AI Safety.
I think this is a reasonable take in its own right, but it sits uncomfortably with Caleb Parikh’s statement in a critical response to the Nonlinear Fund that ‘I think the current funders are able to fund things down to the point where a good amount of things being passed on are net negative by their lights or have pretty low upside.’
[1] Numbers pulled from memory. Exact numbers depend on how you count, but I’d be surprised if they’re hugely different. See e.g. this payout report.