So I wish the EA funding ecosystem were a lot more competent than we currently are. If we were good consequentialists, we ought to have detailed internal estimates of the value of various grants and grantmakers, models of which assumptions favor one group or another, detailed estimates of marginal utility, careful retroactive evaluations, etc.
But we aren’t very competent. So here’s some lower-rigor takes:
My current guess is that, of the reasonably large longtermist grantmakers, our marginal grants are at or above the quality of all other grantmakers’, evaluated solely on expected longtermist impact per $, for any given time period.
Compared to Open Phil longtermism, before ~2021 LTFF was just pretty clearly more funding-constrained. I expect this meant more triaging for good grants (though, if I understand correctly, the pool of applications was also worse back then; however, I’d expect OP longtermism to have faced similar applicant-pool constraints).
In ~2021 and 2022 (when I joined) LTFF was to some degree trying to adopt something like a “shared longtermist bar” across funders, so in practice we were trying to peg our bar to be like Open Phil’s.
So during that time I’m not sure there’s much difference; naively, I’d guess LTFF does better than OP per $ by the lights of LTFF fund managers, and OP does better by the lights of the median OP longtermist grant associate.
In 2023 (especially after June), the bars have gotten quite out of sync because of our liquidity issues. So I expect LTFF marginal grants to be noticeably better than OP’s at the current margin (and I moderately expect the median longtermist grantmaker at OP to agree with this assessment).
However, if LTFF fundraising goes as well as I currently expect, then by ~October or so I expect us to roughly recalibrate to a bar similar to OP’s. We (or at least I) are not trying very hard to substantially exceed OP’s bar in the ideal case.
Note that I’m trying my best to compare LTFF marginal grants to actual OP marginal grants. Unlike us, OP also has a very large war chest, and I happen to be very confused about the value of OP’s last dollar, which might be a more salient comparison for the in-practice counterfactual.
My understanding is that OP will give more grants if they have more grantmaker capacity.
I know less about SFF, but my guess is that we’re noticeably better than them. My reasoning: I think their grants are somewhat high-variance, and some of their grants are rather bad by my lights, while I haven’t seen much evidence that SFF has a higher proportion of positive “hits” to justify the high variance. So my guess is that the heavier left tail, without a correspondingly heavier right tail, means SFF grants have a lower mean, and maybe a lower median as well.
I think we did better than the now-defunct Future Fund (both the main team and the regranting program) per $. I think Future Fund was trying to move a lot of money on fast timescales, and their deal flow was somewhat limited by applications that OP didn’t pick up (which is much less of a problem at LTFF’s scale). Though to be fair they were founded in 2022 when cost-effectiveness with money was much less of a concern[1].
I also have the same “fatter left tail, not much evidence in favor of a fatter right tail” objection as I did with SFF.
Some potential relevant thoughts here are in my adversarial selection in longtermist grantmaking post.
I would draw attention to this self-quote “Most, although not all, of the examples are personal experiences from working on the LTFF. Many of these examples are grants that have later been funded by other grantmakers or private donors.”
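The “heavier left tail without a heavier right tail” argument above can be made concrete with a toy simulation. All numbers here are hypothetical and chosen only to illustrate the statistical point, not to model any real funder’s grants: adding occasional strongly negative grants, with no extra positive hits, drags down the mean a lot and the median a little.

```python
import random

random.seed(0)

# Toy model (hypothetical numbers, arbitrary "grant value" units):
# - baseline funder: grant values from a modest symmetric distribution
# - high-variance funder: same right tail, but a heavier left tail
#   (occasional strongly negative grants), with no extra positive "hits"

def baseline_grant():
    return random.gauss(1.0, 0.5)

def high_variance_grant():
    if random.random() < 0.15:          # 15% chance of a bad-by-my-lights grant
        return random.gauss(-2.0, 1.0)  # heavier left tail
    return random.gauss(1.0, 0.5)       # otherwise identical to baseline

n = 100_000
base = [baseline_grant() for _ in range(n)]
hv = [high_variance_grant() for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)
median = lambda xs: sorted(xs)[len(xs) // 2]

print(f"baseline      mean={mean(base):.2f} median={median(base):.2f}")
print(f"high-variance mean={mean(hv):.2f} median={median(hv):.2f}")
```

Under these made-up parameters, the high-variance funder’s mean drops by roughly the probability-weighted badness of the left tail, while the median drops only slightly, which is the shape of the claim in the bullets above.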
I think a number of other medium-sized grantmakers (Longview, Effective Giving, GWWC’s Longtermist Fund, etc) are trying to explicitly or implicitly funge with OP’s last dollar while adding a few more constraints, so naively I’d guess that LTFF’s better than them to the extent we’re better than OP (plus a little bit more to account for the additional cost of those constraints).
A weak piece of outside-view evidence for LTFF being better than other longtermist funders for the marginal dollar is that other funders (OP, SFF) have given money to us to regrant, whereas we have not afaict given money to other funders to regrant.
Though the same evidence can adequately be explained by us being more power-hungry and/or insufficiently humble, of course.
A weak piece of outside-view evidence for LTFF being worse than other longtermist funders for the marginal dollar is that other institutional funders haven’t directly offered to cover our funding gap (yet), though to some degree we’re also actively seeking to have more independence from them.
I’m less sure about Habryka’s new thing (Lightspeed Grants), Manifund, and other new grantmakers. I think “too soon to tell” is my current stance.
In Lightspeed’s case, my impression is that they share a number of both applicants and evaluators with LTFF, so if they otherwise have a better process, I wouldn’t be surprised if their grants are competitive with or better than ours.
Otoh my impression is that they were more swamped with work than anticipated, so presumably this means some sacrifice in decision quality.
Of course, “quality of grant evaluations per $” isn’t the only thing that matters in a grantmaking organization. I think we do worse on other desiderata:
I think we are solidly middle-of-the-pack or above average in terms of $s moved/grantmaker time or impact/grantmaker time.
I think we do a pretty shitty job in terms of grantee experience:
We are rather slow in getting back to applicants.
I think Manifund is better? Lightspeed was trying to be better as well, not sure if they succeeded.
As many people on the forum complained about, we rarely give feedback to rejected applicants, and limited feedback to approved applicants as well.
Our “brand” is less solid than Open Phil’s. I suspect this limits our applicant pool some.
I suspect we do very poorly on donor experience as well.
To some degree our donor experience is non-existent, eg until recently we didn’t even offer to talk to our largest donors.
That said, one plus of the current model is that we aren’t really Goodharting on or otherwise optimizing for non-consequentialist donor preferences, simply because to a large extent we aren’t even aware of them!
I suspect we’re leaving a ton of value on the table by not trying to engage with new donors, who might counterfactually not have given anything to longtermist orgs.
I think we do reasonably well on transparency, making informative posts, etc, especially recently.
But this is compared to a relatively mediocre baseline.
Though far from the most important priority, I wouldn’t be surprised if LTFF is less fun to volunteer or work at than other grantmaking organizations, especially as a grant evaluator.
Asya: “I also suspect that the lack of active discussion about grants has made the fund a worse experience for fund managers— I might describe the overall shift in the culture of the fund to have gone from ‘lively epistemic forum’ to ‘solitary grantmaking machine’.”
In comparison, I expect being a Future Fund regrantor to be more fun, SFF’s S-process and Manifund to have more productive discussions, etc.
SFF changes their grant evaluators regularly to minimize Goodharting by grantees; this is not something LTFF directly optimizes for nearly as much, so I suspect we’re worse at it.
If anything, I’d expect our level of public transparency (eg this Q&A, my posts) to make Goodharting even easier than baseline; this is a conscious tradeoff we’re making in favor of greater transparency.
My day job at the time was trying to do research to identify good “longtermist megaprojects” lmao.