(Not sure if this is the best place to ask this. I know the Q&A is over, but on balance I think it’s better for EA discourse for me to ask this question publicly rather than privately, to see if others concur with this analysis, or if I’m trivially wrong for boring reasons and thus don’t need a response).
Open Phil’s Grantmaking Approaches and Process has the 50/40/10 rule, where (in my mediocre summarization) 50% of a grantmaker’s grants have to have the core stakeholders (Holden Karnofsky from Open Phil and Cari Tuna from Good Ventures) on board, 40% have to be grants where Holden and Cari are not clearly on board, but can imagine being on board if they knew more, and up to 10% can be more “discretionary.”
Reading between the lines, this suggests that up to 10% of funding from Open Phil will go to places Holden Karnofsky and Cari Tuna are not inside-view excited about, because they trust the grantmakers’ judgements enough.
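For concreteness, here is a rough sketch of how a 50/40/10-style split might be checked (all grant amounts and category labels below are invented for illustration; the real rule is a qualitative policy, and I’m not sure whether it counts per grant or per dollar):

```python
# Hypothetical illustration of a 50/40/10-style split. All amounts and
# category labels are invented; counting here is by dollar amount, though
# the rule could equally be counted per grant.
grants = [
    (300_000, "stakeholders_on_board"),  # Holden/Cari clearly on board
    (250_000, "stakeholders_on_board"),
    (350_000, "plausibly_on_board"),     # could imagine being on board with more info
    (100_000, "discretionary"),          # rests on grantmaker discretion
]

total = sum(amount for amount, _ in grants)

def share(category: str) -> float:
    """Fraction of total funding in a given bucket."""
    return sum(amount for amount, cat in grants if cat == category) / total

print(f"on board:      {share('stakeholders_on_board'):.0%}")  # 55%
print(f"plausibly:     {share('plausibly_on_board'):.0%}")     # 35%
print(f"discretionary: {share('discretionary'):.0%}")          # 10%

# Roughly: at least ~50% should be clearly endorsed, and at most ~10% should
# rest purely on the individual grantmaker's judgement.
assert share("stakeholders_on_board") >= 0.50
assert share("discretionary") <= 0.10
```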
Is there a similar (explicit or implicit) process at LTFF?
I ask because part of the original pitch for EA Funds, as I understood it, was that it would be able to evaluate higher-uncertainty, higher-reward donation opportunities that individual donors may not be equipped to evaluate.
Yet there’s an obvious structural incentive to make “safer” and easier-to-justify-to-donors decisions.
Looking at the April, September, and November 2020 reports, none of the grants look obviously dumb to me, and there’s only one donation that I feel moderately confident in disagreeing with.
Now perhaps both I and the LTFF grantmakers are unusually enlightened individuals who accurately and independently converged on great donation opportunities given the information available. Or perhaps I coincidentally share the same tastes and interests. But it seems more likely that the LTFF is somewhat bounding its upside by making grants that seem good to informed donors at first glance with public information, in addition to being good to very informed grantmakers upon careful reflection with private information. This seems suboptimal if true.
A piece of evidence for this view is that the April 2019 grants seemed more inside-view intuitively suspicious to me at the time (and judging from the high density of critical comments on that post, this opinion was shared by many others on the EA Forum).
Now part of this is certainly that both the LTFF and the EA community were trying to “find their feet,” so to speak, and there was less of a shared social reality for what the LTFF ought to do. And nowadays we’re more familiar with funding independent researchers and projects like that.
However, I do not think this is the full story.
In general, I think I’m inclined to encourage the LTFF to become moderately more risk-seeking. In particular (if I recall my thoughts at the time correctly, and note that I have far from perfect memory or self-knowledge), I think that if I had ranked the “most suspicious” LTFF grants in April 2019, my list would have included quite a few grants that I now think are good (moderate confidence). This suggests to me that moderately informed donors are not in a great spot to quickly evaluate the quality of LTFF grants.
This is an important question. It seems like there’s an implicit assumption here that the highest-impact path for the fund is to make the grants that the fund managers’ inside view rates as highest impact, regardless of whether we can explain the grant. This is a reasonable position (and thank you for your confidence!), but I think the fund being legible does have some significant advantages:
Accountability generally seems to improve how organisations function. It’d be surprising if the LTFF were a complete exception to this, and legibility seems necessary for accountability.
There’s asymmetric information between us and donors, so less legibility will tend to mean fewer donations (and I think this is reasonable). So there’s a tradeoff between greater counterfactual impact from scale and greater impact per $ moved.
There may be community-building value in having a fund that is attractive to people without deep context or trust in the fund managers.
I’m not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and “safer” fund. Then donors can choose what kind of worldview they want to buy into.
That said, personally I don’t feel like I cast any significantly different votes with LTFF money vs. my own donations. The main difference would be that I am much more cautious about conflicts of interest with LTFF money than with my personal money, but I don’t think I’d want to change that. However, I do think I tend to have a more conservative taste in grants than some others in the longtermist community.
One thing to flag is that we do occasionally (with applicant’s permission) make recommendations to private donors rather than providing funding directly from the LTFF. This is often for logistical reasons, if something is tricky for CEA to fund, but it’s also an option if a grant requires a lot of context to understand (which we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further decreases the number of grant decisions that are influenced by any legibility considerations.
Re: Accountability
I’m not very familiar with the funds, but wouldn’t retrospective evaluations like Linch’s be more useful than legible reasoning? I feel like grantees and institutions like EA Funds with sufficiently long horizons want to remain trusted actors over the longer run, and so are sufficiently motivated that they can be trusted with some more inside-view decisions.
trust from donors can still be gained by explaining a meaningful fraction of decisions
less legible bets may have higher EV
I imagine funders will always be able to meaningfully explain at least some factors that informed them, even if some factors are hard to communicate
some donors may still not trust judgement sufficiently
maybe funded projects have measurable outcomes only far in the future (though probably there are useful proxies on the way)
evaluation of funded projects takes effort (but I imagine you want to do this anyway)
there’s an implicit assumption here that the highest-impact path for the fund is to make the grants that the fund managers’ inside view rates as highest impact
To be clear, I think this is not my all-things-considered position. Rather, I think this is a fairly significant possibility, and I’d favor an analogue of Open Phil’s 50/40/10 rule (or something a little more aggressive) over, e.g., whatever the socially mediated equivalent of full discretionary control by the specific funders would be.
I’m not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and “safer” fund
This seems like a fine compromise that I’m excited about in the abstract, though of course it depends a lot on implementation details.
One thing to flag is that we do occasionally (with applicant’s permission) make recommendations to private donors rather than providing funding directly from the LTFF[..] an option if a grant requires a lot of context to understand (which we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further decreases the number of grant decisions that are influenced by any legibility considerations.
This is really good to hear!
I do indeed think there has been pressure towards lower-risk grants, am not very happy about it, and think it has reduced the expected value of the fund by a lot. I am reasonably optimistic about that changing again in the future, but it’s one of the reasons why I’ve become somewhat less engaged with the fund. In particular, I think Alex Zhu leaving the fund was a really great loss on this dimension.
I think you, Adam, and Oli covered a lot of the relevant points.
I’d add that the LTFF’s decision-making is based on the average of the fund managers’ score votes, which allows a grant to go through when one person is very excited about it, even if the others aren’t excited, as long as they aren’t strongly against it. I.e., the mechanism allows an excited minority to make a grant that wouldn’t be approved by the majority of the committee. Overall, the mechanism strikes me as near-optimal. (Perhaps we should lower the threshold for making grants a bit further.)
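As a rough illustration of how that mechanism behaves (a minimal sketch only; the scoring scale, threshold, and votes below are invented for the example, not the LTFF’s actual numbers):

```python
# Hypothetical average-score vote: one enthusiastic fund manager can pull a
# grant over the funding bar even when the rest of the committee is lukewarm,
# while a strong objection drags the average back down. Scale, threshold,
# and votes are invented for illustration.

def approved(votes: list[float], threshold: float = 1.0) -> bool:
    """Fund the grant if the mean score clears the threshold."""
    return sum(votes) / len(votes) >= threshold

print(approved([5, 0, 0, 0]))   # True:  mean 1.25, carried by one excited voter
print(approved([5, 0, 0, -5]))  # False: mean 0.0, blocked by a strong objection
```

Lowering the threshold would move the mechanism further in the excited-minority direction.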
I do think the LTFF might be slightly too risk-averse, and splitting the LTFF into a “legible longtermist fund” and a “judgment-driven longtermist fund” to remove pressure from donors towards the legible version seems a good idea and is tentatively on the roadmap.