This is an important question. It seems like there’s an implicit assumption here that the highest-impact path for the fund to take is to make the grants that the fund managers’ inside view rates as highest impact, regardless of whether we can explain them. This is a reasonable position (and thank you for your confidence!), but I think the fund being legible does have some significant advantages:

- Accountability generally seems to improve organisations’ functioning. It’d be surprising if the LTFF were a complete exception to this, and legibility seems necessary for accountability.
- There’s asymmetric information between us and donors, so less legibility will tend to mean fewer donations (and I think this is reasonable). So there’s a tradeoff between greater counterfactual impact from scale vs. greater impact per dollar moved: a more legible fund that attracts more donations could do more good in total even if each dollar is spent somewhat less effectively.
- There may be community-building value in having a fund that is attractive to people without deep context on, or trust in, the fund managers.
I’m not sure what the right balance of legibility vs. inside view is for the LTFF. One possibility would be to split into a more inside-view, trust-based fund and a more legible, “safer” fund. Then donors could choose which kind of worldview they want to buy into.
That said, I personally don’t feel like I vote significantly differently with LTFF money vs. my own donations. The main difference is that I am much more cautious about conflicts of interest with LTFF money than with my personal money, but I don’t think I’d want to change that. However, I do think I have a more conservative taste in grants than some others in the long-termist community.
One thing to flag is that we do occasionally (with the applicant’s permission) make recommendations to private donors rather than providing funding directly from the LTFF. This is often for logistical reasons, e.g. if something is tricky for CEA to fund, but it’s also an option if a grant requires a lot of context to understand (context we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further reduces the number of grant decisions that are influenced by legibility considerations.
Re: Accountability

I’m not very familiar with the funds, but wouldn’t retrospective evaluations like Linch’s be more useful than legible reasoning? I feel like grantees, and institutions like EA Funds, have sufficiently long horizons that they want to remain trusted actors over the longer run, and so are motivated enough to be trusted with some more inside-view decisions. Some considerations:
- Trust from donors can still be gained by explaining a meaningful fraction of decisions.
- Less legible bets may have higher EV.
- I imagine funders will always be able to meaningfully explain at least some of the factors that informed them, even if other factors are hard to communicate.
- Some donors may still not trust the fund managers’ judgement sufficiently.
- Funded projects may have measurable outcomes only far in the future (though there are probably useful proxies along the way).
- Evaluating funded projects takes effort (but I imagine you want to do this anyway).
> there’s an implicit assumption here that the highest-impact path for the fund to take is to make the grants that the fund managers’ inside view rates as highest impact

To be clear, this is not my all-things-considered position. Rather, I think it is a fairly significant possibility, and I’d favor an analogue of Open Phil’s 50/40/10 rule (or something a little more aggressive) over, e.g., whatever the socially mediated equivalent of full discretionary control by the specific funders would be.
> I’m not sure what the right balance of legibility vs. inside view is for the LTFF. One possibility would be to split into a more inside-view, trust-based fund and a more legible, “safer” fund.

This seems like a fine compromise that I’m excited about in the abstract, though of course a lot depends on the implementation details.
> One thing to flag is that we do occasionally (with the applicant’s permission) make recommendations to private donors rather than providing funding directly from the LTFF [...] it’s also an option if a grant requires a lot of context to understand (context we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further reduces the number of grant decisions that are influenced by legibility considerations.
This is really good to hear!