As part of CEA’s due diligence process, all grantees must submit progress reports documenting how they’ve spent their money. If a grantee applies for renewal, we’ll perform a detailed evaluation of their past work. Additionally, we informally look back at past grants, focusing on grants that were controversial at the time, or seem to have been particularly good or bad.
I’d like us to be more systematic in our grant evaluation, and this is something we’re discussing. One problem is that many of the grants we make are quite small, so it just isn’t cost-effective for us to evaluate all our grants in detail. Because of this, any more detailed evaluation we perform would have to cover a subset of grants.
I see two main benefits of evaluation: 1) improving future grant decisions; 2) holding the fund accountable. Point 1) suggests choosing grants we expect to be particularly informative: for example, those where fund managers disagreed internally, or those we were particularly excited about and would like to replicate. Point 2) suggests focusing on grants that were controversial amongst donors, or where there were potential conflicts of interest.
It’s important to note that other things help with these points, too. For 1), improving our grant-making process, we are working on sharing best practices between the different EA Funds. For 2), we are seeking to increase transparency about our internal processes, such as in this doc (which we will soon add as an FAQ entry). Since evaluation is time-consuming, in the short term we are likely to evaluate only a small percentage of our grants, though we may scale this up as fund capacity grows.
Do the LTFF fund managers make forecasts about potential outcomes of grants?
And/or do you write down in advance what sort of proxies you’d want to see from this grant after x amount of time? (E.g., what you’d want to see to feel that this had been a big success and that similar grant applications should be viewed (even) more positively in future, or that it would be worth renewing the grant if the grantee applied again.)
One reason the first question came to mind is that I previously read a 2016 Open Phil post that states:
Both the Open Philanthropy Project and GiveWell recently began to make probabilistic forecasts about our grants. For the Open Philanthropy Project, see e.g. our forecasts about recent grants to Philip Tetlock and CIWF. For GiveWell, see e.g. forecasts about recent grants to Evidence Action and IPA. We also make and track some additional grant-related forecasts privately. The idea here is to be able to measure our accuracy later, as those predictions come true or are falsified, and perhaps to improve our accuracy from past experience. So far, we are simply encouraging predictions without putting much effort into ensuring their later measurability.
We’re going to experiment with some forecasting sessions led by an experienced “forecast facilitator”—someone who helps elicit forecasts from people about the work they’re doing, in a way that tries to be as informative and helpful as possible. This might improve the forecasts mentioned in the previous bullet point.
(I don’t know whether, how, and how much Open Phil and GiveWell still do things like this.)
We haven’t historically done this. As someone who has tried pretty hard to incorporate forecasting into my work at LessWrong, my sense is that it actually takes a lot of time until you can get a group of 5 relatively disagreeable people to agree on an operationalization that makes sense to everyone, and so this isn’t really super feasible to do for lots of grants. I’ve made forecasts for LessWrong, and usually creating a set of forecasts that actually feels useful in assessing our performance takes me at least 5-10 hours.
It’s possible that other people are much better at this than I am, but this makes me kind of hesitant to use at least classical forecasting methods as part of LTFF evaluation.
It seems plausible to me that a useful version of forecasting grant outcomes would be too time-consuming to be worthwhile. (I don’t really have a strong stance on the matter currently.) And your experience with useful forecasting for LessWrong work being very time-consuming definitely seems like relevant data.
But this part of your answer confused me:
my sense is that it actually takes a lot of time until you can get a group of 5 relatively disagreeable people to agree on an operationalization that makes sense to everyone, and so this isn’t really super feasible to do for lots of grants
Naively, I’d have thought that, if that was a major obstacle, you could just have a bunch of separate operationalisations, and people can forecast on whichever ones they want to forecast on. If, later, some or all operationalisations do indeed seem to have been too flawed for it to be useful to compare reality to them, assess calibration, etc., you could just not do those things for those operationalisations/that grant.
(Note that I’m not necessarily imagining these forecasts being made public in advance or afterwards. They could be engaged in internally to the extent that makes sense—sometimes ignoring them if that seems appropriate in a given case.)
Is there a reason I’m missing for why this doesn’t work?
Or was the point about difficulty of agreeing on an operationalisation really meant just as evidence of how useful operationalisations are hard to generate, as opposed to the disagreement itself being the obstacle?
I think the most lightweight-but-still-useful forecasting operationalization I’d be excited about is something like:
12/24/120 months from now, will I still be very excited about this grant?
12/24/120 months from now, will I be extremely excited about this grant?
This gets at whether people think it’s a good idea ex post, and also (if people are well-calibrated) can quantify whether people are insufficiently or excessively risk/ambiguity-averse, in the classic sense of the term.
This seems helpful for assessing fund managers’ calibration and improving their own thinking and decision-making. It’s less likely to be useful for communicating their views transparently to one another, or to the community, and it’s susceptible to post-hoc rationalization. I’d prefer an oracle external to the fund, like “12 months from now, will X have a ≥7/10 excitement about this grant on a 1-10 scale?”, where X is a person trusted by the fund managers who will likely know about the project anyway, such that the cost to resolve the forecast is small.
I plan to encourage the funds to experiment with something like this going forward.
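Once yes/no questions like these resolve, checking calibration is cheap. As a minimal sketch (the probabilities and outcomes below are made up for illustration, not real LTFF forecasts), the Brier score gives a single number summarizing forecast accuracy:

```python
# Hypothetical sketch: scoring simple yes/no grant forecasts after they
# resolve. Each entry pairs a predicted probability with the actual outcome
# (1 = yes, 0 = no). The data here is invented for illustration.

def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.
    0.0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# e.g. forecasts on "12 months from now, will I still be very excited
# about this grant?"
forecasts = [
    (0.8, 1),  # predicted 80% "still excited"; turned out yes
    (0.6, 0),
    (0.9, 1),
    (0.3, 0),
]

print(brier_score(forecasts))  # 0.125
```

A consistently high score (relative to a naive baseline) would suggest the excitement forecasts aren’t tracking outcomes well; comparing scores across fund managers could also surface systematic over- or under-confidence.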
Just to make sure I’m understanding, are you also indicating that the LTFF doesn’t write down in advance what sort of proxies you’d want to see from this grant after x amount of time? And that you think the same challenges with doing useful forecasting for your LessWrong work would also apply to that?
These two things (forecasts and proxies) definitely seem related, and both would involve challenges in operationalising things. But they also seem meaningfully different.
I’d also think that, in evaluating a grant, I might find it useful to partly think in terms of “What would I like to see from this grantee x months/years from now? What sorts of outputs or outcomes would make me update more in favour of renewing this grant—if that’s requested—and making similar grants in future?”
We’ve definitely written informally things like “this is what would convince me that this grant was a good idea”, but we don’t have a more formalized process for writing down specific objective operationalizations that we all forecast on.
I’m personally pretty excited about trying to make some quick forecasts for a significant fraction (say, half) of the grants we actually make, but this is something on my list to discuss with the LTFF at some point. I mostly agree with the issues that Habryka mentions, though.
Do the LTFF fund managers make forecasts about potential outcomes of grants?
To add to Habryka’s response: we do give each grant a quantitative score (on a −5 to +5 scale, where 0 is zero impact). This obviously isn’t as helpful as a detailed probabilistic forecast, but I think it does give a lot of the value. For example, one question I’d like to answer from retrospective evaluation is whether we should be more consensus-driven or fund anything that at least one manager is excited about. We could address this by scrutinizing past grants that had a high variance in scores between managers.
I think it might make sense to start doing forecasting for some of our larger grants (where we’re willing to invest more time), and when the key uncertainties are easy to operationalize.
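Identifying those high-variance grants from the existing scores is straightforward. A minimal sketch (grant names and scores are made up, not real LTFF data), using the −5 to +5 scale described above:

```python
from statistics import pvariance

# Hypothetical sketch: rank grants by how much managers' scores (on the
# -5 to +5 scale) diverged, to pick candidates for retrospective
# evaluation. All names and scores below are invented.

scores = {
    "grant_a": [3, 4, 3, 4],   # broad agreement
    "grant_b": [-2, 5, 0, 4],  # strong disagreement
    "grant_c": [1, 2, 0, 1],
}

# Sort grants by variance of managers' scores, most contested first.
contested = sorted(scores, key=lambda g: pvariance(scores[g]), reverse=True)
print(contested[0])  # grant_b
```

The most contested grants would then be the natural sample for checking whether consensus picks or single-champion picks performed better in hindsight.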
I agree that your proposed operationalization is better for the stated goals, assuming similar levels of overhead.