I’d be a bit surprised if there wasn’t already a process in place for retrospective analysis of this sort. Is there any public info available about if/how EA Funds do this?
When I looked at this as part of the 2022 red teaming contest, I found that “EA Funds has received roughly $50 million in donations and has made hundreds of grants, but has never published any post-grant assessments.” I’m almost positive there haven’t been any retrospective analyses of EA Funds grants since then.
This problem isn’t unique to EA Funds. I also found that EA Grants and the Community Building Grants program both lacked any kind of public post-grant assessment.
The unfortunate result is that while lots of time and money have been invested in various grantmaking programs, we don’t really know much about what types of grantmaking are most effective (e.g. granting to individuals vs. established organizations). It’s true that post-grant assessment is costly to conduct, but it’s disappointing that we haven’t made this investment, which could significantly improve the efficacy of future grantmaking.
There was an LTFF evaluation a few years ago.
I wonder if you could make post-grant assessment really cheap by automatically emailing grantees some sort of Google Form. It could show them what they wrote on their grant application and ask them how well they achieved their stated objectives, plus various other questions. You could have a human randomly audit the responses to incentivize honesty.
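To make the idea concrete, here's a minimal sketch of what such an automated follow-up might look like. Everything here is hypothetical: the grant fields, the form URL, and the audit fraction are illustrative placeholders, not anything EA Funds actually uses.

```python
import random
import textwrap

# Hypothetical grant records; field names are illustrative, not a real schema.
grants = [
    {"grantee": "alice@example.org", "project": "Forecasting workshop series",
     "stated_objective": "Run 6 workshops for 100+ participants"},
    {"grantee": "bob@example.org", "project": "Biosecurity reading group",
     "stated_objective": "Publish 10 session summaries"},
]

FORM_URL = "https://forms.example.com/post-grant-self-assessment"  # placeholder

def follow_up_email(grant):
    """Compose a self-assessment request that echoes back the grant application."""
    return textwrap.dedent(f"""\
        Subject: Post-grant self-assessment: {grant['project']}

        In your grant application you wrote:
          "{grant['stated_objective']}"

        Please rate how fully you achieved this objective and answer a few
        short follow-up questions here: {FORM_URL}
        Note: responses may be randomly selected for a human audit.""")

def audit_sample(grants, fraction=0.5, seed=None):
    """Randomly pick a subset of grants for human audit, to incentivize honesty."""
    rng = random.Random(seed)
    k = max(1, round(len(grants) * fraction))
    return rng.sample(grants, k)

emails = [follow_up_email(g) for g in grants]
audited = audit_sample(grants, fraction=0.5)
```

The key design choice is echoing the original application text back to the grantee, so the self-assessment is anchored to what they actually promised rather than a retrospective reframing; the advertised random audit is what supplies the incentive to report honestly.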
Wow, I didn’t realize that evaluation existed! Thanks for sharing! (Though given that this evaluation only covers ~2 dozen grants for one fund, I think my overall assessment that there’s little in the way of post-grant evaluation still holds).
Self-assessment via a simple Google Form is an interesting idea. My initial reaction is that it would be hard to structure the incentives well enough for me to trust self-assessments. But it still could be better than nothing. I’d be more excited about third-party evaluations (like the one you shared), even if they were extremely cursory (e.g. flagging which grants have evidence that the project was even executed vs. those that don’t) and selective (e.g. ignoring small grants to save time/effort).
Yeah, to be clear, I am also quite sad about this. If I had more time next to my other responsibilities, I think doing better public retrospectives on grants the LTFF made would be one of my top things to do.