I wonder if you could make post-grant assessment really cheap by automatically emailing grantees some sort of Google Form. It could show them what they wrote on their grant application and ask them how well they achieved their stated objectives, plus various other questions. You could have a human randomly audit the responses to incentivize honesty.
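A minimal sketch of what that pipeline could look like, assuming a simple list of grant records (the field names, email text, and `AUDIT_RATE` value are all illustrative assumptions, not an existing system):

```python
import random

# Hypothetical grant records; in practice these would come from the
# grant database (field names here are assumptions for illustration).
grants = [
    {"grantee_email": "a@example.org", "stated_objectives": "Run 3 workshops"},
    {"grantee_email": "b@example.org", "stated_objectives": "Publish a report"},
]

AUDIT_RATE = 0.1  # fraction of self-assessments a human spot-checks


def build_followup_email(grant):
    """Draft a follow-up message quoting the grantee's original application."""
    return (
        "You wrote in your application:\n"
        f"> {grant['stated_objectives']}\n\n"
        "How well did you achieve these objectives? (link to form here)"
    )


def select_for_audit(responses, rate=AUDIT_RATE, seed=None):
    """Randomly flag a subset of form responses for human review.

    Because grantees can't predict which responses get audited, even a
    low audit rate gives some incentive toward honest self-assessment.
    """
    rng = random.Random(seed)
    return [r for r in responses if rng.random() < rate]
```

The main design question is picking an audit rate high enough that grantees expect a meaningful chance of review, while keeping the human workload a small fraction of full post-grant evaluation.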
Wow, I didn’t realize that evaluation existed! Thanks for sharing! (Though given that this evaluation only covers ~2 dozen grants for one fund, I think my overall assessment that there’s little in the way of post-grant evaluation still holds).
Self-assessment via a simple Google Form is an interesting idea. My initial reaction is that it would be hard to structure the incentives well enough for me to trust self-assessments. But it could still be better than nothing. I’d be more excited about third-party evaluations (like the one you shared), even if they were extremely cursory (e.g. flagging which grants have evidence that the project was even executed vs. those that don’t) and selective (e.g. ignoring small grants to save time/effort).
There was an LTFF evaluation a few years ago.