GiveWell did their first “lookbacks” (reviews of past grants) to see whether the grants met initial expectations and what they could learn from them:
Lookbacks compare what we thought would happen before making a grant to what we think happened after at least some of the grant’s activities have been completed and we’ve conducted follow-up research. While we can’t know everything about a grant’s true impact, we can learn a lot by talking to grantees and external stakeholders, reviewing program data, and updating our research. We then create a new cost-effectiveness analysis with this updated information and compare it to our original estimates.
(While I’m very glad they did so with their usual high quality and rigor, I’m also confused why they hadn’t started doing this earlier, given that “okay, but did we really help as much as we think we would’ve? Let’s check?” feels like such a basic M&E / ops-y question. I’m obviously missing something trivial here, but also I find it hard to buy “limited org capacity”-type explanations for GW in particular given total funding moved, how long they’ve worked, their leading role in the grantmaking ecosystem etc)
Their lookbacks led to substantial changes from the original estimates. In New Incentives’ case this was driven by large drops in cost per child enrolled (“we think this is due to economies of scale, efficiency efforts by New Incentives, and the devaluation of the Nigerian naira, but we haven’t prioritized a deep assessment of drivers of cost changes”); in HKI’s case it was driven by vitamin A deficiency rates in Nigeria being lower, and counterfactual coverage rates higher, than originally estimated.
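(To make the direction of those updates concrete, here’s a rough sketch in Python. The toy model and every number in it are made up for illustration, not taken from GiveWell’s actual cost-effectiveness analyses; it just shows how a lower deficiency rate and higher counterfactual coverage push the estimated cost per death averted up, while a lower cost per child would pull it back down.)

```python
# A toy "lookback" comparison: recompute a simple cost-effectiveness figure
# with updated inputs and compare it to the original estimate.
# Every number and the model itself are illustrative, NOT GiveWell's figures.

def cost_per_death_averted(cost_per_child, deficiency_rate,
                           counterfactual_coverage, mortality_reduction,
                           baseline_mortality):
    """Cost to avert one death under a deliberately oversimplified model.

    Only children who are deficient and not already covered by other programs
    can benefit; among them, deaths averted = baseline mortality * relative
    mortality reduction from supplementation.
    """
    reachable = deficiency_rate * (1 - counterfactual_coverage)
    deaths_averted_per_child = reachable * baseline_mortality * mortality_reduction
    return cost_per_child / deaths_averted_per_child

# "Original" (pre-grant) estimate vs. "lookback" (post-grant) estimate,
# moving deficiency down and counterfactual coverage up (the direction of
# the HKI update described above).
original = cost_per_death_averted(cost_per_child=2.00, deficiency_rate=0.30,
                                  counterfactual_coverage=0.40,
                                  mortality_reduction=0.12,
                                  baseline_mortality=0.01)
updated = cost_per_death_averted(cost_per_child=2.00, deficiency_rate=0.20,
                                 counterfactual_coverage=0.60,
                                 mortality_reduction=0.12,
                                 baseline_mortality=0.01)

print(f"Original estimate: ${original:,.0f} per death averted")
print(f"Lookback estimate: ${updated:,.0f} per death averted "
      f"({updated / original:.1f}x the original)")
```

A real lookback updates many more inputs than this, but the comparison step is essentially the same: rerun the model with the new inputs and look at how far the result moves from the original estimate.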
I’m obviously missing something trivial here, but also I find it hard to buy “limited org capacity”-type explanations for GW in particular given total funding moved, how long they’ve worked, their leading role in the grantmaking ecosystem etc)
This should be very easy for you to buy! The opportunity cost of lookbacks is investigating new grants. It’s not obvious that lookbacks are the right way to spend limited research capacity. Worth remembering that GW only has around 30 researchers and makes grants in a lot of areas. And while they are a leading EA grantmaker, it’s only recently that their giving has scaled up to the point of being a notable player in the broader global development ecosystem.
I suspect there is a confusion of terminology here, and also perhaps some loss of institutional knowledge. GiveWell did post-hoc analyses, starting in 2011, of their 2009 and 2010 recommendations to donate to VillageReach, but these were technically “charity recommendations” rather than “grants”, so I guess they wouldn’t be considered a “grant lookback”.
In recent years GiveWell shifted from a charity recommendation model to a more direct grantmaking model, so these could be the first reviews of grants under that new model.
Yep I agree!
I’ve done a quick sanity check on the New Incentives numbers and they don’t seem quite plausible to me, but it was fast and I could be plain wrong.
https://forum.effectivealtruism.org/posts/FxAtFMRnJZ2dbLBhA/sanity-check-givewell-s-new-incentives-estimate-seems
I would also like to see OpenPhil look back at a bunch of their “hits-based” grants. They’ve done a decent number of them, and I think we should be able to get some idea of whether the approach is working as planned. It wouldn’t have to be too detailed. They could even do something a bit loose, like categorising them into maybe four buckets:
1. Miss
2. Probable miss
3. Some benefit
4. Home run (clear hit)
Or similar