Summary:
The charity evaluator GiveWell increased their rating of the cost-effectiveness of GiveDirectly’s Cash for Poverty Relief program by 3-4x after reevaluating our work, including assessing new evidence.
This update was driven by new estimates of direct cash’s positive impact on local economies, consumption, and child mortality, which show our work (past & present) is more impactful than they’d previously assumed.
This update hasn’t changed GiveWell’s top charity and funding recommendations, but this could shift in the future.
We’re excited about this update and look forward to continued conversations with GiveWell, as we continue to generate new evidence on cash’s long-term impact which may shift their assumptions again.
From GiveWell’s post:
We estimate that this program is ~3-4x more cost-effective than we had previously estimated, and around ~30-40% as cost-effective as our marginal funding opportunity.
Nick Allardice (GiveDirectly’s CEO) has posted this video.
I think this is great and a pretty huge development. I have two broad-strokes comments here.
1. Few other interventions have the research clout to look at such a wide range of outcomes years after the intervention, which may favor cash.
2. I still think the survey-based follow-up (nearly all of the follow-up) after cash transfers biases towards cash to an extent people underrate, including interviewer bias (blinding is practically impossible), desirability bias, and future-hope bias. This is simply because people loooooove getting cash more than any other intervention.
That said, I still think we should give loads more cash to everyone. GiveDirectly also becomes even more cost-effective given that almost all of their money comes from non-EA sources.
On (1), I tend to agree, and I don’t think a lot of the CEA on health even takes into account the cost of treatment purchases/copays and clinic transport (something I’m sure you have great data on!), which is not insignificant if your annual cash income is <$200 and your kid potentially gets infected multiple times per annum. Some CEA doesn’t even include morbidity. But I don’t think there’s much scope for comparable medium-term multiplier effects on a neighbourhood from typical health programs.
I think (2) is an important point. I’ve read studies before (not the Egger one, which appears to have been the biggest factor here) which claim the remarkable finding that cash transfers had positive effects on things like domestic violence in neighbouring households that didn’t receive them, and thought to myself: have you not considered the possibility that people have noticed the outsiders with clipboards asking personal questions seem to be associated in some way with their neighbours getting unexpected windfalls, and started to speculate about what sort of answers the NGOs are looking for...
It’s an interestingly large upward revision by GiveWell though, especially since they seem to have heavily discounted some of the study results.
A quick look at Egger suggests their main novelty was that they considered positive spillover effects on other villages up to 2km away (whereas other studies didn’t, or may even have used those neighbouring villages as controls, with increases in their income reducing the estimated cash transfer effect size). This seems plausible, though it also seems plausible they’re bundling in other local effects with that. They do seem to have plausible data that non-recipients are actually getting higher earned incomes by being paid to do more labour by recipients, which is the sort of thing these programmes hope to achieve.
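To make the “neighbouring villages as controls” worry concrete, here’s a toy calculation (numbers entirely made up, not from Egger et al.):

```python
# Toy numbers (made up): if "control" villages also gain from spillovers,
# a simple treated-vs-control comparison understates the true effect.
no_program_anywhere = 100.0  # true counterfactual consumption
treated_village = 130.0      # recipient villages
nearby_control = 110.0       # "control" villages ~1km away, lifted by spillovers

true_effect = treated_village - no_program_anywhere  # 30
measured_effect = treated_village - nearby_control   # 20, biased downward

print(f"True effect: {true_effect:.0f}, measured with contaminated controls: {measured_effect:.0f}")
```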
I love the way you put this
“have you not considered the possibility that people have noticed the outsiders with clipboards asking personal questions seem to be associated in some way with their neighbours getting unexpected windfalls, and started to speculate about what sort of answers the NGOs are looking for...”
Hi Nick & David,
I wrote this piece and wanted to offer my $0.02 on Hawthorne effects driving these consumption spillover results, as it’s not covered in the report. I don’t think this is likely to be a key driver of the large spillovers reported, for two reasons:
1. To measure consumption spillovers, Egger et al. is essentially comparing consumption in nearby non-recipient households (e.g. <2km away) to consumption in further-away non-recipient households (e.g. 10km); a toy sketch of this comparison follows these two points. For this to produce biased results, you’d have to think the nearer non-recipients are gaming their answers in a way that the further-away non-recipients aren’t. That seems plausible to me – but it also seems plausible that the further-away non-recipients will still be aware of the program (so might have similar, counterbalancing incentives)
2. Even if you didn’t buy this, I’m not convinced the bias would be in the direction you’re implying. The program studied in Egger et al. was means-tested – cash transfers were only given to households with thatched roofs. If you think nearby non-recipients are more likely to be gaming the system, it seems plausible to me that they’d infer that poorer households are more likely to get cash, so it makes sense for them to understate their consumption. This would bias the results downward
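Here’s the toy sketch of that near-vs-far comparison in point 1 (simulated, made-up numbers and effect size; not the study’s data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly consumption for non-recipient households (arbitrary units).
# Assumption (mine, not Egger et al.'s): nearby non-recipients enjoy a real
# spillover of about +5 on a base of ~100.
far_nonrecipients = rng.normal(100, 15, size=500)   # e.g. ~10km from recipients
near_nonrecipients = rng.normal(105, 15, size=500)  # e.g. <2km from recipients

# The spillover estimate is the near-vs-far difference in means, so reporting
# bias only matters insofar as it differs *between* these two groups.
spillover_estimate = near_nonrecipients.mean() - far_nonrecipients.mean()
print(f"Estimated consumption spillover: {spillover_estimate:.1f}")
```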
Hawthorne effects for recipient consumption gains seem more intuitively concerning to me, and I’ve been wondering whether this could be part of the story behind these large recipient consumption gains at 5-7 years we’ve been sent. We’re not putting much weight on these results at the moment as they’ve not been externally scrutinized, but it’s something I plan to think more about if/when we revisit these.
Interestingly, the Metaculus forecast on this was off by an order of magnitude (15% vs 300-400%). Only three people forecast, so I wouldn’t read too much into it, but it is a wide gap.
I haven’t read the whole report and I don’t know anything about development economics, so I might be misinterpreting it, but I was really surprised by:
If I read this table correctly, GiveWell estimates there’s only a 50% chance that they’ll make a >=40% adjustment to the estimated cost-effectiveness of GiveDirectly’s main program, right after making a 330% adjustment and with many uncertainties still unresolved.
Looking at this table and this graph, it seems that GiveDirectly’s program has had increasing marginal cost-effectiveness, rather than diminishing returns, as it expanded to Malawi, Rwanda and Mozambique. This is another update against https://www.givedirectly.org/dont-wait/
The 46% reduction in all-cause under-5 mortality seems absurdly high; even the 23% that GiveWell uses after discounting it is way higher than I would have ever thought, and has extremely depressing implications.
Interesting one, nice observations. What do you mean when you say that the 23% mortality reduction has “extremely depressing implications”?
I think he’s referring to the paragraph lower down which says
Oh yeah, that’s super interesting that the mortality effect doesn’t change the cost-effectiveness estimate that much. I wonder why that is exactly? Might look into it later!
Cash transfers are not targeted (i.e. lots of households that receive transfers don’t have young children) and are very expensive relative to other ways to avert child deaths (~$1,000 per transfer vs a few dollars for a bednet). The latter varies over more orders of magnitude than child mortality effects, so it dominates the calculation.
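A rough back-of-the-envelope version of that argument (illustrative numbers only; the baseline mortality, under-5 share, and bednet effect size below are my assumptions, not GiveWell’s model):

```python
# Back-of-the-envelope cost per under-5 death averted (illustrative numbers only,
# not GiveWell's model). The point: cost per child reached spans orders of
# magnitude between interventions, while plausible mortality effects don't.
baseline_u5_mortality = 0.05     # assumed 5% risk of death before age 5

# Cash: ~$1,000 per household, only some households have a child under 5,
# and assume a 23%-style relative mortality reduction for those that do.
cash_cost_per_household = 1000
share_with_under5 = 0.5          # assumption
cash_mortality_reduction = 0.23
cash_cost_per_death_averted = cash_cost_per_household / (
    share_with_under5 * baseline_u5_mortality * cash_mortality_reduction
)

# Bednets: a few dollars per child protected, smaller relative reduction assumed.
net_cost_per_child = 5
net_mortality_reduction = 0.15   # assumption
net_cost_per_death_averted = net_cost_per_child / (
    baseline_u5_mortality * net_mortality_reduction
)

print(f"cash: ~${cash_cost_per_death_averted:,.0f} per under-5 death averted")
print(f"nets: ~${net_cost_per_death_averted:,.0f} per under-5 death averted")
```

Even if you vary the mortality assumptions a lot, the orders-of-magnitude gap in cost per child reached dominates.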
Wouldn’t the economic spillover effects depend on macroeconomic conditions? Government stimulus is more useful when there is more slack in the economy and more inflationary when there’s a tight labor market. I’d expect cash transfers to be similar.
I don’t know the conditions in the specific places studied, but in a lot of places there was significant slack in the economy from the Great Recession until Covid, and the labor markets are now tighter. So studies conducted in the 2010s might overestimate the present-day net benefits of economic spillovers.