Thanks Charles for your detailed response.
I agree with your central point that it’s very hard to use statistics to prove anything. In particular, you need a huge amount of data and there is lots of noise as the real world is not a clean & tidy place.
For bednets, we do have a huge amount of data. The World Malaria Report 2011, used in GiveWell’s macro review, says 145 million bednets were distributed in sub-Saharan Africa in 2010 alone [1]. That’s theoretical coverage for around 30% of the population [2]. This is a massive level of intervention.
For malaria, we also have lots of noise. The same World Malaria Report puts annual deaths in the range 537,000-907,000. That’s a pretty wide confidence interval. The Lancet gives 929,000-1,685,000 deaths per year. That’s a wider range than the first and the two ranges don’t even overlap. [3]
I understand GiveWell’s position (and yours?) to be “There is so much noise that the real-world observations don’t really tell you anything. You have to treat the Randomised Controlled Trials as proving the concept & monitor AMF to ensure competent delivery”. This might well be right. However, it is then unclear what information could ever be supplied to change GiveWell’s mind. How many bednets would we have to distribute with no evidence of impact before we revisit the recommendation? A billion? 100 billion? Put another way, imagine 10 years from now we find out that bednet distributions had much less impact than we expected. What would be the evidence that demonstrates this, and where might we look now for clues that such evidence is emerging?
More generally, if an intervention can’t stand out from the statistical noise then I’m not sure it passes my personal threshold for a top intervention. As a minimum this means the scale of the problem, and so the scale of our impact, is not well understood. An intervention that can’t stand out from statistical noise has no way of providing feedback to providers on when it is going well or badly, and so has no way to avoid mistakes and no way to improve. Finally, there’s also a psychological element about certainty of impact that will be a big deal to some donors, but that’s a topic for another day.
[1] Source: https://www.who.int/malaria/world_malaria_report_2011/WMR2011_factsheet.pdf
[2] Based on GiveWell’s assumption of 1.8 people covered per net & a population of 869m, as per here: https://www.statista.com/statistics/805605/total-population-sub-saharan-africa/
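For anyone who wants to check the ~30% coverage figure in [2], here is a minimal sketch of the arithmetic, using only the numbers stated above (145m nets, GiveWell’s assumed 1.8 people covered per net, a sub-Saharan population of 869m):

```python
# Coverage arithmetic from footnote [2]; all inputs are figures quoted above.
nets_distributed = 145_000_000   # nets distributed in sub-Saharan Africa, 2010
people_per_net = 1.8             # GiveWell's assumed people covered per net
population = 869_000_000         # sub-Saharan Africa population

people_covered = nets_distributed * people_per_net
coverage = people_covered / population

print(f"Theoretical coverage: {coverage:.0%}")  # roughly 30%
```

This is theoretical coverage only; it assumes every net is used and ignores nets distributed in earlier years wearing out.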
Thanks Linch, interesting thoughts.
To clarify, my point is not just that there’s no direct empirical evidence of AMF’s specific distributions saving lives. My point is that there is no direct evidence of any non-RCT/”real world” distributions saving lives.
Further, this is not because nobody is looking for such evidence. GiveWell’s macro review of the evidence suggests every time somebody has looked for evidence of non-RCT/”real world” distributions saving lives they’ve come up with nothing.
I agree with your summary of the GiveWell argument (strong RCT evidence + AMF as competent distributor). However, in order to turn these two facts into a prediction about the future we need to add the assumption that the RCT evidence applies to future distributions. This is the weak link in the chain. As you say, differences in malarial load could distort things. Differences in the underlying health of the population, differences in net usage and increasing insecticide resistance are other contenders, along with many more I’m sure. If we can’t see any evidence of impact after distributing hundreds of millions of bednets then it seems reasonable to question whether this key assumption is leading us astray.