Thanks Charles for your detailed response.
I agree with your central point that it’s very hard to use statistics to prove anything. In particular, you need a huge amount of data and there is lots of noise as the real world is not a clean & tidy place.
For bednets, we do have a huge amount of data. The World Malaria Report 2011, used in GiveWell’s macro review, says 145 million bednets were distributed in sub-Saharan Africa in 2010 alone [1]. That’s theoretical coverage for around 30% of the population [2]. This is a massive level of intervention.
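For anyone who wants to check the arithmetic behind that 30% figure, here is a quick sanity check using only the numbers cited above and in the footnotes (145m nets, GiveWell’s 1.8 people covered per net, an 869m population):

```python
# Sanity check on the ~30% theoretical coverage figure.
nets = 145_000_000          # bednets distributed in sub-Saharan Africa, 2010 (WMR 2011)
people_per_net = 1.8        # GiveWell's coverage assumption
population = 869_000_000    # approximate sub-Saharan Africa population

coverage = nets * people_per_net / population
print(f"Theoretical coverage: {coverage:.0%}")  # ~30%
```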
For malaria, we also have lots of noise. The same World Malaria Report puts annual deaths in the range 537,000-907,000. That’s a pretty wide confidence interval. The Lancet gives 929,000-1,685,000 deaths per year. That’s an even wider range, and the two ranges don’t even overlap. [3]
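To make the disagreement between the two estimates concrete, a few lines of arithmetic on the figures quoted above show that the Lancet’s range is roughly twice as wide as the WMR’s, and that the two intervals are disjoint:

```python
# Compare the two published ranges for annual malaria deaths.
wmr_low, wmr_high = 537_000, 907_000          # World Malaria Report 2011
lancet_low, lancet_high = 929_000, 1_685_000  # Lancet estimate

# Two intervals overlap only if each starts before the other ends.
overlap = max(wmr_low, lancet_low) <= min(wmr_high, lancet_high)

print(f"WMR range width:    {wmr_high - wmr_low:,}")        # 370,000
print(f"Lancet range width: {lancet_high - lancet_low:,}")  # 756,000
print(f"Ranges overlap:     {overlap}")                     # False
```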
I understand GiveWell’s position (and yours?) to be “There is so much noise, the real-world observations don’t really tell you anything. You have to focus on the Randomised Controlled Trials as proving the concept & monitor AMF to ensure competent delivery”. This might well be right. However, it is then unclear what information could ever be supplied to change GiveWell’s mind. How many bednets would we have to distribute with no evidence of impact before we revisit the recommendation? A billion? 100 billion? Put another way, imagine 10 years from now we find out that bednet distributions had much less impact than we expected. What would be the evidence that demonstrates this, and where might we look now for clues that such evidence is emerging?
More generally, if an intervention can’t stand out from the statistical noise then I’m not sure it passes my personal threshold for a top intervention. At a minimum, this means the scale of the problem, and so the scale of our impact, is not well understood. An intervention that can’t stand out from statistical noise has no way of providing feedback to providers on when it is going well or badly, and so has no way to avoid mistakes and no way to improve. Finally, there’s also a psychological element about certainty of impact that will be a big deal to some donors, but that’s a topic for another day.
[1] Source: https://www.who.int/malaria/world_malaria_report_2011/WMR2011_factsheet.pdf
[2] Based on GiveWell’s assumption of 1.8 people covered per net & a population of 869m, as per here: https://www.statista.com/statistics/805605/total-population-sub-saharan-africa/
[3] Source: https://blog.givewell.org/2013/01/23/guest-post-from-david-barry-about-deworming-cost-effectiveness/