Hey Nick, thanks for raising this question about the plausibility of chlorination’s effect on mortality and the need for more research to understand why. I’m a senior researcher at GiveWell and wanted to chime in with a little more context.
When we did our analysis, we agreed that the ~25%-30% headline figure from the Kremer et al. meta-analysis felt implausibly high. We end up with estimates of ~5%-15% depending on the country and program (e.g., ~6% for the Dispensers for Safe Water Program in Uganda and ~12% for the in-line chlorination program in Malawi).
A bit more on what we did:
We were reluctant to take the Kremer et al. results at face value for the reasons you listed: the reduction in mortality is higher than we’d expect, even if we make pretty generous assumptions about the Mills-Reincke effect, and higher than the experts we spoke to would’ve expected, too.
Instead, we first did our own meta-analysis, focusing only on RCTs that study chlorination (as opposed to other ways of improving water quality) and have follow-up lengths of at least a year. We also excluded one RCT with an implausibly high effect. Based on this, we estimate an effect of ~12%.
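To make the selection logic concrete, here’s a minimal sketch of that kind of filtering and pooling. This is not GiveWell’s actual code; the study names, effect sizes, and thresholds are all made up for illustration:

```python
# Hypothetical study list: (name, intervention, follow-up in years,
# relative risk of all-cause mortality). All values are invented.
studies = [
    ("Trial A", "chlorination", 2.0, 0.90),
    ("Trial B", "chlorination", 1.5, 0.85),
    ("Trial C", "filtration",   2.0, 0.80),  # excluded: not chlorination
    ("Trial D", "chlorination", 0.5, 0.88),  # excluded: follow-up < 1 year
    ("Trial E", "chlorination", 1.2, 0.40),  # excluded: implausibly large effect
]

included = [s for s in studies
            if s[1] == "chlorination" and s[2] >= 1.0 and s[3] > 0.5]

# Unweighted pooling for illustration only; a real meta-analysis would
# weight studies (e.g., by inverse variance).
pooled_rr = sum(s[3] for s in included) / len(included)
print(f"Pooled mortality reduction: ~{1 - pooled_rr:.0%}")  # ~12%
```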
We adjust this downward to account for some of the RCTs including additional interventions beyond chlorination, like hygiene programs (which could overstate the effect of chlorination alone). We then adjust for two differences between the programs we’re considering and the RCTs:
- Differences across countries in the share of deaths we think chlorination could plausibly avert (e.g., in countries where enteric infection is a smaller share of deaths than in the trial settings, we estimate a smaller effect).
- Differences in the amount of chlorination programs provide (e.g., we think in-line chlorination provides more chlorination than the programs studied in the trials, and so has a larger effect).
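Numerically, each adjustment just scales the meta-analytic estimate. A toy version, where all three adjustment factors are invented for illustration rather than taken from our models:

```python
meta_effect = 0.12  # ~12% mortality reduction from the meta-analysis

# All three factors below are hypothetical, for illustration only.
chlorination_only_adj = 0.9   # downward: some RCTs bundled hygiene programs
cause_of_death_ratio = 0.8    # target country has fewer water-related deaths
dosage_ratio = 1.2            # program delivers more chlorination than trials

program_effect = (meta_effect * chlorination_only_adj
                  * cause_of_death_ratio * dosage_ratio)
print(f"Program-specific estimate: ~{program_effect:.0%}")  # ~10%
```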
We also compare our best guess to a “plausibility cap”: an upper bound on what we think the effect of chlorination on mortality could be. This is (I think) the 17% figure the post mentions. In countries where we considered funding in-line chlorination, for example, we guess that if chlorination reduces deaths from infectious causes by at most ~25% (its effect on diarrhea) and infectious diseases account for ~70% of under-5 mortality in these countries, then a plausible maximum reduction in all-cause mortality is ~17% (25% × 70%). This plausibility cap rests on some really uncertain assumptions (e.g., what share of deaths could plausibly be affected by chlorination, and by how much?), and we don’t have a lot of confidence in it. We explain more of our rationale for this cap here. In the countries we’ve looked at, though, the plausibility cap tends to exceed our initial best guess, so it doesn’t end up changing our bottom-line estimates.
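The cap arithmetic itself is simple. Here’s a worked version; the 25% and 70% are the assumptions named above, and the 12% best guess is illustrative:

```python
diarrhea_reduction = 0.25  # assumed max effect of chlorination on infectious deaths
infectious_share = 0.70    # assumed share of under-5 deaths from infectious disease

plausibility_cap = diarrhea_reduction * infectious_share  # 0.175, i.e. ~17%

best_guess = 0.12  # e.g., the in-line chlorination estimate above
final_estimate = min(best_guess, plausibility_cap)
print(f"Cap: ~{plausibility_cap:.0%}; final estimate: ~{final_estimate:.0%}")
```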
There’s more detail in our intervention report on water quality and this blog post on why we changed our mind on chlorination programs.
These mortality effect estimates are really uncertain, though, so we’ve funded follow-up research (as Dan alluded to in his comment).
Our best guess is still much larger than we’d expect based on the effect of chlorination on diarrhea alone (implying chlorination averts ~3 deaths from non-diarrhea causes for each death averted due to diarrhea) and relies on a lot of judgment calls.
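To see where a ratio like ~3:1 comes from, here’s a back-of-envelope calculation. The diarrhea share and diarrhea effect below are round hypothetical inputs chosen for illustration, not our actual parameters:

```python
total_effect = 0.12        # best-guess reduction in all-cause mortality
diarrhea_share = 0.12      # hypothetical share of under-5 deaths from diarrhea
diarrhea_reduction = 0.25  # hypothetical effect of chlorination on diarrhea deaths

diarrhea_effect = diarrhea_share * diarrhea_reduction  # 3% of all deaths
non_diarrhea_effect = total_effect - diarrhea_effect   # 9% of all deaths

ratio = non_diarrhea_effect / diarrhea_effect
print(f"Non-diarrhea : diarrhea deaths averted ~ {ratio:.0f} : 1")  # ~3 : 1
```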
Because of that, we recently made a grant of $1.8 million to Michael Kremer and colleagues at the Development Innovation Lab at the University of Chicago to launch an additional RCT in Kenya and to scope larger trials in Nigeria and India.
More on why we made the grant is on our grant page.
Thanks again for boosting this question; it’s something we’ve been thinking about a lot, and I’m glad it’s getting more attention. I think we’d be open to hearing more thoughts on how we could learn more about the extent to which chlorination affects mortality (and why), since we’re continuing to explore further grants for chlorination programs.
Hey, thanks for the question! I’m Alex Cohen, a researcher at GiveWell, and I wanted to chime in.
We did say we’d include a 25th/75th percentile range on bottom-line cost-effectiveness (in addition to the one-way sensitivity checks). We haven’t added that yet, and we should. We ran into some issues running the full sensitivity analyses (as opposed to the one-way sensitivity checks we do have), and we prioritized publishing updated intervention reports and cost-effectiveness analyses without them.
We’ll add those percentile ranges to our top charity intervention reports (so the simple cost-effectiveness analyses will also include a bottom-line 25/75 cost-effectiveness range, in addition to the one-way sensitivity checks) and ensure that new intervention reports and grant pages include them before publishing. We think it’s worth emphasizing how uncertain our cost-effectiveness estimates are, and this is one way to do that (though it has limitations).
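For readers wondering what producing that range involves, one standard approach is Monte Carlo: sample each uncertain input, recompute the bottom line, and read off the 25th/75th percentiles. This sketch is illustrative only; the toy model and the distributions are made up, not GiveWell’s:

```python
import random

def bottom_line(effect, cost_per_person):
    # Toy cost-effectiveness model: value per dollar, in arbitrary units.
    return effect / cost_per_person

random.seed(0)
samples = sorted(
    bottom_line(random.gauss(0.12, 0.04),         # uncertain mortality effect
                random.lognormvariate(0.0, 0.3))  # uncertain cost per person
    for _ in range(10_000)
)

p25 = samples[len(samples) // 4]
p75 = samples[3 * len(samples) // 4]
print(f"25th/75th percentile range: {p25:.3f} to {p75:.3f}")
```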
We’re still not planning to base our decision-making on this uncertainty in bottom-line cost-effectiveness (as the “Change Our Mind Contest” post recommended) or to model uncertainty on every parameter. To defend against the Optimizer’s Curse, we prefer skeptically adjusting our inputs rather than making an all-in adjustment to bottom-line cost-effectiveness. We explain why in the uncertainty post.
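For anyone unfamiliar with the Optimizer’s Curse: if you rank options by noisy estimates and fund the top one, that estimate is biased upward. Shrinking estimates toward a skeptical prior, which is roughly the spirit of adjusting inputs, reduces that bias. A toy simulation with made-up numbers (not our actual method):

```python
import random

random.seed(0)
N_OPTIONS, N_TRIALS = 50, 10_000
PRIOR_MEAN, SHRINK = 1.0, 0.5  # shrink estimates halfway toward the prior

raw_gap = shrunk_gap = 0.0
for _ in range(N_TRIALS):
    true_values = [random.gauss(1.0, 0.5) for _ in range(N_OPTIONS)]
    estimates = [v + random.gauss(0.0, 1.0) for v in true_values]

    # Pick the option with the highest raw estimate (the "optimizer").
    winner = max(range(N_OPTIONS), key=lambda i: estimates[i])
    shrunk = PRIOR_MEAN + SHRINK * (estimates[winner] - PRIOR_MEAN)

    raw_gap += estimates[winner] - true_values[winner]
    shrunk_gap += shrunk - true_values[winner]

print(f"Mean (estimate - true value), raw:    {raw_gap / N_TRIALS:+.2f}")
print(f"Mean (estimate - true value), shrunk: {shrunk_gap / N_TRIALS:+.2f}")
```

Run it and the raw winner’s estimate overshoots its true value by a wide margin on average, while the shrunk estimate overshoots far less; that gap is the curse the skeptical adjustments are meant to counteract.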
Really appreciate you raising this. Sorry this has taken so long, and grateful for the nudge!