Thanks Ben :) Looking for a better study—here’s something with an imperfect but, on the surface, not-too-bad RCT protocol—which found pretty much no difference between full healthcare coverage and no coverage in the context of Ghana.
Such a shame these important questions are littered with such bad work.
Effect of Removing Direct Payment for Health Care on Utilisation and Health Outcomes in Ghanaian Children: A Randomised Controlled Trial
The Ghana study didn’t have enough statistical power to detect a change in all-cause mortality, even if the intervention caused one. (Heck, it was barely powered to detect a change in anemia rates: they powered it to detect an absolute difference of 4% in anemia prevalence, but the anemia prevalence in the control group was only 3%!)
The authors don’t even bother to give a confidence interval on the odds ratio for mortality that I can find, but you can extrapolate by taking the CI for anemia (0.66-1.67) and noting that mortality was even rarer than anemia in their sample. The observed hazard ratio in Burkina Faso (~0.5) is probably within the confidence interval of the Ghana study.
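To make the extrapolation concrete, here’s a rough sketch of the logic with made-up counts that are only in the right ballpark (the study doesn’t report the exact 2×2 tables, so these numbers are illustrative, not the study’s data). The standard Wald interval for a log odds ratio has standard error √(1/a + 1/b + 1/c + 1/d), so rarer events mean wider intervals:

```python
import math

def wald_ci_odds_ratio(a, b, c, d, z=1.96):
    """Approximate 95% Wald CI for the odds ratio of a 2x2 table:
    a/b = events/non-events in one arm, c/d = events/non-events in the other."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical counts, ~1000 children per arm:
# anemia at ~3% -> ~30 events per arm; CI comes out near the reported (0.66, 1.67)
print(wald_ci_odds_ratio(30, 970, 30, 970))
# mortality rarer, say ~1.5% -> ~15 events per arm; the CI widens and contains 0.5
print(wald_ci_odds_ratio(15, 985, 15, 985))
```

With half as many events, the interval stretches to roughly (0.49, 2.06), so a true ratio of ~0.5 (the Burkina Faso point estimate) can’t be ruled out.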
And, even if this study had found a convincing demonstration of no effect, it wouldn’t necessarily be inconsistent with the other study. Remember, they’re different programs in different countries; it may be that health insurance works well in Burkina Faso but not in Ghana for some reason or other.
I would be wary of calling either of the studies “bad work.” The authors seem mostly aware of the relevant caveats (except perhaps the issue of power in the Ghana study); the problem is more that it’s very easy to over-interpret or falsely generalize. Admittedly most development economics studies (and studies in general) could do better at conveying the limitations of their work, but it’s not like there are glaring flaws in the studies themselves.
OK, thanks for the analysis. I would have assumed that 1,000+ in each arm was enough to show something if something was there, but I guess I didn’t take into account the rarity of childhood mortality. Soz.
Yup. The way in which sample size matters is that it shrinks the confidence intervals, but other things affect their width as well. So the best way to tell whether the sample size is large enough is to look at the confidence intervals directly.
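The sample-size point can be sketched numerically: for a proportion, the Wald CI width scales like 1/√n, so quadrupling the sample only halves the interval. The 1.5% event rate below is a made-up stand-in for a rare outcome like childhood mortality:

```python
import math

def wald_ci_width(p, n, z=1.96):
    """Width of the approximate 95% Wald CI for a proportion p estimated from n subjects."""
    return 2 * z * math.sqrt(p * (1 - p) / n)

# Rare outcome (1.5%, hypothetical): CI width at various sample sizes
for n in (1000, 4000, 16000):
    print(n, round(wald_ci_width(0.015, n), 4))
```

At n = 1000 the interval is about ±0.75 percentage points around a 1.5% rate, i.e. the estimate could plausibly be off by half its own value, which is why 1,000 per arm isn’t enough for mortality.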
There’s nothing inherently wrong with observational studies. RCTs and observational studies may be on different rungs (or tiers) in the hierarchy of evidence, but that only means we have to put their conclusions into the proper context. This, of course, is not always easy for laymen like us.