Nick, just wanted to share that I strongly held the view that RCTs usually produce smaller effects than observational studies, until I read a 2024 Cochrane review examining this exact question. In short, I think it supports the prior that well-conducted observational studies (at least in healthcare) are not much more likely to overestimate effects than RCTs. To your credit, you explicitly include “larger cohorts” among your examples of “better studies”, so perhaps you already knew this, but it was news to me. (Of course, I also agree that small and/or low-quality studies, regardless of methodology, should be taken with caution.)
Hey, yes, that’s a great review. I’m not sure how relevant it is to this development stuff, though, because:
1. it only accepts really high-quality observational studies, and
2. it focuses on human health. We’re reasonably good at controlling for confounders with humans, but we have very little clue how to do that with development interventions.
I would love a similar review for development studies, but I doubt there would be enough good-quality research to do a comparable analysis.
I have a paper that can help answer this, using JPAL and IPA studies! One caveat: if you think observational-study overestimates come from selection bias during the publication process, our result doesn’t say anything about that.
https://www.jondequidt.com/pdfs/Lalonde30.pdf
“First, we find that there is little bias on average. Using our best-performing observational method (DDML), there is a statistically insignificant and modest negative mean bias of −0.025 standard deviations. This implies that observational studies do not systematically over- or underestimate the welfare impact of the programs they evaluate.”
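For anyone unfamiliar with the quoted metric: the “mean bias” here is just the average gap between the observational estimate and the experimental benchmark for the same program, in standard-deviation units. A minimal sketch of that calculation, with entirely made-up numbers (the paper’s actual data and DDML estimator are not reproduced here):

```python
# Hypothetical paired effect estimates for four programs, in SD units.
# rct  = experimental benchmark estimates
# obs  = observational estimates for the same programs (e.g. from DDML)
rct = [0.10, 0.25, -0.05, 0.30]
obs = [0.08, 0.27, -0.06, 0.26]

# Bias for each program = observational estimate minus experimental benchmark;
# the headline statistic is the mean of these gaps.
biases = [o - r for o, r in zip(obs, rct)]
mean_bias = sum(biases) / len(biases)
```

A mean_bias near zero (as in the quoted −0.025 SD) says the observational method is not systematically over- or underestimating relative to the RCT benchmarks, though individual programs can still be badly mis-estimated in either direction.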