(Attention conservation notice: rambling in public)
A striking throwaway remark, given its context:
There is remarkably little evidence that evidence-based medicine leads to better health outcomes for patients, though this is absence of (good) evidence rather than (good) evidence of absence of effect.
It’s striking given that it comes from this book on Thailand’s Health Intervention and Technology Assessment Program (HITAP) (ch 1, pg 22), albeit perhaps understandable given the authors’ stance that evidence is necessary but not sufficient to determine the best course of action (to treat a patient, to design a social insurance scheme, etc.), which seems completely unobjectionable.
That said, I did wonder about the first half of the quoted throwaway remark, so I asked Elicit; its top-4 paper summary is:
Evidence-based medicine (EBM) has been shown to improve patient outcomes and healthcare efficiency. A study in a Spanish hospital found that an EBP unit had lower mortality rates (6.27% vs 7.75%) and shorter lengths of stay (6.01 vs 8.46 days) compared to standard practice (Emparanza et al., 2015). EBM can reduce clinical uncertainty, leading to better patient outcomes, improved population health, and reduced costs (Molony & Samuels, 2012). The implementation of EBM is expected to enhance the quality of care as part of healthcare reform initiatives (Hughes, 2011). Additionally, EBM has paralleled the growth of patient empowerment, supporting informed decision-making by integrating the best available research with individual patient values and concerns (Hendler, 2004). While challenges remain in translating EBM principles for public consumption, its adoption has the potential to significantly improve healthcare delivery and patient outcomes.
although the summary didn’t include these papers, which it also listed in the top 10:
Bahtsevani et al 2004’s systematic review (weak evidence of limited findings)
Every-Palmer & Howick 2014’s paper with these dramatic sentences in their abstract:
“In this paper we suggest that EBM’s potential for improving patients’ health care has been thwarted by bias in the choice of hypotheses tested, manipulation of study design and selective publication.”
“Evidence for these flaws is clearest in industry-funded studies. We argue EBM’s indiscriminate acceptance of industry-generated ‘evidence’ is akin to letting politicians count their own votes. Given that most intervention studies are industry funded, this is a serious problem for the overall evidence base. Clinical decisions based on such evidence are likely to be misinformed, with patients given less effective, harmful or more expensive treatments.”
“More investment in independent research is urgently required. Independent bodies, informed democratically, need to set research priorities. We also propose that evidence rating schemes are formally modified so research with conflict of interest bias is explicitly downgraded in value.”
Shaw et al 2007’s dramatically-titled Why Evidence Based Medicine May Be Bad for You and Your Patients (“This review argues that the basis of EBM is so deeply flawed that in many cases it cannot usefully inform clinical practice, reflected in fact by the current majority outcome of most trials as “no-blood,” or no result”)
With the proviso that I’m a layperson w.r.t. medicine and healthcare, and that I didn’t ask Elicit further questions or really dig further into this at all — I find myself mostly unmoved by these papers & reviews, while the younger me of (say) a decade ago would’ve epistemically panicked. Partly it’s that they aren’t really contra “using evidence to inform medicine” per se: to oversimplify a bit, Bahtsevani et al recommend more evidence generation, Every-Palmer & Howick recommend less industry-biased evidence generation, and Shaw et al argue that other less legible-than-RCT types of evidence should occupy more mindshare than they did back in ’07 (there’s a loose parallel here to the more recent growth vs randomista debate in dev econ). Partly it’s that I suspect there’s some talking past each other, which only becomes clear when one digs into the nuts-and-bolts. Partly it’s that I think the general underlying ethos of “using evidence to inform medicine” is a lot more robust than any particular instantiation of it (e.g. using only empirical data from systematic reviews of RCTs), sort of like how cluster thinking > sequence thinking for decision-making, or like how foxes have weak views strongly held (side note: in that essay’s framing I used to be a hedgehog, hopefully I’m now more fox than degenerate cactus). Partly it’s that I’ve “seen this before” with other topics, cf. Scott Alexander’s many deep dives. Maybe I’m just getting old…
I haven’t looked in detail, but my quick comment would be that these studies seem to basically be comparing extremely careful following of evidence-based medicine vs. “normal medical practice”, which is like 90%+ based on evidence anyway. Standard medical training and registered medical practice in most of the world closely follows the evidence; it would be very difficult (maybe impossible) to practise “outside” of the evidence. So not finding a huge difference between these two ways of practising isn’t so surprising.