I think your distinction rests on an overly simplistic description of ‘evidence based medicine’, and that dividing effective altruism into camps is likewise a false dichotomy.
(TL;DR: EBM doesn’t equal total reliance on meta-analyses. Evidence based medicine still requires reason. EBM without reason is just as dangerous as reason without evidence.)
As most people here know, in EBM the highest standard of evidence is a meta-analysis of well-conducted randomised, controlled, double-blind trials. Unfortunately, decisions that can be supported by evidence of such quality are relatively few. There are many reasons for this: trials are difficult to design and run in such a way that they actually answer the question correctly; when designed well they are expensive to run; not all trial data are reported; and there is a long lag time between identifying a question, conceiving and conducting the trials, analysing and reporting the data, and considering how this changes the weight of available evidence.
In general, trials are most often run for investigations or interventions that can make somebody money, or where regulation demands it for something novel. Drugs and devices need trials to be licensed, but once licensed they can be sold and used ‘off licence’ for other conditions without further RCTs. It is considered unethical to withdraw what is currently considered standard best practice and replace it with a placebo in a trial. For these and other reasons, a huge amount of medical decision making remains unsupported by the highest quality evidence.
However, we still need to act. We can’t put off our patients and ask them to come back with their perforated bowel once somebody has done a controlled trial of operating on a burst colon vs placebo. In the absence of the highest quality evidence, medical professionals still practise in a way that considers and respects evidence. We consider lesser forms of evidence, we weigh the likelihoods based on biological theories, and we update our beliefs as new evidence comes to light. It all sounds a bit Bayesian, doesn’t it?
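To make that Bayesian flavour concrete, here is a minimal sketch. The numbers and the `update` helper are invented purely for illustration, not taken from any real trial; the point is just that each piece of evidence shifts a belief in proportion to its strength.

```python
# Minimal sketch of Bayesian belief updating (illustrative numbers only).
# Each piece of evidence is summarised by a likelihood ratio:
# LR = P(evidence | intervention works) / P(evidence | it doesn't).

def update(prior: float, likelihood_ratio: float) -> float:
    """Return P(intervention works) after seeing one piece of evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.2                   # sceptical prior from biological theory
belief = update(belief, 3.0)   # a supportive but confounded cohort study
belief = update(belief, 10.0)  # a well-conducted RCT
print(f"posterior belief: {belief:.2f}")  # ~0.88 after both updates
```

Note that a bad trial is just evidence with a likelihood ratio near 1: on this picture it barely moves the posterior, which matches how clinicians actually treat it.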
In fact, the physicians I know who are most committed to guiding their practice with evidence and reason are prepared to act against the results of randomised clinical trials. As a recent example: a US-based RCT showed that EGDT (early goal-directed therapy), which targets the treatment of septic patients (ie those very sick from infection) at particular physiological numbers, was superior to usual care (ie being guided by the treating doctor). In fact it demonstrated a massive 16% absolute decrease in 30-day mortality. UK, European and US centres set about trying to replicate it, but given what was at stake this required huge and coordinated efforts. It took over a decade, and all three multicentre studies showed… absolutely no benefit.
http://blogs.nejm.org/now/index.php/the-final-nail-in-early-goal-directed-therapys-coffin/2015/03/24/
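For a sense of scale, a 16% absolute risk reduction is an enormous claimed effect. The mortality figures below are assumptions chosen only to match the 16% figure quoted above (the actual trial arms aren’t given here); the sketch shows the standard absolute-risk-reduction and number-needed-to-treat arithmetic:

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT).
# Mortality figures are assumed, chosen to match the 16% ARR quoted above.

control_mortality = 0.46       # 30-day mortality with usual care (assumed)
intervention_mortality = 0.30  # 30-day mortality with EGDT (assumed)

arr = control_mortality - intervention_mortality  # 0.16
nnt = 1 / arr                                     # 6.25

print(f"ARR = {arr:.0%}, NNT = {nnt:.1f}")
# Treating ~6-7 patients to prevent one death would be a huge effect,
# which is part of why the result demanded replication.
```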
In the meantime, what should the evidence-based practitioner have done? A shallow answer would be to do what the evidence said: immediately change practice to EGDT, and only update when a further RCT countered the result. But that would have been a costly mistake that subjected patients to unnecessary invasive monitoring. The results of the first trial were counterintuitive to many experts, especially those who had seen fashions for ‘treating the numbers’ (rather than the patient) arise and be discredited over many years. Most senior ED/ITU doctors did not follow EGDT in the decade between the original trial and its attempted replications, because the trial data were not enough to cause them to update their practice.
It later came out that in the original trial there were several systematic ways in which the intervention group differed from the usual care group, but this was not fully captured in the study report. Further, the mortality in the first trial was much higher than we see in the NHS. The overall ‘story’ is probably that treatment by numbers is not better than getting the basics right, which, overall, we do.
Reason and evidence aren’t separate camps. They are both fundamentally important when you act in the real world.
DOI (declaration of interest): studying evidence based medicine on and off since 1999.
Bernadette,
Thank you for your very informative response. I must admit that my knowledge of EBM is much more limited than yours and is primarily Wikipedia-based.
The lines which particularly led me to believe that EBM favoured formal approaches rather than doctors’ intuitions were:
“Although all medicine based on science has some degree of empirical support, EBM goes further, classifying evidence by its epistemologic strength and requiring that only the strongest types (coming from meta-analyses, systematic reviews, and randomized controlled trials) can yield strong recommendations; weaker types (such as from case-control studies) can yield only weak recommendations”
“Whether applied to medical education, decisions about individuals, guidelines and policies applied to populations, or administration of health services in general, evidence-based medicine advocates that to the greatest extent possible, decisions and policies should be based on evidence, not just the beliefs of practitioners, experts, or administrators.”
Criticism of EBM: “Research tends to focus on populations, but individual persons can vary substantially from population norms, meaning that extrapolation of lessons learned may founder. Thus EBM applies to groups of people, but this should not preclude clinicians from using their personal experience in deciding how to treat each patient.”
Perhaps the disagreement comes from my unintentional implication that the two camps were diametrically opposed to each other.
I agree that they are “both fundamentally important when you act in the real world” and that evidence based giving / evidence based medicine are not the last word on the matter and need to be supplemented by reason. At the same time though, I think there is an important distinction between maximising expected utility and being averse to ambiguity.
For example, to the best of my knowledge, the tradeoff between donating to SCI ($1.23 per treatment) and the Deworm the World Initiative ($0.50 per treatment) is that DWI has demonstrated higher cost-effectiveness but with a wider confidence interval (less of a track record). Interestingly, this actually sounds similar to your EGDT example. I therefore donate to SCI because I prefer to be confident in the effect. I think this distinction also applies to XRisk vs. development.
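A rough sketch of how that tradeoff might be modelled: the costs per treatment are the figures quoted above, but the uncertainty spreads (`relative_sd`) and the lognormal model are invented purely for illustration.

```python
import random

# Hypothetical model of the SCI vs Deworm the World tradeoff described above.
# Costs per treatment are the quoted figures; the spread of each estimate
# is invented purely for illustration.

random.seed(0)

def simulate(cost_per_treatment: float, relative_sd: float, n: int = 100_000):
    """Draw plausible 'treatments per $1000' values, modelling our
    uncertainty about the true cost-effectiveness."""
    draws = []
    for _ in range(n):
        cost = random.lognormvariate(0, relative_sd) * cost_per_treatment
        draws.append(1000 / cost)
    return draws

sci = simulate(1.23, relative_sd=0.2)  # narrow interval: longer track record
dwi = simulate(0.50, relative_sd=1.0)  # wide interval: less of a track record

for name, d in [("SCI", sci), ("DWI", dwi)]:
    d.sort()
    mean = sum(d) / len(d)
    p5 = d[int(0.05 * len(d))]  # pessimistic (5th percentile) outcome
    print(f"{name}: mean {mean:.0f} treatments/$1000, 5th percentile {p5:.0f}")

# With these assumed spreads, a pure expected-value maximiser picks DWI
# (higher mean), while someone averse to ambiguity may prefer SCI, whose
# pessimistic outcome is better despite the lower mean.
```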
Sorry for being slow to reply, James.
The methods of EBM do absolutely favour formal approaches and concrete results. However (and partly because of some of the pitfalls you describe), it’s relatively common to find you have no high quality evidence that specifically applies to inform your decision. It is also relatively common to find only poor quality evidence (such as a badly constructed trial, or very confounded cohort studies). If those constitute the best available evidence, a strict reading of the phrase ‘to the greatest extent possible, decisions and policies should be based on evidence’ would imply that decisions should be founded on that dubious evidence. In practice, however, I think most doctors who are committed to EBM would not change their practice on the basis of a bad trial.
Regarding tradeoffs between maximising expected good and certainty of results (which I guess amounts to maximising the minimum you achieve), I agree that’s a point where people come down on different sides. I don’t think it strictly divides causes (because, as you say, one can lean towards maximising expected utility within global poverty work), though the overlap between those who favour maximising expectation and those who think existential risk is the best cause to focus on is probably high. I think this is actually going to be a topic of panel discussion at EA Global Oxford, if you’re going?
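As a toy illustration of how the two decision rules can pull apart (the options and numbers are entirely invented):

```python
# Toy example (invented numbers): two giving options, each a list of
# (probability, outcome) pairs in arbitrary 'units of good'.
options = {
    "safe bet":  [(0.9, 10), (0.1, 8)],
    "long shot": [(0.9, 0),  (0.1, 200)],
}

for name, outcomes in options.items():
    expected = sum(p * x for p, x in outcomes)
    worst = min(x for _, x in outcomes)
    print(f"{name}: expected value {expected:.1f}, worst case {worst}")

# Expected value favours the long shot (EV 20 vs 9.8); maximin
# (maximise the minimum) favours the safe bet (worst case 8 vs 0).
```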
Not to imply that you were implying otherwise, but I don’t think that the ‘evidence camp’ generally sees itself as maximising the minimum you achieve, or as disagreeing with maximising expected good. Instead it often disagrees with specific claims about what does the most good, particularly ones based on a certain sort of expected value calculation.
(In a way this only underscores your point that there isn’t that sharp a divide between the two approaches, and that we need to take into account all the evidence and reasons that we have. As you say, we often don’t have RCTs to settle things, leaving everyone with the tricky job of weighting different forms of evidence. There will be disagreements about that, but they won’t look like a sharp, binary division into two opposed ‘camps’. Describing what actually happens in medicine seems very helpful to understanding this.)