Could you say a bit more about what you mean by “should not have maintained once they were made aware of them” in point 2? As you characterize below, this is an org “making a funding request in a financially precarious position,” and in that context I think it’s even more important than usual to be clear about how HLI has “maintained” its “mistakes” “once they were made aware of them.” Furthermore, I think the claim that HLI has “maintained” them is an important crux for your final point.
Example: I do not like that HLI’s main donor advice page lists the 77 WELLBY per $1,000 estimate with only a very brief and neutral statement that “Note: we plan to update our analysis of StrongMinds by the end of 2023.” There is a known substantial, near-typographical error underlying that analysis:
The first thing worth acknowledging is that he pointed out a mistake that substantially changes our results. [ . . . .] He pointed out that Kemp et al., (2009) finds a negative effect, while we recorded its effect as positive — meaning we coded the study as having the wrong sign.
[ . . . .]
This correction would reduce the spillover effect from 53% to 38% and reduce the cost-effectiveness comparison from 9.5 to 7.5x, a clear downwards correction.
While I’m sympathetic to HLI’s small size and desire to produce a more comprehensive updated analysis, I don’t think it’s appropriate to be quoting numbers from an unpatched version of the CEA over four months after the error was discovered. (I’d be somewhat more flexible if this were based on new information rather than HLI’s coding error, and/or if the difference didn’t flip the recommendation for a decent percentage of would-be donors: deprivationists who believe the neutral point is less than 1.56 or so).
Hello Jason,

With apologies for the delay. I agree with you that I am asserting HLI’s mistakes have further ‘aggravating factors’, which I also assert invite highly adverse inference. I had hoped the links I provided gave clear substantiation, but demonstrably not (my bad). Hopefully my reply to Michael makes them somewhat clearer, but in case not, I give a couple of examples below with the best explanation I can muster.
I will also be linking and quoting extensively from the Cochrane handbook for systematic reviews—so hopefully, even if my attempt to clearly explain the issues fails, a reader can satisfy themselves that my view of them agrees with expert consensus. (Rather than, say, “Cantankerous critic with idiosyncratic statistical tastes flexing his expertise to browbeat the laity into acquiescence”.)
0) Per your remarks, there are various background issues around reasonableness, materiality, timeliness, etc. I think my views basically agree with yours. In essence: I think HLI is significantly ‘on the hook’ for work (such as the meta-analysis) it relies upon to make recommendations to donors—who will likely be taking HLI’s representations on its results and reliability (cf. HLI’s remarks about its ‘academic research’, ‘rigour’ etc.) on trust. Discoveries which threaten the ‘bottom line numbers’ or overall reliability of this work should be addressed with urgency and robustness appropriate to their gravity. “We’ll put checking this on our to-do list” seems fine for an analytic choice which might be dubious, but of unclear direction and small expected magnitude. As you say, a typo which, when corrected, reduces the bottom-line efficacy by ~20% should be fixed promptly.
The two problems I outlined 6 months ago should each have prompted withdrawal/suspension of both the work and the recommendation unless and until they were corrected.[1] Instead, HLI has not made appropriate corrections, and persists in misguiding donors and misrepresenting the quality of its research on the basis of work it has partly acknowledged (and which reasonable practitioners would overwhelmingly agree) was gravely compromised.[2]
1.0) Publication bias/Small study effects
It is commonplace in the literature for smaller studies to show different (typically larger) effect sizes than large studies. This is typically attributed to a mix of factors which differentially inflate effect size in smaller studies (see), perhaps the main one being publication bias: although big studies are likely to be published “either way”, investigators may not finish (or journals may not publish) smaller studies reporting negative results.
It is extremely well recognised that these effects can threaten the validity of meta-analysis results. If you are producing something (very roughly) like an ‘average effect size’ from your included studies, the studies being selected for showing a positive effect means the average is inflated upwards. This bias is very difficult to reliably adjust for or ‘patch’ (more later), but it can easily be large enough to mean “Actually, the treatment has no effect, and your meta-analysis is basically summarizing methodological errors throughout the literature”.
Hence most work on this topic stresses the importance of arduous efforts at prevention (e.g. trying really hard to find ‘unpublished’ studies) and diagnosis (i.e. carefully checking for statistical evidence of this problem) rather than ‘cure’ (see e.g.). If a carefully conducted analysis nonetheless finds stark small study effects, this—rather than the supposed ~‘average’ effect—would typically be (and should definitely be) the main finding: “The literature is a complete mess—more, and much better, research needed”.
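To make the mechanism concrete, here is a toy simulation in Python (entirely made-up parameters; nothing below is HLI’s data or model) of a literature where the true effect is zero but ‘significant’ results are far more likely to be published:

```python
# Toy simulation (illustrative only - not HLI's data or model): a literature of
# mostly small trials of a treatment whose true effect is zero, where
# 'statistically significant' results are much more likely to get published.
import numpy as np

rng = np.random.default_rng(0)
published_effects, published_ses = [], []

for _ in range(500):
    n = rng.integers(10, 100)            # per-arm sample size (mostly small trials)
    se = np.sqrt(2.0 / n)                # rough SE of a standardised mean difference
    observed = rng.normal(0.0, se)       # true effect is zero; this is what the study sees
    significant = observed / se > 1.64
    if significant or rng.random() < 0.15:   # null results mostly stay in the file drawer
        published_effects.append(observed)
        published_ses.append(se)

effects = np.array(published_effects)
ses = np.array(published_ses)
weights = 1.0 / ses**2
pooled = (weights * effects).sum() / weights.sum()
print(f"{len(effects)} of 500 studies published; naive pooled estimate = {pooled:.2f}")
# The pooled 'average effect' comes out clearly positive even though the true
# effect is zero and no individual study did anything wrong.
```

Note that, in the published subset, the smallest studies need the biggest observed effects to cross the significance threshold, which is exactly the small-study asymmetry discussed below.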
As in many statistical matters, a basic look at your data can point you in the right direction. For meta-analysis, the standard starting point is a forest plot:
To orientate: each row is a study (presented in order of increasing effect size), and the horizontal scale is effect size (where further to the right = greater effect size favouring the intervention). The horizontal bar for each study gives the confidence interval for the effect size, with the middle square marking the central estimate (also given in the rightmost column). The diamond right at the bottom is the pooled effect size—the (~~)[3] average effect across studies mentioned earlier.
Here, the studies are all over the map, and many do not overlap with one another, nor with the pooled effect size estimate. In essence, dramatic heterogeneity: the studies are reporting very different effect sizes from one another. Heterogeneity is basically a fact of life in meta-analysis, but a forest plot like this invites curiosity (or concern) about why effects are varying quite this much. [I’m going to be skipping discussion of formal statistical tests/metrics for things like this for clarity—you can safely assume a) yes, you can provide a more rigorous statistical assessment of ‘how much’ besides ‘eyeballing it’, although visually obvious things are highly informative; b) the things I mention you can see are indeed (highly) statistically significant, etc.]
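For readers who prefer to see the anatomy rather than just read about it, a minimal sketch of this kind of plot (Python/matplotlib, with made-up effect sizes and standard errors rather than HLI’s studies) looks something like this:

```python
# Minimal sketch of the forest-plot layout described above, with made-up
# effect sizes and standard errors (not HLI's studies).
import numpy as np
import matplotlib.pyplot as plt

effects = np.array([-0.1, 0.1, 0.25, 0.4, 0.6, 0.9, 1.4])   # per-study effect sizes
ses = np.array([0.12, 0.10, 0.20, 0.15, 0.25, 0.30, 0.35])  # their standard errors

weights = 1 / ses**2                                  # inverse-variance weights
pooled = (weights * effects).sum() / weights.sum()    # pooled ('diamond') estimate
pooled_se = np.sqrt(1 / weights.sum())

ys = np.arange(len(effects), 0, -1)                   # one row per study, top to bottom
fig, ax = plt.subplots()
ax.errorbar(effects, ys, xerr=1.96 * ses, fmt="s", color="k")            # squares + 95% CIs
ax.errorbar([pooled], [0], xerr=[1.96 * pooled_se], fmt="D", color="b")  # pooled 'diamond'
ax.axvline(0, linestyle=":")                          # line of no effect
ax.set_yticks(list(ys) + [0])
ax.set_yticklabels([f"Study {i + 1}" for i in range(len(effects))] + ["Pooled"])
ax.set_xlabel("Effect size (further right = greater effect favouring the intervention)")
plt.show()
```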
There are some hints from this forest plot that small study effects could have a role to play. Although very noisy, larger studies (those with narrower horizontal lines, because a bigger study means less uncertainty in effect size) tend to be higher up the plot and have smaller effects. There is another plot designed to look at this better—a funnel plot.
To orientate: each study is now a point on a scatterplot, with effect size again on the x-axis (right = greater effect). The y-axis is now the standard error: bigger studies have greater precision, and so lower sampling error, so are plotted higher on the y axis. Each point is a single study—all being well, the scatter should look like a (symmetrical) triangle or funnel like those being drawn on the plot.
All is not well here. The scatter is clearly asymmetric and sloping to the right—smaller studies (towards the bottom of the graph) tend towards greater effect sizes. The lines being drawn on the plot make this even clearer. Briefly:
The leftmost ‘funnel’ with shaded wings is centered on an effect size of zero (i.e. zero effect). The white middle triangle covers findings which would give a p value of > 0.05, and the shaded wings correspond to a p value between 0.05 (‘statistically significant’) and 0.01: it is an upward-pointing triangle because bigger studies can detect smaller differences from zero as ‘statistically significant’ than smaller ones can. There appears to be clustering in the shaded region, suggestive that studies may be being tweaked to get them ‘across the threshold’ of statistically significant effects.
The rightmost ‘funnel’ without shading is centered on the pooled effect estimate (0.5). Within the triangle is where you would expect 95% of the scatter of studies to fall in the absence of heterogeneity (i.e. there was just one true effect size, and the studies varied from this just thanks to sampling error). Around half are outside this region.
The red dashed line is the best fit line through the scatter of studies. If there weren’t small study effects, it would be basically vertical. Instead, it slopes off heavily to the right.
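Again purely for illustration, here is a rough sketch of those three elements (Python/matplotlib, on synthetic data deliberately generated with a small-study effect; not HLI’s studies):

```python
# Sketch of the funnel-plot elements just described, on synthetic data that is
# deliberately built with a small-study effect (again, not HLI's studies).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
ses = rng.uniform(0.05, 0.45, 60)                       # study standard errors
effects = 0.15 + 1.2 * ses + rng.normal(0, 0.15, 60)    # smaller studies -> bigger effects

weights = 1 / ses**2
pooled = (weights * effects).sum() / weights.sum()

fig, ax = plt.subplots()
ax.scatter(effects, ses)
ax.invert_yaxis()                                       # bigger (more precise) studies at the top
grid = np.linspace(0.001, 0.45, 100)

# Funnel centred on zero, with 'wings' shaded for p between 0.05 and 0.01.
ax.plot(1.96 * grid, grid, "k-", -1.96 * grid, grid, "k-")
ax.fill_betweenx(grid, 1.96 * grid, 2.58 * grid, alpha=0.3)
ax.fill_betweenx(grid, -2.58 * grid, -1.96 * grid, alpha=0.3)

# 95% funnel centred on the pooled estimate (where studies 'should' fall absent heterogeneity).
ax.plot(pooled - 1.96 * grid, grid, "g--", pooled + 1.96 * grid, grid, "g--")

# Best-fit (Egger-type) line through the scatter; roughly vertical if no small-study effect.
slope, intercept = np.polyfit(ses, effects, 1)
ax.plot(intercept + slope * grid, grid, "r--")

ax.set_xlabel("Effect size")
ax.set_ylabel("Standard error")
plt.show()
```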
Although a very asymmetric funnel plot is not proof positive of publication bias, findings like this demand careful investigation and cautious interpretation (see generally). It is challenging to assess exactly how big a deal it is, though: statistical adjustment for biases in the original data is extremely fraught.
But we are comfortably in ‘big deal’ territory: this finding credibly up-ends HLI’s entire analysis:
a) There are different ways of getting a ‘pooled estimate’ (~~average, or ~~typical, effect size): random effects (where you assume there is a distribution of true effects from which each study samples), vs. fixed effects (where there is a single value for the true effect size). Random effects are commonly preferred as—in reality—one expects the true effect to vary, but the results are much more vulnerable to any small study effects/publication bias (see generally). Comparing the random effects vs. the fixed effect estimate can give a quantitative steer on the possible scale of the problem, as well as guide subsequent analysis.[4] Here, the random effects estimate is 0.52, whilst the fixed one is less than half the size: 0.18.
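As a toy numerical illustration of that comparison (made-up effect sizes in which the small studies report the big effects, pooled with the standard DerSimonian-Laird estimator; this is not HLI’s data or their exact model):

```python
# Fixed-effect vs (DerSimonian-Laird) random-effects pooled estimates as a quick
# diagnostic - toy numbers in which the small studies report the big effects.
import numpy as np

def pooled_estimates(effects, ses):
    w = 1.0 / ses**2                                   # fixed-effect (inverse-variance) weights
    fixed = (w * effects).sum() / w.sum()

    # DerSimonian-Laird estimate of the between-study variance tau^2
    k = len(effects)
    q = (w * (effects - fixed) ** 2).sum()
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - (k - 1)) / c)

    w_re = 1.0 / (ses**2 + tau2)                       # random-effects weights
    return fixed, (w_re * effects).sum() / w_re.sum()

effects = np.array([1.2, 0.9, 0.8, 0.5, 0.3, 0.15, 0.1])
ses     = np.array([0.40, 0.35, 0.30, 0.20, 0.12, 0.08, 0.06])

fixed, random_ = pooled_estimates(effects, ses)
print(f"fixed-effect estimate:   {fixed:.2f}")
print(f"random-effects estimate: {random_:.2f}")
# The random-effects estimate is pulled noticeably towards the small studies,
# because it weights studies more equally - a gap like this is a warning sign.
```

A gap of the kind this sketch produces is the same kind of warning sign as the 0.52 vs. 0.18 gap in the figures above.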
b) There are other statistical methods you could use (more later). One of the easiest to understand (but one of the most conservative) goes back to the red dashed line in the funnel plot. You could extrapolate from it to the point where standard error = 0: so the predicted effect of an infinitely large (so infinitely precise) study—and so also where the ‘small study effect’ is zero. There are a few different variants of these sorts of ‘regression methods’, but the ones I tried predict effect sizes of such a hypothetical study between 0.17 and 0.05. So, quantitatively, 70-90% cuts of effect size are on the table here.
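A minimal sketch of that extrapolation, using an Egger-style weighted regression of effect size on standard error (same toy numbers as the previous sketch; the 0.17 to 0.05 range quoted above comes from the real data, not from this):

```python
# Sketch of the regression extrapolation in (b): weighted regression of effect
# size on standard error, reading the intercept as the predicted effect of an
# infinitely precise (SE = 0) study. Toy numbers, not a replication of the
# figures quoted in the text.
import numpy as np
import statsmodels.api as sm

effects = np.array([1.2, 0.9, 0.8, 0.5, 0.3, 0.15, 0.1])
ses     = np.array([0.40, 0.35, 0.30, 0.20, 0.12, 0.08, 0.06])

X = sm.add_constant(ses)                                # intercept + SE as predictor
fit = sm.WLS(effects, X, weights=1 / ses**2).fit()      # inverse-variance weighted

intercept, slope = fit.params
print(f"slope (small-study effect):             {slope:.2f}")
print(f"intercept (predicted effect at SE = 0): {intercept:.2f}")
# A big positive slope with an intercept far below the pooled estimate is the
# numerical counterpart of the sloping red line in the funnel plot.
```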
c) A reason why regression methods are conservative is that they will attribute as much variation in reported results as possible to differences in study size. Yet there could be alternative explanations for this besides publication bias: maybe smaller studies have different patient populations with (genuinely) greater efficacy, etc.
However, this statistical confounding can go the other way. HLI is not presenting simple meta-analytic results, but rather meta-regressions: where the differences in reported effect sizes are predicted by differences between and within the studies (e.g. follow-up time, how much therapy was provided, etc.). One of HLI’s findings from this work is that psychotherapy with Strongminds-like traits is ~70% more effective than psychotherapy in general (0.8 vs. 0.46). If factors like ‘group or individual therapy’ correlate with study size, the real story could simply be: “Strongminds-like traits are indicators for methodological weaknesses which greatly inflate apparent effect size, rather than for a more effective therapeutic modality.” In HLI’s analysis, the latter is presumed, giving about a ~10% uplift to the bottom line results.[5]
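To show how that confound can arise mechanically, here is a toy simulation (Python/statsmodels; an invented data-generating process, not HLI’s meta-regression) in which a ‘Strongminds-like’ trait does nothing, but happens to be commoner among small studies:

```python
# Toy illustration of the confound above (invented data-generating process, not
# HLI's meta-regression): a binary 'Strongminds-like' trait that does nothing,
# but is commoner among small studies, soaks up the small-study inflation and
# looks like genuinely higher efficacy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
k = 80
trait = rng.random(k) < 0.5                        # the moderator of interest
# Studies with the trait tend to be smaller (bigger SE) in this toy setup.
ses = np.where(trait, rng.uniform(0.2, 0.5, k), rng.uniform(0.05, 0.2, k))
true_effect = 0.3                                  # identical for every study
observed = true_effect + 1.0 * ses + rng.normal(0, ses)   # small-study inflation + noise

X1 = sm.add_constant(trait.astype(float))
print(sm.WLS(observed, X1, weights=1 / ses**2).fit().params)
# -> sizeable positive 'trait' coefficient, despite the trait doing nothing

X2 = sm.add_constant(np.column_stack([trait.astype(float), ses]))
print(sm.WLS(observed, X2, weights=1 / ses**2).fit().params)
# -> with SE also in the model, the 'trait' coefficient collapses towards zero
```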
1.2) A major issue, and a major mistake to miss
So this is a big issue, and one which would be revealed by standard approaches. HLI instead used a very non-standard approach (see), novel—as far as I can tell—to existing practice and, unfortunately, inappropriate (cf. point 5): it gives ~a 10-15% discount (although I’m not sure whether this was used in the Strongminds assessment; it is in the psychotherapy one).
I came across these problems ~6 months ago, prompted by a question from Ryan Briggs (someone with considerably greater expertise than my own) asking after the forest and funnel plots. I also started digging into the data in general at the same time, and noted the same key points explained laboriously above: it looks like there is marked heterogeneity and small study effects, they look very big, and they call the analysis results into question. Long story short, they said they would take a look at it urgently and then report back.
This response is fine, but as my comments then indicated, I did have (and I think reasonably had) HLI on pretty thin ice/‘epistemic probation’ after finding these things out. You have to make a lot of odd choices to end up this far from normal practice, and then make some surprising oversights on top, to end up missing problems which would appear to greatly undermine a positive finding for Strongminds.[6]
1.3) Maintaining this major mistake
HLI fell through this thin ice after its follow-up. Their approach was to try a bunch of statistical techniques to adjust for publication bias (excellent), do the same for their cash transfers meta-analysis (sure), then use the relative discounts between them to get an adjustment for psychotherapy vs. cash transfers (good, especially as adding corrections directly into the multi-level meta-regressions would be difficult). Further, they provided full code and data for replication (great). But the results made no sense whatsoever:
To orientate: each row is a different statistical technique applied to the two meta-analyses (more later). The x-axis is the ‘multiple’ of Strongminds vs. cash transfers, and the black line is at 9.4x, the previous ‘status quo’ value. Bars shorter than this mean adjusting for publication bias results in an overall discount for Strongminds, and vice versa.
The cash transfers funnel plot looks like this:
Compared to the psychotherapy one, it basically looks fine: the scatter looks roughly like a funnel, and there is no massive trend towards smaller studies = bigger effects. So how could so many statistical methods discount the ‘obvious small study effect’ meta-analysis less than the ‘no apparent small study effect’ meta-analysis, giving an increased multiple? As I said at the time, the results look like nonsense to the naked eye.
One problem was a coding error in two of the statistical methods (blue and pink bars). The bigger problem is that the way the comparisons are being done is highly misleading.
Take a step back from all the dividing going on and just look at the effect sizes. The basic, nothing-fancy random effects model applied to the psychotherapy data gives an effect size of 0.5. If you take the average across all the other model variants, you get ~0.3, a 40% drop. For the cash transfers meta-analysis, the basic model gives 0.1, and the average of all the other models is ~0.09, a 10% drop. So in fact you are seeing—as you should—bigger discounts when adjusting the psychotherapy analysis than the cash transfers one. This is lost by how the divisions are being done, which largely ‘play off’ multiple adjustments against one another (see, pt. 2). What the graph should look like is this:
Two things are notable: 1) the different models tend to point to a significant drop (~30-40% on average) in effect size; 2) there is a lot of variation in the discount—from ~0 to ~90% (a visual illustration of why this is known to be very hard to reliably ‘adjust’ for). I think these results oblige something like the following:
Re. the write-up: at least include the forest and funnel plots, alongside a description of why they are concerning. It should also include some ‘best guess’ correction from the above, noting this has a (very) wide range. Probably warrants ‘back to the drawing board’ given the reliability issues.
Re. the overall recommendation: at least a very heavy asterisk placed beside the recommendation. Both the adjustment and the uncertainty should also be highlighted in front-facing materials (e.g. ‘tentative suggestion’ vs. ‘recommendation’). Probably warrants withdrawal.
Re. general reflection: I think a reasonable evaluator—beyond directional effects—would be concerned about the ‘near’(?) miss property of having a major material issue not spotted before pushing a strong recommendation, ‘phase 1 complete/mission accomplished’, etc., especially when it is found by a third party many months after initial publication. They might also be concerned about the direction of travel. When published, the multiplier was 12x; with spillovers, it falls to 9.5x; with spillovers and the typo corrected, it falls to 7.5x; with a 30% best-guess correction for publication bias, we’re now at 5.3x. Maybe no single adjustment is recommendation-reversing, but in concert they are, and the track record suggests the next one is more likely to push further down rather than back up.
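Spelling out that arithmetic (the ~30% figure being the rough best-guess publication-bias discount from the previous section):

```python
# The 'direction of travel' arithmetic above, spelled out. The ~30% figure is the
# rough best-guess publication-bias discount from the previous section; the rest
# are the corrections already described.
steps = [
    ("as published", 12.0),
    ("with revised spillovers", 9.5),
    ("with the sign-coding error fixed", 7.5),
    ("with a ~30% publication-bias correction", 7.5 * (1 - 0.30)),
]
for label, multiple in steps:
    print(f"{label}: {multiple:.2f}x")
print(f"cumulative reduction from the published figure: {1 - steps[-1][1] / steps[0][1]:.0%}")
```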
What happened instead, 5 months ago, was that HLI would read some more and discuss among themselves whether my take on the comparators was the right one (it was, and this is not reasonably controversial, e.g. 1, 2, cf. fn4). Although ‘looking at publication bias’ is part of their intended ‘refining’ of the Strongminds assessment, nothing concrete has been done yet.
Maybe I should have chased, but the exchange on this (alongside the other thing) made me lose faith that HLI was capable of reasonably assessing and appropriately responding to criticisms of their work when material to their bottom line.
2) The cost-effectiveness guesstimate
[Readers will be relieved ~no tricky stats here]
As I was looking at the meta-analysis, I added my attempted ‘adjusted’ effect sizes into the CEA to see what impact they had on the results. To my surprise, not very much. Hence my previous examples: ‘Even if the meta-analysis has zero effect, the CEA still recommends Strongminds as several times GD’, and ‘You only get to equipoise with GD if you set all the effect sizes in the CEA to near-zero.’
I noted this alongside my discussion of the meta-analysis 6 months ago. Earlier remarks from HLI suggested they accepted these were diagnostic of something going wrong with how the CEA aggregates information (but that fixing it would happen, just not as a priority); more recent remarks suggest more ‘doubling down’.
In any case, they are indeed diagnostic of a lack of face validity. You obviously would, in fact, be highly sceptical of a claim that a particular psychotherapy intervention was extremely effective if the meta-analysis of psychotherapy in general showed zero (or harmful!) effect. The (pseudo-)Bayesian gloss on why is that the distribution of reported effect sizes gives additional information on the likely size of the ‘real’ effects underlying them (cf. the heterogeneity discussed above). A bunch of weird discrepancies among them, if hard to explain by intervention characteristics, increases the suspicion that weird distortions, rather than true effects, underlie the observations. So increasing discrepancy between indirect and direct evidence should reduce the effect size beyond its impact on any weighted average.
It does not help that the findings as-is are highly discrepant and generally weird. Among many examples:
Why do the Strongminds-like trials in the direct evidence have among the greatest effects of any of the studies included—and ~1.5x-2x the effect of a regression prediction for studies with Strongminds-like traits?
Why are the most Strongminds-y studies included in the meta-analysis marked outliers—even after ‘correction’ for small study effects?
What happened between the original Strongminds Phase 2 and the Strongminds RCT to up the intervention efficacy by 80%?
How come the only study which compares psychotherapy to a cash transfer comparator is also the only study which gives a negative effect size?
I don’t know what the magnitude of the directional ‘adjustment’ would be, as this relies on a specific understanding of the likelier explanations for the odd results (I’d guess a 10%+ downward correction even assuming I’m wrong about everything else—obviously much more if indeed ‘the vast bulk of effect variation can be explained by sample size +/- registration status of the study’). Alone, I think it mainly points to the quantitative engine needing an overhaul, and the analysis being known-unreliable until it gets one.
In any case, it seems urgent and important to understand and fix. The numbers are being widely used and relied upon (probably all of them need at least a big public asterisk pending the development of more reliable methods). It seems particularly unwise to be reassured by “Well sure, this is a downward correction, but the CEA still gives a good bottom line multiple”, as the bottom line number may not be reasonable, especially conditioned on different inputs. Even more so to persist in doing so 6 months after being made aware of the problem.
These are mentioned in 3a and 3b of my reply to Michael. Point 1 there (kind of related to 3a) would on its own warrant immediate retraction, but that is not a case (yet) of ‘maintained’ error.

So in terms of ‘epistemic probation’, I think this was available 6 months ago, but it closed after flagrant and ongoing ‘violations’.
One quote from the Cochrane handbook feels particularly apposite:
Do not start here!
It can be tempting to jump prematurely into a statistical analysis when undertaking a systematic review. The production of a diamond at the bottom of a plot is an exciting moment for many authors, but results of meta-analyses can be very misleading if suitable attention has not been given to formulating the review question; specifying eligibility criteria; identifying and selecting studies; collecting appropriate data; considering risk of bias; planning intervention comparisons; and deciding what data would be meaningful to analyse. Review authors should consult the chapters that precede this one before a meta-analysis is undertaken.
In the presence of heterogeneity, a random-effects meta-analysis weights the studies relatively more equally than a fixed-effect analysis (see Chapter 10, Section 10.10.4.1). It follows that in the presence of small-study effects, in which the intervention effect is systematically different in the smaller compared with the larger studies, the random-effects estimate of the intervention effect will shift towards the results of the smaller studies. We recommend that when review authors are concerned about the influence of small-study effects on the results of a meta-analysis in which there is evidence of between-study heterogeneity (I2 > 0), they compare the fixed-effect and random-effects estimates of the intervention effect. If the estimates are similar, then any small-study effects have little effect on the intervention effect estimate. If the random-effects estimate has shifted towards the results of the smaller studies, review authors should consider whether it is reasonable to conclude that the intervention was genuinely different in the smaller studies, or if results of smaller studies were disseminated selectively. Formal investigations of heterogeneity may reveal other explanations for funnel plot asymmetry, in which case presentation of results should focus on these. If the larger studies tend to be those conducted with more methodological rigour, or conducted in circumstances more typical of the use of the intervention in practice, then review authors should consider reporting the results of meta-analyses restricted to the larger, more rigorous studies.
This is not the only problem in HLI’s meta-regression analysis. Analyses here should be pre-specified (especially if intended as the primary result rather than some secondary exploratory analysis), to limit risks of inadvertently cherry-picking a model which gives a preferred result. Cochrane (see):
Authors should, whenever possible, pre-specify characteristics in the protocol that later will be subject to subgroup analyses or meta-regression. The plan specified in the protocol should then be followed (data permitting), without undue emphasis on any particular findings (see MECIR Box 10.11.b). Pre-specifying characteristics reduces the likelihood of spurious findings, first by limiting the number of subgroups investigated, and second by preventing knowledge of the studies’ results influencing which subgroups are analysed. True pre-specification is difficult in systematic reviews, because the results of some of the relevant studies are often known when the protocol is drafted. If a characteristic was overlooked in the protocol, but is clearly of major importance and justified by external evidence, then authors should not be reluctant to explore it. However, such post-hoc analyses should be identified as such.
HLI does not mention any pre-specification, and there is good circumstantial evidence that a lot of this work was ad hoc re. ‘Strongminds-like traits’. HLI’s earlier analysis of psychotherapy in general, using most (?all) of the same studies as their Strongminds CEA (4.2, here), used different variables in a meta-regression on intervention properties (table 2). It seems likely the change of model happened after study data was extracted (the lack of significant prediction, and the inclusion of a large number of variables for a relatively small number of studies, would be further concerns). This modification seems to favour the intervention: I think the earlier model, if applied to Strongminds, gives an effect size of ~0.6.

Briggs’s comments have a similar theme, suggesting that my attitude does not arise solely from particular cynicism on my part.
I really appreciate you putting in the work and being so diligent, Gregory.

I did very little here, though I appreciate your kind words.

Without you seriously digging in, we’d have a very distorted picture of this important area.
Hello Jason. FWIW, I’ve drafted a reply to your other comment and I’m getting it checked internally before I post it.
On this comment about you not liking that we hadn’t updated our website to include the new numbers: we all agree with you! It’s a reasonable complaint. The explanation is fairly boring: we have been working on a new charity recommendations page for the website, at which point we were going to update the numbers and add a note, so we could do it all in one go. (We still plan to do a bigger reanalysis later this year.) However, that has gone slower than expected and hasn’t happened yet. Because of your comment, we’ll add a ‘hot fix’ update in the next week, and hopefully have the new charity recommendations page live in a couple of weeks.
I think we’d have moved faster on this if it had substantially changed the results. On our numbers, StrongMinds is still the best life-improving intervention (it’s several times better than cash, and we’re not confident deworming has a long-term effect). You’re right that it would slightly change the crossover point for choosing between life-saving and life-improving interventions, but we’ve got the impression that donors weren’t making much use of our analysis anyway; even if they were, it’s a pretty small difference, and well within the margin of uncertainty.
Thanks, I appreciate that.

(Looking back at the comment, I see the example actually ended up taking more space than the lead point! Although I definitely agree that the hot fix should happen, I hope the example didn’t overshadow the comment’s main intended point—that people who have concerns about HLI’s response to recent criticisms should raise those concerns with a degree of specificity, and explain why they have them, to allow HLI an opportunity to address them.)

Oh yes. I agree with you that it would be good if people could make helpful suggestions as to what we could do, rather than just criticise.
Meta-note as a casual lurker in this thread: this comment being downvoted to oblivion while Jason’s comment is not is pretty bizarre to me. The only explanation I can think of is that people who have provided criticism think Michael is saying they shouldn’t criticise? It is blatantly obvious to me that this is not what he is saying, and that he is simply agreeing with Jason that specific, actionable criticism is better.
Fun meta-meta note I just realized after writing the above: this does mean I am potentially criticising some critics who are critical of how Michael is criticising their criticism.
Okkkk, that’s enough internet for me. Peace and love, y’all.
Michael’s comment has 14 non-author up/downvotes and 10 non-author agree/disagree votes; mine has one of each. This is possibly due to the potential to ascribe to a comment by HLI’s director several meanings that are not plausible for a comment by a disinterested observer—e.g., “Org expresses openness to changes to address concerns,” “Org is critical of critics,” etc.
I’m not endorsing any potential meaning, although I have an upvote on his comment.
The more disappointing meta-note to me is that helpful, concrete suggestions have been relatively sparse on this post as a whole. I wrote some suggestions for future epistemic practices, and someone else called for withdrawing the SM recommendation and report. But overall, there seemed to be much more energy invested in litigating than in figuring out a path forward.
...helpful, concrete suggestions have been relatively sparse on this post as a whole.
I don’t really share this sense (I think that even most of Gregory Lewis’ posts in this thread have had concretely useful advice for HLI, e.g. this one), but let’s suppose for the moment that it’s true. Should we care?
In the last round of posts, four to six months ago, HLI got plenty of concrete and helpful suggestions. A lot of them were unpleasant, stuff like “you should withdraw your cost-effectiveness analysis” and “here are ~10 easy-to-catch problems with the stats you published”, but highly specific and actionable. What came of that? What improvements has HLI made? As far as I can tell, almost nothing has changed, and they’re still fundraising off of the same flawed analyses. There wasn’t even any movement on this unambiguous blunder until you called it out. It seems to me that giving helpful, concrete suggestions to HLI has been tried, and shown to be low impact.
One thing people can do in a thread like this one is talk to HLI, to praise them, ask them questions, or try to get them to do things differently. But another thing they can do is talk to each other, to try and figure out whether they should donate to HLI or not. For that, criticism of HLI is valuable, even if it’s not directed to HLI. This, too, counts as “figuring out a path forward”.