I think the principal challenge for an independent investigation is getting folks with useful information to disclose it, given these people will usually (to some kind and degree) also have "exposure" to the FTX scandal themselves.
If I was such a person I would expect working with the investigation to be unpleasant, perhaps embarrassing, plausibly acrimonious, and potentially disastrous for my reputation. What's in it for me?
Here are the OWID charts for life satisfaction vs. GDP/capita. First linear (per the dovecote model):
Now with a log transform to GDP/capita (per the MHR):
I think it is visually clear the empirical relationship is better modelled as log-linear rather than linear. Compared to this, I don't think the regression diagnostics suggesting non-inferiority of linear GDP (in the context of a model selected from thousands of variables, at least some of which could log-linearly proxy for GDP, cf. Dan_Key's comment) count for much.
Besides the impact of GDP (2.5% versus 40%), I'd expect which other variables end up being selected also to be sensitive to this analysis choice. Unfortunately, as it is the wrong one, I'd expect (quasi-)omitted variable bias to distort both which variables are included, and their relative contributions, in the dovecote model.
I have previously let HLI have the last word, but this is too egregious.
Study quality: Publication bias (a property of the literature as a whole) and risk of bias (particular to each individual study which comprises it) are two different things.[1] Accounting for the former does not account for the latter. This is why the Cochrane handbook, the three meta-analyses HLI mentions here, and HLI's own protocol all distinguish the two.
Neither Cuijpers et al. 2023 nor Tong et al. 2023 further adjust their low risk of bias subgroup for publication bias.[2] I tabulate the relevant figures from both studies below:
So HLI indeed gets similar initial results and publication bias adjustments to the two other meta-analyses they find. Yet - although these are not like-for-like - these other two meta-analyses find similarly substantial effect reductions when accounting for study quality as when assessing publication bias of the literature as a whole.
There is ample cause for concern here:[3]
Although neither of these studies "adjusts for both", one mentioned later - Cuijpers et al. 2020 - does. It finds an additional discount to effect size when doing so.[4] So it suggests that indeed "accounting for" publication bias does not adequately account for risk of bias en passant.
Tong et al. 2023 - the meta-analysis expressly on PT in LMICs rather than PT generally - finds a higher prevalence of indicators of lower study quality in LMICs, and notes this as a competing explanation for the outsized effects.[5]
As previously mentioned, in the previous meta-analysis unregistered trials had a 3x greater effect size than registered ones. None of the trials on StrongMinds published so far were registered. Baird et al., which is registered, is anticipated to report disappointing results.
Evidentiary standards: Indeed, the report drew upon a large number of studies. Yet even a synthesis of 72 million (or whatever) studies can be misleading if issues of publication bias, risk of bias in individual studies (and so on) are not appropriately addressed. That an area has 72 (or whatever) studies upon it does not mean it is well-studied, nor would this number (nor any number) be sufficient, by itself, to satisfy any evidentiary standard.
Outlier exclusion: The report's approach to outlier exclusion is dissimilar to both Cuijpers et al. 2020 and Tong et al. 2023, and further is dissimilar with respect to features I highlighted as major causes for concern re. HLI's approach in my original comment.[6] Specifically:
Both of these studies present the analysis with the full data first in their results. Contrast HLI's report, where only the results with outliers excluded are presented in the main results, and the analysis without exclusion is found only in the appendix.[7]
Both these studies also report the results with the full data as their main findings (e.g. in their respective abstracts). Cuijpers et al. mentions their outlier-excluded results primarily in passing ("outliers" appears once in the main text); Tong et al. relegates a lot of theirs to the appendix. HLI's report does the opposite. (cf. fn 7 above)
Only Tong et al. does further sensitivity analysis on the "outliers excluded" subgroup. As Jason describes, this is done alongside the analysis where all data is included, and the qualitative and quantitative differences which result from this analysis choice are prominently highlighted to the reader and extensively discussed. In HLI's report, by contrast, the factor of 3 reduction to ultimate effect size when outliers are not excluded is only alluded to qualitatively in a footnote (fn 33)[8] of the main report's section (3.2) arguing why outliers should be excluded, is not included in the report's sensitivity analysis, and is only found in the appendix.[9]
Both studies adjust for publication bias only on all data, not on data with outliers excluded, and these are the publication bias findings they present. Contrast HLI's report.
The Cuijpers et al. 2023 meta-analysis previously mentioned also differs in its approach to outlier exclusion from HLI's report in the ways highlighted above. The Cochrane handbook also supports my recommendations on what approach should be taken, which is what the meta-analyses HLI cites approvingly as examples of "sensible practice" actually do, but what HLI's own work does not.
The report's (non-)presentation of the stark quantitative sensitivity of its analysis - material to the report's bottom-line recommendations - to whether outliers are excluded is clearly inappropriate. It is indefensible if, as I have suggested may be the case, the analysis with outliers included was indeed the analysis first contemplated and conducted.[10] It is even worse if the publication bias corrections on the full data were what in fact prompted HLI to start making alternative analysis choices which happened to substantially increase the bottom-line figures.
Bayesian analysis: Bayesian methods notoriously do not avoid subjective inputs - most importantly here, what information we attend to when constructing an "informed prior" (or, if one prefers, how to weigh the results with a particular prior stipulated).
In any case, they provide no protection from misunderstanding the calculation being performed, and so misinterpreting the results. The Bayesian method in the report is actually calculating the (adjusted) average effect size of psychotherapy interventions in general, not the expected effect of a given psychotherapy intervention. Although a trial on StrongMinds which shows it is relatively ineffectual should not update our view much on the efficacy of psychotherapy interventions (/similar to StrongMinds) as a whole, it should update us dramatically on the efficacy of StrongMinds itself.
Although as a methodological error this is a subtle one (at least, subtle enough for me not to initially pick up on it), the results it gave are nonsense to the naked eye (e.g. SM would still be held as a GiveDirectly-beating intervention even if there were multiple high quality RCTs on Strongminds giving flat or negative results). HLI should have seen this themselves, should have stopped to think after I highlighted these facially invalid outputs of their method in early review, and definitely should not be doubling down on these conclusions even now.
Making recommendations: Although there are other problems, those I have repeated here make the recommendations of the report unsafe. This is why I recommended against publication. Specifically:
Although I don't think the Bayesian method the report uses would be appropriate, if it was calculated properly on its own terms (e.g. prediction intervals not confidence intervals to generate the prior, etc.), and leaving everything else the same, the SM bottom line would drop (I'm pretty sure) by a factor of a bit more than 2.
The results are already highly sensitive to whether outliers are excluded in the analysis or not: SM goes from 3.7x → ~1.1x GD on the back of my envelope, again leaving all else equal.
(1) and (2) combined should net out to SM < GD; (1) or (2) combined with some of the other sensitivity analyses (e.g. spillovers) will also likely net out to SM < GD (a rough back-of-envelope sketch follows below). Even if one still believes the bulk of (appropriate) analysis paths support a recommendation, this sensitivity should be made transparent.
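A rough back-of-envelope of how (1) and (2) might combine, assuming (and it is only an assumption) the two corrections act roughly multiplicatively and independently:

```python
# Back-of-envelope only - not HLI's model. Assumes the two corrections combine
# multiplicatively and independently, which is an approximation; the real
# analysis would need to be re-run to get an exact figure.
headline = 3.7                 # report's headline: SM ~3.7x GiveDirectly
outlier_factor = 1.1 / 3.7     # (2): including all data takes ~3.7x down to ~1.1x
prior_factor = 1 / 2.2         # (1): prediction-interval prior, "a bit more than 2"

print(f"(2) alone:   {headline * outlier_factor:.2f}x GD")
print(f"(1) alone:   {headline * prior_factor:.2f}x GD")
print(f"(1) and (2): {headline * outlier_factor * prior_factor:.2f}x GD  # < 1x, i.e. SM < GD")
```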
- ^
E.g. Even if all studies in the field are conducted impeccably, if journals only accept positive results the literature may still show publication bias. Contrariwise, even if all findings get published, failures in allocation/blinding/etc. could lead to systemic inflation of effect sizes across the literature. In reality - and here - you often have both problems, and they only partially overlap.
- ^
Jason correctly interprets Tong et al. 2023: the number of studies included in their publication bias corrections (117 [+36 w/ trim and fill]) equals the number of all studies, not the low risk of bias subgroup (36 - see table 3). I do have access to Cuijpers et al. 2023, which has a very similar results table, with parallel findings (i.e. they do their publication bias corrections on the whole set of studies, not on a low risk of bias subgroup).
- ^
Me, previously:
HLI's report does not assess the quality of its included studies, although it plans to do so. I appreciate GRADEing 90 studies or whatever is tedious and time consuming, but skipping this step to crack on with the quantitative synthesis is very unwise: any such synthesis could be hugely distorted by low quality studies.
- ^
From their discussion (my emphasis):
Risk of bias is another important problem in research on psychotherapies for depression. In 70% of the trials (92/309) there was at least some risk of bias. And the studies with low risk of bias, clearly indicated smaller effect sizes than the ones that had (at least some) risk of bias. Only four of the 15 specific types of therapy had 5 or more trials without risk of bias. And the effects found in these studies were more modest than what was found for all studies (including the ones with risk of bias). When the studies with low risk of bias were adjusted for publication bias, only two types of therapy remained significant (the "Coping with Depression" course, and self-examination therapy).
- ^
E.g. from the abstract (my emphasis):
The larger effect sizes found in non-Western trials were related to the presence of wait-list controls, high risk of bias, cognitive-behavioral therapy, and clinician-diagnosed depression (p < 0.05). The larger treatment effects observed in non-Western trials may result from the high heterogeneous study design and relatively low validity. Further research on long-term effects, adolescent groups, and individual-level data are still needed.
- ^
Apparently, all that HLI really meant with "Excluding outliers is thought sensible practice here; two related meta-analyses, Cuijpers et al., 2020c; Tong et al., 2023, used a similar approach" [my emphasis] was merely "[C]onditional on removing outliers, they identify a similar or greater range of effect sizes as outliers as we do." (see).
Yeah, right.
I also had the same impression as Jason that HLI's reply repeatedly strawmans me. The passive-aggressive sniping sprinkled throughout, and the subsequent backpedalling (in fairness, I suspect by people who were not at the keyboard of the corporate account), is less than impressive too. But it's nearly Christmas, so beyond this footnote I'll let all this slide.
- ^
Me again (my [re-?]emphasis)
Received opinion is typically that outlier exclusion should be avoided without a clear rationale why the "outliers" arise from a clearly discrepant generating process. If it is to be done, the results of the full data should still be presented as the primary analysis.
- ^
Said footnote:
If we didn't first remove these outliers, the total effect for the recipient of psychotherapy would be much larger (see Section 4.1) but some publication bias adjustment techniques would over-correct the results and suggest the completely implausible result that psychotherapy has negative effects (leading to a smaller adjusted total effect). Once outliers are removed, these methods perform more appropriately. These methods are not magic detectors of publication bias. Instead, they make inferences based on patterns in the data, and we do not want them to make inferences on patterns that are unduly influenced by outliers (e.g., conclude that there is no effect - or, more implausibly, negative effects - of psychotherapy because of the presence of unreasonable effects sizes of up to 10 gs are present and creating large asymmetric patterns). Therefore, we think that removing outliers is appropriate. See Section 5 and Appendix B for more detail.
The sentence in the main text this is a footnote to says:
Removing outliers this way reduced the effect of psychotherapy and improves the sensibility of moderator and publication bias analyses.
- ^
Me again:
[W]ithout excluding data, SM drops from ~3.6x GD to ~1.1x GD. Yet it doesn't get a look in for the sensitivity analysis, where HLI's "less favourable" outlier method involves taking an average of the other methods (discounting by ~10%), but not doing no outlier exclusion at all (discounting by ~70%).
- ^
My remark about "Even if you didn't pre-specify, presenting your first cut as the primary analysis helps for nothing up my sleeve reasons", which Dwyer mentions elsewhere, was a reference to "nothing up my sleeve numbers" in cryptography. In the same way that picking the initial digits of pi or e for arbitrary constants reassures readers the author didn't pick numbers with some ulterior purpose they are not revealing, reporting what one's first analysis showed means readers can compare it to where you ended up after making all the post-hoc judgement calls in the garden of forking paths. "Our first-intention analysis would give x, but we ended up convincing ourselves the most appropriate analysis gives a bottom line of 3x" would rightly arouse a lot of scepticism.
I've already mentioned I suspect this is indeed what has happened here: HLI's first cut included all the data, but HLI argued itself into making the choice to exclude, which gave a 3x higher "bottom line". Beyond "You didn't say you'd exclude outliers in your protocol" and "basically all of your discussion in the appendix re. outlier exclusion concerns the results of publication bias corrections on the bottom line figures", I kinda feel HLI not denying it is beginning to invite an adverse inference from silence. If I'm right about this, HLI should come clean.
So the problem I had in mind was in the parenthetical in my paragraph:
To its credit, the write-up does highlight this, but does not seem to appreciate the implications are crazy: any PT intervention, so long as it is cheap enough, should be thought better than GD, even if studies upon it show very low effect size (which would usually be reported as a negative result, as almost any study in this field would be underpowered to detect effects as low as are being stipulated)
To elaborate: the actual data on StrongMinds was an n~250 study by Bolton et al. 2003, then followed up by Bass et al. 2006. HLI models this in table 19:
So an initial effect of g = 1.85, and a total impact of 3.48 WELLBYs. To simulate what the SM data will show once the (anticipated to be disappointing) forthcoming Baird et al. RCT is included, they discount this[1] by a factor of 20.
Thus the simulated effect size of Bolton and Bass is now ~0.1. In this simulated case, the Bolton and Bass studies would be reporting negative results, as they would not be powered to detect an effect size as small as g = 0.1. To benchmark, the forthcoming Baird et al. study is 6x larger than these, and its power calculations give minimal detectable effects of g = 0.1 or greater.
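For intuition, a minimal power sketch (illustrative only: it ignores clustering, covariates, and the actual designs of these trials, and just uses statsmodels' generic two-sample t-test power solver):

```python
# Ballpark minimal detectable effect (MDE) at 80% power, two-sided alpha = 0.05.
# Not the trials' actual power analyses - just to show an n~250 trial cannot
# detect g ~ 0.1, whereas a ~6x larger trial gets much closer.
from statsmodels.stats.power import TTestIndPower

power_solver = TTestIndPower()
for label, n_per_arm in [("n ~ 250 total (Bolton/Bass-sized)", 125),
                         ("~6x larger (Baird et al.-sized)", 750)]:
    mde = power_solver.solve_power(effect_size=None, nobs1=n_per_arm,
                                   alpha=0.05, power=0.8, ratio=1.0)
    print(f"{label}: MDE ~ g = {mde:.2f}")
```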
Yet, apparently, in such a simulated case we should conclude that Strongminds is fractionally better than GD purely on the basis of two trials reporting negative findings, because numerically the treatment groups did slightly (but not significantly) better than the control ones.
Even if in general we are happy with "hey, the effect is small, but it is cheap, so it's a highly cost-effective intervention", we should not accept this at the point when "small" becomes "too small to be statistically significant". An analysis method which turns negative findings into "fractionally better in expectation than cash transfers" is one I take as diagnostic that the analysis is going wrong.
- ^
I think "this" must be the initial effect size/intercept, as 3.48 * 0.05 ~ 1.7 not 3.8. I find this counter-intuitive, as I think the drop in total effect should be super- not sub-linear with the intercept, but ignore that.
- ^
(@Burner1989 @David Rhys Bernard @Karthik Tadepalli)
I think the fundamental point (i.e. "You cannot use the distribution for the expected value of an average therapy treatment as the prior distribution for a SPECIFIC therapy treatment, as there will be a large amount of variation between possible therapy treatments that is missed when doing this.") is on the right lines, although the subsequent discussion of fixed/random effect models might confuse the issue. (Cf. my reply to Jason.)
The typical output of a meta-analysis is an (~) average effect size estimate (the diamond at the bottom of the forest plot, etc.). The confidence interval given for that is (very roughly)[1] the interval within which we predict the true average effect likely lies. So for the basic model given in Section 4 of the report, the average effect size is 0.64, 95% CI (0.54 to 0.74). So (again, roughly) our best guess of the "true" average effect size of psychotherapy in LMICs from our data is 0.64, and we're 95% sure(*) this average is somewhere between (0.54, 0.74).
Clearly, it is not the case that if we draw another study from the same population, we should be 95% confident(*) the effect size of this new data point will lie between 0.54 and 0.74. This would not be true even in the unicorn case where there's no between-study heterogeneity (e.g. all the studies are measuring the same effect modulo sampling variance), and even less so when this is marked, as here. To answer that question, what you want is a prediction interval.[2] This interval is always wider, and almost always significantly so, than the confidence interval for the average effect: in the same analysis with the 0.54-0.74 confidence interval, the prediction interval was -0.27 to 1.55.
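To make the distinction concrete, here is a small sketch using a DerSimonian-Laird random-effects model on simulated data (not HLI's dataset; all numbers are illustrative):

```python
# Confidence interval for the average effect vs. prediction interval for a new
# study, via a DerSimonian-Laird random-effects model on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k = 72                                   # number of studies
true_mean, tau = 0.6, 0.45               # average effect, between-study SD (assumed)
se = rng.uniform(0.1, 0.4, size=k)       # per-study standard errors
y = rng.normal(true_mean, tau, size=k) + rng.normal(0, se)

# DerSimonian-Laird estimate of between-study variance (tau^2)
w_fe = 1 / se**2
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)
Q = np.sum(w_fe * (y - mu_fe)**2)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled average and its standard error
w_re = 1 / (se**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_mu = np.sqrt(1 / np.sum(w_re))

ci = mu_re + stats.norm.ppf([0.025, 0.975]) * se_mu
# Higgins-Thompson-Spiegelhalter prediction interval (t with k-2 df)
pi = mu_re + stats.t.ppf([0.025, 0.975], df=k - 2) * np.sqrt(tau2 + se_mu**2)

print(f"Average effect {mu_re:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"95% prediction interval ({pi[0]:.2f}, {pi[1]:.2f})  # much wider")
```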
Although the full model HLI uses in constructing informed priors is different from that presented in S4 (e.g. it includes a bunch of moderators), they appear to be constructed with Monte Carlo on the confidence intervals for the average, not the prediction interval for the data. So I believe the informed prior is actually one of the (adjusted) "average effect of psychotherapy interventions as a whole", not a prior for (e.g.) "the effect size reported in a given PT study". The latter would need to use the prediction intervals, and have a much wider distribution.[3]
I think this ably explains exactly why the Bayesian method for (e.g.) StrongMinds gives very bizarre results when deployed as the report does, but they make much more sense if re-interpreted as (in essence) computing the expected effect size of "a future StrongMinds-like intervention", not the effect size we should believe StrongMinds actually has once in receipt of trial data upon it specifically. E.g.:
The histogram of effect sizes shows some comparisons had an effect size < 0, but the "informed prior" suggests P(ES < 0) is extremely low. As a prior for the effect size of the next study, it is much too confident, given the data, that a trial will report positive effects (you have >1/72 studies being negative, so surely it cannot be <1%, etc.). As a prior for the average effect size, this confidence is warranted: given the large number of studies in our sample, most of which report positive effects, we would be very surprised to discover the true average effect size is negative.
The prior doesn't update very much on the data provided. E.g. when we stipulate the trials upon StrongMinds report a near-zero effect of 0.05 WELLBYs, our estimate of 1.49 WELLBYs goes to 1.26: so we should (apparently) believe in such a circumstance the efficacy of SM is ~25 times greater than the trial data upon it indicates. This is, obviously, absurd. However, such a small update is appropriate if it were to ~the average of PT interventions as a whole: observing that a new PT intervention has much-below-average results should cause our average to shift a little towards the new findings, but not much.
In essence, the update we are interested in is not "How effective should we expect future interventions like StrongMinds to be, given the data on StrongMinds' efficacy", but simply "How effective should we expect StrongMinds to be, given the data on how effective StrongMinds is". Given the massive heterogeneity and wide prediction interval, the (correct) informed prior is pretty uninformative, as it isn't that surprised by anything in a very wide range of values, and so on finding trial data on SM with a given estimate in this range, our estimate should update to match it pretty closely.[4]
(This also should mean, contrary to what the report suggests, the SM estimate is not that "robust" to adverse data. Eyeballing it, I'd guess the posterior should go down by a factor of 2-3 conditional on the stipulated data versus currently reported results. A toy version of this calculation follows below.)
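A minimal normal-normal sketch of why the width of the prior does nearly all the work here. The prior and data means are the report's figures; the standard deviations are my own assumptions, and HLI's actual prior is a simulated (non-normal) distribution, so this is an illustration rather than a reconstruction:

```python
# Conjugate normal-normal update (known variances): the posterior mean is a
# precision-weighted average of the prior and the data. Illustrative numbers only.
def posterior_mean(prior_mu, prior_sd, data_mu, data_sd):
    w_prior, w_data = 1 / prior_sd**2, 1 / data_sd**2
    return (w_prior * prior_mu + w_data * data_mu) / (w_prior + w_data)

data_mu, data_sd = 0.05, 0.3   # stipulated near-zero SM result; SE assumed

# Prior as narrow as the CI of the *average* effect (SD assumed):
print(posterior_mean(1.49, 0.15, data_mu, data_sd))  # ~1.2: stays near the prior

# Prior as wide as the *prediction interval* (SD assumed):
print(posterior_mean(1.49, 0.90, data_mu, data_sd))  # ~0.2: moves most of the way to the data
```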
- ^
I'm aware confidence intervals are not credible intervals, and that "the 95% CI tells you where the true value is with 95% likelihood" strictly misinterprets what a confidence interval is, etc. (see) But perhaps "close enough", so I'm going to pretend these are credible intervals, and asterisk each time I assume the strictly incorrect interpretation.
- ^
Cf. Cochrane:
The summary estimate and confidence interval from a random-effects meta-analysis refer to the centre of the distribution of intervention effects, but do not describe the width of the distribution. Often the summary estimate and its confidence interval are quoted in isolation and portrayed as a sufficient summary of the meta-analysis. This is inappropriate. The confidence interval from a random-effects meta-analysis describes uncertainty in the location of the mean of systematically different effects in the different studies. It does not describe the degree of heterogeneity among studies, as may be commonly believed. For example, when there are many studies in a meta-analysis, we may obtain a very tight confidence interval around the random-effects estimate of the mean effect even when there is a large amount of heterogeneity. A solution to this problem is to consider a prediction interval (see Section 10.10.4.3).
- ^
Although I think it has the same mean, so it will give the right "best guess" initial estimates.
- ^
Obviously, modulo all the other issues I suggest with both the meta-analysis as a whole, that we in fact would incorporate other sources of information into our actual prior, etc. etc.
What prior to formally pick is tricky - I agree the factors you note would be informative, but how to weigh them (vs. other sources of informative evidence) could be a matter of taste. However, sources of evidence like this could be handy to use as "benchmarks" to see whether the prior (/results of the meta-analysis) are consilient with them, and if not, explore why.
But I think I can now offer a clearer explanation of what is going wrong. The hints you saw point in this direction, although not quite as you describe.
One thing worth being clear on is HLI is not updating on the actual SM specific evidence. As they model it, the estimated effect on this evidence is an initial effect of g = 1.8, and a total effect of ~3.48 WELLBYs, so this would lie on the right tail, not the left, of the informed prior.[1] They discount the effect by a factor of 20 to generate the data they feed into their Bayesian method. Stipulating data which would be (according to their prior) very surprisingly bad would be in itself a strength, not a concern, of the conservative analysis they are attempting.
Next, we need to distinguish an average effect size from a prediction interval. HLI does report both (Section 4) for a more basic model of PT in LMICs. The (average, random effects) effect size is 0.64 (95% CI 0.54 to 0.74), whilst the prediction interval is -0.27 to 1.55. The former gives you the best guess of the average effect (with a confidence interval); the latter tells you, if I do another study like those already included, the range I can expect its effect size to be within. By loose analogy: if I sample 100 people and their average height is roughly 5'7" (95% CI 5'6" to 5'8"), the 95% range of the individual heights will range much more widely (say 5'0" to 6'2").
Unsurprisingly (especially given the marked heterogeneity), the prediction interval is much wider than the confidence interval around the average effect size. Crucially, if our "next study" reports an effect size of (say) 0.1, our interpretation typically should not be: "This study can't be right, the real effect of the intervention it studies must be much closer to 0.6". Rather, as findings are heterogeneous, it is much more likely to be a study which (genuinely) reports a below-average effect.[2] Back to the loose analogy, we would (typically) assume we got it right if we measured some more people at (e.g.) 6'0" and 5'4", even though these are significantly above or below the 95% confidence interval of our average, and only start to doubt measurements much outside our prediction interval (e.g. 3'10", 7'7").
Now the problem with the informed prior becomes clear: it is (essentially) being constructed with confidence intervals of the average, not prediction intervals for the data, from its underlying models. As such, it is a prior not of "What is the expected impact of a given PT intervention", but rather "What is the expected average impact of PT interventions as a whole".[3]
With this understanding, the previously bizarre behaviour is made sensible. For the informed prior should assign very little credence to the average impact of PT overall being ~0.4 per the stipulated StrongMinds data, even though it should not be that surprised that a particular intervention (e.g. StrongMinds!) has an impact much below average, as many other PT interventions studied also do (cf. although I shouldn't be surprised if I measure someone as 5'2", I should be very surprised if the true average height is actually 5'2", given my large sample averages 5'7"). Similarly, if we are given a much smaller additional sample reporting a much different effect size, the updated average effect should remain pretty close to the prior (e.g. if a handful of new people have heights < 5'4", my overall average goes down a little, but not by much).
Needless to say, the results of such an analysis, if indeed for the "average effect size of psychotherapy as a whole", are completely inappropriate for the "expected effect size of a given psychotherapy intervention", which is the use they are put to in the report.[4] If the measured effect size of StrongMinds was indeed ~0.4, the fact psychotherapy interventions ~average substantially greater effects of ~1.4 gives very little reason to conclude the effect of StrongMinds is in fact much higher (e.g. ~1.3). In the same way, if I measure your height as 5'0", the fact the average height of other people I've measured is 5'7" does not mean I should conclude you're probably about 5'6".[5]
- ^
Minor: it does lie pretty far along the right tail of the prior (<top 1st percentile?), so maybe one could be a little concerned. Not much, though: given HLI was searching for particularly effective PT interventions in the literature, it doesn't seem that surprising that this effort could indeed find one at the far-ish right tail of apparent efficacy.
- ^
One problem for many types of the examined psychotherapy is that the level of heterogeneity was high, and many of the prediction intervals were broad and included zero. This means that it is difficult to predict the effect size of the next study that is done with this therapy, and that study may just as well find negative effects. The resulting effect sizes differ so much for one type of therapy, that it cannot be reliably predicted what the true effect size is.
- ^
Cf. your original point about a low result looking weird given the prior. Perhaps the easiest way to see this is to consider a case where the intervention is harmful. The informed prior says P(ES < 0) is very close to zero. Yet >1/72 studies in the sample did have an effect size < 0. So obviously a prior of an intervention should not be that confident in predicting it will not have a -ve effect. But a prior of the average effect of PT interventions should be that confident this average is not in fact negative, given the great majority of sampled studies show substantially +ve effects.
- ^
In a sense, the posterior is not computing the expected effect of StrongMinds, but rather the expected effect of a future intervention like StrongMinds. Somewhat ironically, this (again, simulated) result would be best interpreted as an anti-recommendation: Strongminds performs much below the average we would expect of interventions similar to it.
- ^
It is slightly different for measured height as we usually have very little pure measurement error (versus studies with more significant sampling variance). So you'd update a little less towards the reported study effects vs. the expected value than you would for height measurements vs. the average. But the key points still stand.
- ^
HLI kindly provided me with an earlier draft of this work to review a couple of weeks ago. Although things have gotten better, I noted what I saw as major problems with the draft as-is, and recommended HLI take its time to fix them - even though this would take a while, and likely miss the window of Giving Tuesday.
Unfortunately, HLI went ahead anyway with the problems I identified basically unaddressed. Also unfortunately (notwithstanding laudable improvements elsewhere) these problems are sufficiently major I think potential donors are ill-advised to follow the recommendations and analysis in this report.
In essence:
Issues of study quality loom large over this literature, with a high risk of materially undercutting the results (they did last time). The report's interim attempts to manage these problems are inadequate.
Publication bias corrections are relatively mild, but only when all effects g > 2 are excluded from the analysis - they are much starker (albeit weird) if all data is included. Due to this, the choice to exclude "outliers" roughly trebles the bottom-line efficacy of PT. This analysis choice is dubious on its own merits and was not pre-specified in the protocol, yet the analysis without it is only found in the appendix rather than the sensitivity analysis in the main report.
The Bayesian analysis completely stacks the deck in favour of psychotherapy interventions (i.e. an "informed prior" which asserts one should be >99% confident StrongMinds is more effective than GiveDirectly before any data on StrongMinds is contemplated), such that psychotherapy/StrongMinds/etc. getting recommended is essentially foreordained.
Study quality
It perhaps comes as little surprise that different studies on psychotherapy in LMICs report very different results:[1]
The x-axis is a standardized measure of effect size for psychotherapy in terms of wellbeing.[2] Most - but not all - show a positive effect (g > 0), but the range is vast. HLI excludes effect sizes over 2 as outliers (much more on this later), but 2 is already a large effect: to benchmark, it is roughly the height difference between male and female populations.
Something like a "(weighted) average effect size" across this set would look promising (~0.6) - to also benchmark, the effect size of cash transfers on (individual) wellbeing is ~0.1. Yet cash transfers (among many other interventions) have much less heterogeneous results: more like "0.1 +/- 0.1", not ~"0.6 multiply-or-divide by an integer". It seems important to understand what is going on.
One hope would be that this heterogeneity can be explained in terms of the intervention and length of follow-up. Different studies did (e.g.) different sorts of psychotherapy, did more or less of it, and measured the outcomes at different points afterwards. Once we factor these things into our analysis, this wide distribution seen when looking at the impact of psychotherapy in general sharpens into a clearer picture for any particular psychotherapeutic intervention. One can then deploy this knowledge to assess - in particular - the likely efficacy of a charity like StrongMinds.
The report attempts this enterprise in section 4. I think a fair bottom line is that, despite these efforts, the overall picture is still very cloudy: the best model explains ~12% of the variance in effect sizes. But this best model is still better than no model (more on this later), so one can still use it to make a best guess for psychotherapeutic interventions, even if there remains a lot of uncertainty and spread.
But there could be another explanation for why there's so much heterogeneity: there are a lot of low-quality studies, and low-quality studies tend to report inflated effect sizes. In the worst case, the spread of data suggesting psychotherapy's efficacy is instead a mirage, and the effect size melts under proper scrutiny.
Hence why most systematic reviews do assess the quality of included studies and their risk of bias. Sometimes this is only used to give a mostly qualitative picture alongside the evidence synthesis (e.g. "X% of our studies have a moderate to high risk of bias") and sometimes it is incorporated quantitatively (e.g. a "quality score" of studies included as a predictor/moderator, grouping by "high/moderate/low" risk of bias, etc. - although all are controversial).
HLI's report does not assess the quality of its included studies, although it plans to do so. I appreciate GRADEing 90 studies or whatever is tedious and time consuming, but skipping this step to crack on with the quantitative synthesis is very unwise:[3] any such synthesis could be hugely distorted by low-quality studies. And it's not like this is a mere possibility: I previously demonstrated in the previous meta-analysis that study registration status (one indicator of study quality) explained a lot of heterogeneity, and unregistered studies had on average a three times [!] greater effect size than registered ones.
The report notes it has done some things to help manage this risk. One is cutting "outliers" (g > 2, the teal in the earlier histogram); another is extensive assessment of publication bias/small study effects. These things do help: all else equal, I'd expect bigger studies to be methodologically better ones, so adjusting for small study effects does partially "control" for study quality; I'd also expect larger effect sizes to arise from lower-quality work, so cutting them should notch up the average quality of the studies that remain.
But I do not think they help enough[4] - these are loose proxies for what we seek to understand, so the findings would be unreliable in virtue of this alone until it is properly looked at. Crucially, the risk that these features could confound the earlier moderator analysis has not been addressed:[5] maybe the relationship of (e.g.) "more sessions given → greater effect" is actually due to studies of such interventions tending to be lower quality than the rest. When I looked last time, things like "study size" or "registration status" explained a lot more of the heterogeneity than (e.g.) all of the intervention moderators combined. I suspect the same will be true this time too.
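To make the confounding worry concrete, here is a toy meta-regression on simulated data (my own sketch, not HLI's data or model), in which a "dosage" moderator only looks predictive because dosage happens to be correlated with study size:

```python
# Simulated meta-regression: the true effect does NOT depend on dosage, only on
# a small-study effect, but smaller studies happen to deliver more sessions.
# The naive moderator analysis then credits dosage; adding a study-size proxy
# largely removes that apparent relationship.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
k = 120
n_study = rng.integers(50, 2000, size=k)                    # per-study sample sizes
se = 2 / np.sqrt(n_study)                                   # rough standard errors
dosage = 16 - 4 * np.log10(n_study) + rng.normal(0, 1, k)   # smaller study -> more sessions
g = 0.3 + 3.0 * se + rng.normal(0, se)                      # effect driven by smallness only

for label, X in [("dosage only        ", np.column_stack([dosage])),
                 ("dosage + study size", np.column_stack([dosage, se]))]:
    fit = sm.WLS(g, sm.add_constant(X), weights=1 / se**2).fit()
    print(f"{label}: dosage coefficient = {fit.params[1]:+.3f}")
```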
Publication bias
I originally suggested (6m ago?) that the correction for publication bias/small study effects could be ~an integer division, so I am surprised the correction was a bit less: ~30%. Here's the funnel plot:[6]
Unsurprisingly, huge amounts of scatter, but the asymmetry, although there, does not leap off the page: the envelope of points is pretty rectangular, but you can persuade yourself it's a bit of a parallelogram, and there's a denser part of it which indeed has a trend going down and to the right (so smaller study → bigger effect).
But this only plots effect sizes g < 2 (those red, not teal, in the histogram). If we include all the studies again, the picture looks a lot clearer: the "long tail" of higher effects tends to come from smaller studies, which makes the plot clearly asymmetric.
This effect, visible to the naked eye, also emerges in the statistics. The report uses a variety of numerical methods to correct for publication bias (some very sophisticated). All of them adjust the results much further downwards (to varying degrees) on the full data than when outliers are excluded (table B1, appendix). This has a stark effect on the results - here's the "bottom line" if you take a weighted average of all the different methods, with different approaches to outlier exclusion: red is the full data, green is the outlier exclusion method the report uses.
Needless to say, this choice is highly material to the bottom-line results: without excluding data, SM drops from ~3.6x GD to ~1.1x GD. Yet it doesn't get a look in for the sensitivity analysis, where HLI's "less favourable" outlier method involves taking an average of the other methods (discounting by ~10%), but not doing no outlier exclusion at all (discounting by ~70%).[7]
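For readers who want to see the mechanics, here is a toy version of one simple member of this family of corrections (an Egger-style "PET" regression - I am not claiming it is one the report used), run on simulated data with and without a g > 2 cut, so the effect of that single choice on the implied correction can be inspected directly. My own sketch; not HLI's data, and the report uses a suite of more sophisticated estimators:

```python
# Toy PET/Egger correction: regress effect size on its standard error (weighted
# by precision); the intercept estimates the effect of a hypothetical SE = 0 study.
# Simulated literature with a built-in small-study effect - illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
k = 150
se = rng.uniform(0.08, 0.6, size=k)        # smaller studies -> bigger SEs
g = rng.normal(0.2 + 2.5 * se, se)         # true effect 0.2 + small-study inflation

def pet_corrected(g, se):
    fit = sm.WLS(g, sm.add_constant(se), weights=1 / se**2).fit()
    return fit.params[0]                   # intercept = bias-corrected estimate

keep = g <= 2                              # the report's outlier rule, applied here
for label, gg, ss in [("all data      ", g, se), ("g > 2 excluded", g[keep], se[keep])]:
    naive = np.average(gg, weights=1 / ss**2)
    print(f"{label}: naive {naive:.2f} -> PET-corrected {pet_corrected(gg, ss):.2f}")
```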
Perhaps this is fine if outlier inclusion would be clearly unreasonable. But it's not: cutting data is generally regarded as dubious, and the rationale for doing so here is not compelling. Briefly:
Received opinion is typically that outlier exclusion should be avoided without a clear rationale why the "outliers" arise from a clearly discrepant generating process. If it is to be done, the results of the full data should still be presented as the primary analysis (e.g.).
The cut data by and large doesn't look visually "outlying" to me. The histogram shows a pretty smooth albeit skewed distribution. Cutting off the tail of the distribution at various lengths appears ill-motivated.
Given the interest in assessing small study effects, cutting out the largest effects (which also tend to be the smallest studies) should be expected to attenuate the small study effect (as indeed it does). Yet if our working hypothesis is these effects are large mainly because the studies are small, their datapoints are informative to plot this general trend (e.g. for slightly less small studies which have slightly less inflated results).[8]
The strongest argument given is that, in fact, some numerical methods to correct publication bias give absurd results if given the full data: i.e. one gives an adjusted effect size of -0.6, another -0.2. I could buy an adjustment that drives the effect down to roughly zero, but not one which suggests, despite almost all the data being fairly or very positive, we should conclude from these studies the real effect is actually (highly!) negative.
One could have a long argument on what the most appropriate response is: maybe just keep it, as the weighted average across methods is still sensible (albeit disappointing)? Maybe just drop those methods in particular and do an average of those giving sane answers on the full data? Should we keep g < 2 exclusion but drop p-curve analysis, as it (absurdly?) adjusts the effect slightly upwards? Maybe we should reweight the averaging of different numerical methods by how volatile their results are when you start excluding data? Maybe pick the outlier exclusion threshold which results in the least disagreement between the different methods? Or maybe just abandon numerical correction, and say "there's clear evidence of significant small study effects, which the current state of the art cannot reliably quantify and correct"?
So a garden of forking paths opens before us. All of these are varying degrees of "arguable", and they do shift the bottom line substantially. One reason pre-specification is so valuable is that it ties you to a particular path before getting to peek at the results, avoiding any risk that a subconscious finger on the scale pushes one down a path of still-defensible choices which nonetheless favour a particular bottom line. Even if you didn't pre-specify, presenting your first cut as the primary analysis helps, for nothing-up-my-sleeve reasons.
It may be the pre-specified or initial stab doesn't do a good job of expressing the data, and a different approach does better. Yet making it clear this subsequent analysis is post hoc cautions a reader about potential risk of bias in the analysis.
Happily, HLI did make a protocol for this work, made before they conducted the analysis. Unfortunately, it is silent on whether outlying data would be excluded, or by what criteria. Also unfortunately, because of this (and other things like the extensive discussion in the appendix discussing the value of outlier removal principally in virtue of its impact on publication bias correction), I am fairly sure the analysis with all data included was the first analysis conducted. Only after seeing the initial publication bias corrections did HLI look at the question of whether some data should be excluded. Maybe it should, but if it came second the initial analysis should be presented first (and definitely included in the sensitivity analysis).
There's also a risk the cloud of quantification buries the qualitative lede. Publication bias is known to be very hard to correct, and despite HLI compiling multiple numerical state-of-the-art methods, they starkly disagree on what the correction factor should be (i.e. from <~0 to >100%). So perhaps the right answer is we basically do not know how much to discount the apparent effects seen in the PT literature, given it also appears to be an extremely compromised one, and, if forced to give an overall number, any "numerical bottom line" should have even wider error bars because of this.[9]
Bayesian methods
I previously complained that the guesstimate/BOTEC-y approach HLI used in integrating information from the meta-analysis and the StrongMinds trial data couldn't be right, as it didn't pass various sanity tests: e.g. still recommending SM as highly effective even if you set the trial data to zero effect. HLI now has a much cleverer Bayesian approach to combining sources of information. On the bright side, this is mechanistically much clearer as well as much cleverer. On the downside, the deck still looks pretty stacked.
Starting at the bottom, here's how HLI's Bayesian method compares SM to GD:
The informed prior (in essence) uses the meta-analysis findings with some Monte Carlo to get an expected effect for an intervention with StrongMinds-like traits (e.g. same number of sessions, same deliverer, etc.). The leftmost point of the solid line gives the expectation for the prior: so the prior is that SM is ~4x GD's cost-effectiveness (dashed line).
The x-axis is how much weight one gives to the SM-specific data. Of interest, the line slopes down, so the data gives a negative update on SM's cost-effectiveness. This is because HLI - in anticipation of the Baird/Ozler RCT likely showing disappointing results - discounted the effect derived from the original SM-specific evidence by a factor of 20, so the likelihood is indeed much lower than the prior. Standard theory gives the appropriate weighting of this vs. the prior, so you adjust down a bit, but not a lot, from the prior (dotted line).
Despite impeccable methods, these results are facially crazy. To illustrate:
The rightmost point on the solid line is the result if you completely discount the prior, and only use the stipulated-to-be-bad SM-specific results. SM is still slightly better than GD on this analysis.[10]
If we "follow Bayesian updating" as HLI recommends, the recommendation is surprisingly insensitive to the forthcoming Baird/Ozler RCT having disappointing findings. Eyeballing it, you'd need such a result to be replicated half a dozen times for the posterior to update to SM being roughly on a par with GD.
Setting the forthcoming data to show basically zero effect will still return SM as 2-3x GD.[11] I'd guess you'd need the forthcoming RCT to show astonishingly and absurdly negative results (e.g. SM treatment is worse for your wellbeing than bereavement) to get it to approximate equipoise with GD.
You'd need even stronger adverse findings for the model to update all the way down to SM being ineffectual, rather than merely "less good than GiveDirectly".
I take it most readers would disagree with the model here too - e.g. if indeed the only RCT on StrongMinds is basically flat, that should be sufficient to demote SM from putative "top charity" status.
I think I can diagnose the underlying problem: Bayesian methods are very sensitive to the stipulated prior. In this case, the prior is likely too high, and definitely too narrow/overconfident. See this:
Per the dashed and dotted lines in the previous figure, the "GiveDirectly bar" is fractionally below the blue dashed line (the point estimate of the stipulated-SM data). The prior distribution is given in red. So the expectation (red dashed line) is indeed ~4x further from the origin (see above).
The solid red curve gives the distribution. Eyeballing the integrals reveals the problem: the integral of this distribution from the blue dashed line to infinity gives the model's confidence that psychotherapy interventions would be more cost-effective than GD. This is at least 99% of the area, if not 99.9% to 99.99%+. A fortiori, this prior asserts it is essentially certain the intervention is beneficial (total effect > 0).
I don't think anyone should think that any intervention is P > 0.99 more cost-effective than GiveDirectly (or P < 0.0001, or whatever, that it is in fact harmful) as a prior,[12] but if one did, it would indeed take masses of evidence to change one's mind. Hence the very sluggish moves in response to adverse data (the purple line suggests the posterior is also 99%+ confident SM is better than GiveDirectly).
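For concreteness, the kind of integral involved, with a normal stand-in for the prior (the report's actual informed prior is a simulated, non-normal distribution; only the mean comes from the report, and the SD and bar placement below are my assumptions):

```python
# How much prior probability lies above the 'GiveDirectly bar'? Toy normal
# stand-in for the informed prior - illustrative only.
from scipy import stats

prior_mean, prior_sd = 1.49, 0.35   # mean from the report; SD assumed
gd_bar = 0.4                        # roughly where the stipulated-SM data / GD bar sits (assumed)

print(f"P(beats GD)         = {1 - stats.norm.cdf(gd_bar, prior_mean, prior_sd):.4f}")
print(f"P(total effect < 0) = {stats.norm.cdf(0.0, prior_mean, prior_sd):.2e}")
```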
I think I can also explain the underlying problem of this underlying problem. HLI constructs its priors exclusively from its primary meta-analytic model (albeit adapted to match the intervention of interest, and recalculated excluding any studies done on this intervention to avoid double counting). Besides the extra uncertainty (so spread) likely implied by a variety of factors covered in the sensitivity analysis, in real life our prior would be informed by other things too: the prospect that entire literatures can be misguided, a general sense (at least for me) that cash transfers are easy to beat in principle but much harder in practice, and so on.
In reality, our prior-to-seeing-the-meta-analysis prior would be very broad and probably reasonably pessimistic, and (even if I'm wrong about the shortcomings I suggest earlier) the "update" on reading it would be a bit upwards, and a little narrower, but not by that much. In turn, the "update" on seeing (e.g.) disappointing RCT results for a given PT intervention would be a larger shift downwards, netting out that this was unlikely better than GiveDirectly after all.
If the Bayesian update was meant only to be a neat illustration, I would have no complaint. But instead the bottom-line recommendations and assessments rely upon it - that readers should indeed adopt the supposed prior the report proposes about the efficacy of PT interventions in general. Crisply, I doubt the typical reader seriously believes (e.g.) that basically any psychotherapy intervention in LMICs, so long as cost per patient is <$100, is a ~certain bet to beat cash transfers. If not, they should question the report's recommendations too.
Summing up
Criticising is easier than doing better. But I think this is a case where a basic qualitative description tells the appropriate story, the sophisticated numerical methods are essentially a "bridge too far" given the low quality of what they have to work with, and so they confuse rather than clarify the matter. In essence:
The literature on PT in LMICs is a complete mess. Insofar as more sense can be made from it, the most important factors appear to belong to the studies investigating it (e.g. their size) rather than qualities of the PT interventions themselves.
Trying to correct the results of a compromised literature is known to be a nightmare. Here, the qualitative evidence for publication bias is compelling. But quantifying what particular value of "a lot?" the correction should be is fraught: numerically, the methods here disagree with one another dramatically, and prove highly sensitive to choices on data exclusion.
Regardless of how PT looks in general, StrongMinds in particular is looking less and less promising. Although initial studies looked good, they had various methodological weaknesses, and a forthcoming RCT with much higher methodological quality is expected to deliver disappointing results.
The evidential trajectory here is all too common, and the outlook typically bleak. It is dubious StrongMinds is a good pick even among psychotherapy interventions (picking one at random which doesn't have a likely-bad-news RCT imminent seems a better bet). Although pricing different interventions is hard, it is even more dubious SM is close to the frontier of "very well evidenced" vs. "has very promising results" plotted out by things like AMF, GD, etc. HLI's choice to nonetheless recommend SM again this giving season is very surprising. I doubt it will weather hindsight well.
- ^
All of the figures are taken from the report and appendix. The transparency is praiseworthy, although it is a pity that, despite largely looking at the right things, the report often mistakes which conclusions to draw.
- ^
With all the well-worn caveats about measuring well-being.
- ^
The Cochrane handbook section on meta-analysis is very clear on this (but to make it clearer, I add emphasis)
10.1 Do not start here!
It can be tempting to jump prematurely into a statistical analysis when undertaking a systematic review. The production of a diamond at the bottom of a plot is an exciting moment for many authors, but results of meta-analyses can be very misleading if suitable attention has not been given to formulating the review question; specifying eligibility criteria; identifying and selecting studies; collecting appropriate data; considering risk of bias; planning intervention comparisons; and deciding what data would be meaningful to analyse. Review authors should consult the chapters that precede this one before a meta-analysis is undertaken.
- ^
As a WIP, the data and code for this report are not yet out, but in my previous statistical noodling on the last one, both study size and registration status significantly moderated the effect downwards when included together, suggesting indeed the former isn't telling you everything re. study quality.
- ^
The report does mention later (S10.2) controlling a different analysis for study quality, when looking at the effect of sample size itself:
To test for scaling effects, we add sample size as a moderator into our meta-analysis and find that for every extra 1,000 participants in a study the effect size decreases (non-significantly) by -0.09 (95% CI: -0.206, 0.002) SDs. Naively, this suggests that deploying psychotherapy at scale means its effect will substantially decline. However, when we control for study characteristics and quality, the coefficient for sample size decreases by 45% to -0.055 SDs (95% CI: -0.18, 0.07) per 1,000 increase in sample size. This suggests to us that, beyond this finding being non-significant, the effect of scaling can be controlled away with quality variables, more of which that we haven't considered might be included.
I don't think this analysis is included in the appendix or similar, but later text suggests the "study quality" correction is a publication bias adjustment. This analysis is least fruitful when applied to study scale, as measures of publication bias are measures of study size: so finding the effects of study scale are attenuated when you control for a proxy of study scale is uninformative.
What would be informative is the impact measures of "study scale" or publication bias have on the coefficients for the primary moderators. Maybe they too could end up "controlled away with quality variables, more of which that we haven't considered might be included"?
- ^
There are likely better explanations of funnel plots etc. online, but my own attempt is here.
- ^
The report charts a much wiser course on a different "Outlier?" question: whether to include very long follow-up studies, where exclusion would cut the total effect in half. I also think including everything here is fine, but the report's discussion in S4.2 clearly articulates the reason for concern, displays what impact inclusion vs. exclusion has, and carefully interrogates the outlying studies to see whether they have features (beyond that they report "outlying" results) which warrant exclusion. They end up going "half-and-half", but consider both full exclusion and inclusion in sensitivity analysis.
- ^
If you are using study size as an (improvised) measure of study quality, excluding the smallest studies because on an informal read they are particularly low quality makes little sense: this is the trend you are interested in.
- ^
A similar type of problem crops up when one is looking at the effect of "dosage" on PT efficacy.
The solid lines are the fit (blue linear, orange log) on the full data, whilst the dashed lines are fits with extreme values of dosage - small or large - excluded (purple). The report freely concedes its choices here are very theory-led rather than data-driven - and it is also worth saying that getting more of a trend here makes a rod for SM and Friendship Bench's back, as these deliver smaller numbers of sessions than the average, so adjusting with the dashed lines and not the solid ones reduces the expected effect.
Yet the main message I would take from the scatter plot is that the data indeed looks very flat, and there is no demonstrable dose-response relationship for PT. Qualitatively, this isn't great for face validity.
- ^
To its credit, the write-up does highlight this, but does not seem to appreciate the implications are crazy: any PT intervention, so long as it is cheap enough, should be thought better than GD, even if studies upon it show very low effect size (which would usually be reported as a negative result, as almost any study in this field would be underpowered to detect effects as low as are being stipulated):
Therefore, even if the StrongMinds-specific evidence finds a small total recipient effect (as we present here as a placeholder), and we relied solely on this evidence, then it would still result in a cost-effectiveness that is similar or greater than that of GiveDirectly because StrongMinds programme is very cheap to deliver.
- ^
The report describes this clearly itself, but seems to think this is a feature rather than a bug (my emphasis):
Now, one might argue that the results of the Baird et al. study could be lower than 0.4 WELLBYs. But - assuming the same weights are given to the prior and the charity-specific data as in our analysis - even if the Baird et al. results were 0.05 WELLBYs (extremely small), then the posterior would still be 1.49 * 0.84 + 0.05 * 0.16 = 1.26 WELLBYs; namely, very close to our current posterior (1.31 WELLBYs).
- ^
I'm not even sure that "P > 0.99 better than GD" would be warranted as a posterior even for a GiveWell-recommended top charity, and I'd guess the GW staff who made the recommendation would often agree.
Seemed not relevant enough to the topic, and too apt to be highly inflammatory, to be worthwhile to bring up.
I agree - all else equal - you'd rather have a flatter distribution of donors for the diversification (in various senses) benefits. I doubt this makes it an important objective all things considered.
The main factor on the other side of the scale is scale itself: a "megadonor" can provide a lot of support. This seems to be well illustrated by your original examples (Utility Farm and Rethink). Rethink started later, but grew 100x larger, and faster too. I'd be surprised if folks at UF would not prefer Rethink's current situation, trajectory - and fundraising headaches - to their own.
In essence, there should be some trade-off between "aggregate $" and "diversity of funding sources" (however cashed out) - pricing in (e.g.) financial risks/volatility for orgs, negative externalities on the wider ecosystem, etc. I think the trade between "perfectly singular support" and "ideal diversity of funding sources" would be much less than an integer factor, and more like 20% or so (i.e. maybe better to get a budget of 800k from a reasonably-sized group than 1M from a single donor, but not better than 2M from the same).
I appreciate the recommendation here is to complement existing practice with a cohort of medium-sized donors, but the all-things-considered assessment is important to gauge the value of marginal (or not-so-marginal) moves in this direction. Getting (e.g.) 5000 new people giving 20k a year seems a huge lift to me. Even if that happens, OP still remains the dominant single donor (e.g. it gave roughly the amount this hypothetical cohort would to animal causes alone in 2022). The diffuse "ecosystem-wide" benefits of these additional funders struggle, by my lights, to vindicate the effort (and opportunity costs) of such a push.
Iâm not sure I count as âseniorâ, but I could understand some reluctance even if âall expenses paidâ.
I consider my EAG(x) participation as an act of community service. Although there are diffuse benefits, I do not get that much out of it myself, professionally speaking. This is not that surprising: contacts at EAG (or knowledge at EAG, etc. etc.) matter a lot less on the margin of several years spent working in the field than just starting out. I spend most of my time at EAG trying to be helpfulâtypically, through the medium of several hours of 1-1s each day. I find this fulfilling, but not leisurely.
So from the selfish perspective EAG feels pretty marginal either re. âprofessional developmentâ or âfunâ. Iâd guess many could be dissuaded by small frictions. Non-hub locations probably fit the bill: âOh, I could visit [hub] for EAG, and meet my professional contacts in [hub] whilst Iâm in townâ is a lot more tempting to the minds eye than a dedicated trip for EAG alone.
Hello Jason,
With apologies for the delay. I agree with you that I am asserting HLI's mistakes have further "aggravating factors" which I also assert invite highly adverse inference. I had hoped the links I provided gave clear substantiation, but demonstrably not (my bad). Hopefully my reply to Michael makes them somewhat clearer, but in case not, I give a couple of examples below with as good an explanation as I can muster.
I will also be linking and quoting extensively from the Cochrane handbook for systematic reviews - so hopefully, even if my attempt to clearly explain the issues fails, a reader can satisfy themselves that my view on them agrees with expert consensus. (Rather than, say, "Cantankerous critic with idiosyncratic statistical tastes flexing his expertise to browbeat the laity into acquiescence".)
0) Per your remarks, there are various background issues around reasonableness, materiality, timeliness etc. I think my views basically agree with yours. In essence: I think HLI is significantly "on the hook" for work (such as the meta-analysis) it relies upon to make recommendations to donors - who will likely be taking HLI's representations on its results and reliability (cf. HLI's remarks about its "academic research", "rigour" etc.) on trust. Discoveries which threaten the "bottom line numbers" or the overall reliability of this work should be addressed with urgency and robustness appropriate to their gravity. "We'll put checking this on our to-do list" seems fine for an analytic choice which might be dubious but of unclear direction and small expected magnitude. As you say, a typo whose correction reduces the bottom-line efficacy by ~20% should be fixed promptly.
The two problems I outlined 6 months ago should each have prompted withdrawal/suspension of both the work and the recommendation unless and until they were corrected.[1] Instead, HLI has not made appropriate corrections, and persists in misguiding donations and misrepresenting the quality of its research on the basis of work it has partly acknowledged (and which reasonable practitioners would overwhelmingly concur) was gravely compromised.[2]
1.0) Publication bias/small study effects
It is commonplace in the literature for smaller studies to show different (typically larger) effect sizes than large studies. This is typically attributed to a mix of factors which differentially inflate effect size in smaller studies (see), perhaps the main one being publication bias: although big studies are likely to be published "either way", investigators may not finish (or journals may not publish) smaller studies reporting negative results.
It is extremely well recognised that these effects can threaten the validity of meta-analysis results. If you are producing something (very roughly) like an "average effect size" from your included studies, studies being selected for showing a positive effect means the average is inflated upwards. This bias is very difficult to reliably adjust for or "patch" (more later), but it can easily be large enough to mean "Actually, the treatment has no effect, and your meta-analysis is basically summarizing methodological errors throughout the literature".
Hence why most work on this topic stresses the importance of arduous efforts in prevention (e.g. trying really hard to find "unpublished" studies) and diagnosis (i.e. carefully checking for statistical evidence of this problem) rather than "cure" (see e.g.). If a carefully conducted analysis nonetheless finds stark small study effects, this - rather than the supposed ~"average" effect - would typically be (and should definitely be) the main finding: "The literature is a complete mess - more, and much better, research needed".
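To make the mechanism vivid, here is a toy simulation (entirely my own construction, not HLI's data): even when the true effect is exactly zero, a literature filtered through "only significant positive results get published" yields a naive pooled average that looks substantial.

```python
# Toy simulation of publication bias: true effect = 0, but only studies clearing
# p < 0.05 in the positive direction survive into the "published" literature.
import numpy as np

rng = np.random.default_rng(0)
true_effect, n_per_arm, n_studies = 0.0, 25, 500
se = np.sqrt(2 / n_per_arm)              # approx. SE of a standardised mean difference
d_hat = rng.normal(true_effect, se, n_studies)

published = d_hat[d_hat / se > 1.96]     # only "positive and significant" results survive
print(f"true effect: {true_effect}")
print(f"mean over all {n_studies} studies: {d_hat.mean():.2f}")
print(f"mean over the {published.size} published studies: {published.mean():.2f}")
```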
As in many statistical matters, a basic look at your data can point you in the right direction. For meta-analysis, the standard tool is a forest plot:
To orientate: each row is a study (presented in order of increasing effect size), and the horizontal scale is effect size (where further right = greater effect size favouring the intervention). The horizontal bar for each study gives the confidence interval for the effect size, with the middle square marking the central estimate (also given in the rightmost column). The diamond right at the bottom is the pooled effect size - the (~~)[3] average effect across studies mentioned earlier.
Here, the studies are all over the map, many of which do not overlap with one another, nor with the pooled effect size estimate. In essence, dramatic heterogeneity: the studies are reporting very different effect sizes from one another. Heterogeneity is basically a fact of life in meta-analysis, but a forest plot like this invites curiosity (or concern) about why effects are varying quite this much. [I'm going to skip discussion of formal statistical tests/metrics for things like this for clarity - you can safely assume a) yes, you can provide more rigorous statistical assessment of "how much" besides "eyeballing it", although visually obvious things are highly informative; b) the things I mention you can see are indeed (highly) statistically significant etc.]
There are some hints from this forest plot that small study effects could have a role to play. Although very noisy, larger studies (those with narrower horizontal lines, because bigger study ~ less uncertainty in effect size) tend to be higher up the plot and to have smaller effects. There is another plot designed to look at this better - a funnel plot.
To orientate: each study is now a point on a scatterplot, with effect size again on the x-axis (right = greater effect). The y-axis is now the standard error: bigger studies have greater precision, and so lower sampling error, so are plotted higher on the y-axis. Each point is a single study - all being well, the scatter should look like a (symmetrical) triangle or funnel like those drawn on the plot.
All is not well here. The scatter is clearly asymmetric and sloping to the right - smaller studies (towards the bottom of the graph) tend towards greater effect sizes. The lines drawn on the plot make this even clearer. Briefly:
The leftmost "funnel" with shaded wings is centered on an effect size of zero (i.e. zero effect). The white middle triangle contains findings which would give a p value of > 0.05, and the shaded wings correspond to a p value between 0.05 ("statistically significant") and 0.01: it is an upward-pointing triangle because bigger studies can detect smaller differences from zero as "statistically significant" than smaller ones can. There appears to be clustering in the shaded region, suggestive that studies may be being tweaked to get them "across the threshold" of statistically significant effects.
The rightmost "funnel" without shading is centered on the pooled effect estimate (0.5). Within the triangle is where you would expect 95% of the scatter of studies to fall in the absence of heterogeneity (i.e. if there were just one true effect size, and the studies varied from it just thanks to sampling error). Around half are outside this region.
The red dashed line is the best-fit line through the scatter of studies. If there weren't small study effects, it would be basically vertical. Instead, it slopes off heavily to the right.
Although a very asymmetric funnel plot is not proof positive of publication bias, findings like this demand careful investigation and cautious interpretation (see generally). It is challenging to assess exactly "how big a deal is it, though?": statistical adjustment for biases in the original data is extremely fraught.
But we are comfortably in "big deal" territory: this finding credibly up-ends HLI's entire analysis:
a) There are different ways of getting a "pooled estimate" (~~average, or ~~typical effect size): random effects (where you assume there is a distribution of true effects from which each study samples), vs. fixed effects (where there is a single value for the true effect size). Random effects are commonly preferred as - in reality - one expects the true effect to vary, but the results are much more vulnerable to any small study effects/publication bias (see generally). Comparing the random effects vs. the fixed effect estimate can give a quantitative steer on the possible scale of the problem, as well as guide subsequent analysis.[4] Here, the random effects estimate is 0.52, whilst the fixed one is less than half the size: 0.18.
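For the curious, here is a minimal sketch of that comparison (placeholder inputs `d` and `se`; not HLI's code, just the textbook inverse-variance and DerSimonian-Laird formulas):

```python
# Fixed-effect vs. random-effects pooled estimates; with small-study effects the
# random-effects estimate (which weights studies more evenly) sits above the fixed one.
import numpy as np

def pooled_estimates(d, se):
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1 / se**2                                    # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - fixed)**2)                   # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)          # DerSimonian-Laird between-study variance
    w_re = 1 / (se**2 + tau2)                        # random-effects weights
    random = np.sum(w_re * d) / np.sum(w_re)
    return fixed, random

# Per the text, here the random-effects estimate is ~0.52 whilst the fixed one is ~0.18.
```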
b) There are other statistical methods you could use (more later). One of the easier to understand (but also one of the most conservative) goes back to the red dashed line in the funnel plot. You could extrapolate from it to the point where standard error = 0: i.e. the predicted effect of an infinitely large (so infinitely precise) study - and so also where the "small study effect" is zero. There are a few different variants of these "regression methods", but the ones I tried predict effect sizes for such a hypothetical study of between 0.17 and 0.05. So, quantitatively, 70-90% cuts of effect size are on the table here.
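A minimal sketch of this kind of regression method (PET-style, weighted least squares; again `d` and `se` are placeholders rather than the actual data or HLI's code):

```python
# Regress effect size on standard error, weighting by inverse variance; the intercept
# estimates the effect of a hypothetical infinitely precise study (SE = 0).
import numpy as np

def pet_intercept(d, se):
    d, se = np.asarray(d, float), np.asarray(se, float)
    X = np.column_stack([np.ones_like(se), se])       # intercept + slope on SE
    W = np.diag(1 / se**2)                            # inverse-variance weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)  # weighted least squares
    return beta[0], beta[1]                           # (corrected effect, small-study slope)

# A strongly positive slope indicates small-study effects; per the text, variants of
# this approach put the corrected effect at roughly 0.05-0.17 rather than ~0.5.
```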
c) A reason why regression methods are conservative is that they attribute as much variation in reported results as possible to differences in study size. Yet there could be alternative explanations for this besides publication bias: maybe smaller studies have different patient populations with (genuinely) greater efficacy, etc.
However, this statistical confounding can go the other way. HLI is not presenting simple meta-analytic results, but rather meta-regressions: where the differences in reported effect sizes are being predicted by differences between and within the studies (e.g. follow-up time, how much therapy was provided, etc.). One of HLI's findings from this work is that psychotherapy with Strongminds-like traits is ~70% more effective than psychotherapy in general (0.8 vs. 0.46). If this is because factors like "group or individual therapy" correlate with study size, the real story could simply be: "Strongminds-like traits are indicators for methodological weaknesses which greatly inflate apparent effect size, rather than for a more effective therapeutic modality." In HLI's analysis, the latter is presumed, giving about a ~10% uplift to the bottom-line results.[5]
1.2) A major issue, and a major mistake to miss
So this is a big issue, and one which would be revealed by standard approaches. HLI instead used a very non-standard approach (see), novel - as far as I can tell - to existing practice and, unfortunately, inappropriate (cf. point 5): it gives ~a 10-15% discount (although I'm not sure this has been used in the Strongminds assessment; it is in the psychotherapy one).
I came across these problems ~6 months ago, prompted by a question from Ryan Briggs (someone with considerably greater expertise than my own) asking after the forest and funnel plots. I also started digging into the data in general at the same time, and noted the same key points explained laboriously above: there looks to be marked heterogeneity and small study effects, they look very big, and they call the analysis results into question. Long story short, they said they would take a look at it urgently and then report back.
This response is fine, but as my comments then indicated, I did have (and I think reasonably had) HLI on pretty thin ice/"epistemic probation" after finding these things out. You have to make a lot of odd choices to end up this far from normal practice, and some surprising oversights besides, to end up missing problems which would appear to greatly undermine a positive finding for Strongminds.[6]
1.3) Maintaining this major mistake
HLI fell through this thin ice after its follow-up. Their approach was to try a bunch of statistical techniques to adjust for publication bias (excellent), do the same for their cash transfers meta-analysis (sure), then use the relative discounts between them to get an adjustment for psychotherapy vs. cash transfers (good, especially as adding these directly into the multi-level meta-regressions would be difficult). Further, they provided full code and data for replication (great). But the results made no sense whatsoever:
To orientate: each row is a different statistical technique applied to the two meta-analyses (more later). The x-axis is the "multiple" of Strongminds vs. cash transfers, and the black line is at 9.4x, the previous "status quo" value. Bars shorter than this mean adjusting for publication bias results in an overall discount for Strongminds, and vice versa.
The cash transfers funnel plot looks like this:
Compared to the psychotherapy one, it basically looks fine: the scatter looks roughly like a funnel, and there is no massive trend towards smaller studies = bigger effects. So how could so many statistical methods discount the "obvious small study effect" meta-analysis less than the "no apparent small study effect" meta-analysis, giving an increased multiple? As I said at the time, the results look like nonsense to the naked eye.
One problem was a coding error in two of the statistical methods (blue and pink bars). The bigger problem is that the way the comparisons are being done is highly misleading.
Take a step back from all the dividing going on and just look at the effect sizes. The basic, nothing-fancy, random effects model applied to the psychotherapy data gives an effect size of 0.5. If you take the average across all the other model variants, you get ~0.3, a 40% drop. For the cash transfers meta-analysis, the basic model gives ~0.1, and the average of all the other models is ~0.09, a 10% drop. So in fact you are seeing - as you should - bigger discounts when adjusting the psychotherapy analysis than the cash transfers analysis. This is lost by how the divisions are being done, which largely "play off" multiple adjustments against one another (see, pt. 2). What the graph should look like is this:
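(As a quick sanity check of the paragraph above, using its round numbers, the comparison I think should be made is within each meta-analysis first, and only then across them:)

```python
# Compute each meta-analysis's own adjusted/unadjusted ratio, then compare the ratios,
# rather than dividing one adjusted figure by the other and letting adjustments
# "play off" against each other. Numbers are the rounded ones from the text.
pt_basic, pt_adjusted_avg = 0.50, 0.30   # psychotherapy: basic random effects vs. average of corrections
ct_basic, ct_adjusted_avg = 0.10, 0.09   # cash transfers: ditto

pt_discount = 1 - pt_adjusted_avg / pt_basic   # ~40% discount
ct_discount = 1 - ct_adjusted_avg / ct_basic   # ~10% discount
print(f"psychotherapy discount ~{pt_discount:.0%}, cash transfer discount ~{ct_discount:.0%}")
# The literature with the obvious small-study effect takes the bigger hit, as expected.
```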
Two things are notable: 1) the different models tend to point to a significant drop (~30-40% on average) in effect size; 2) there is a lot of variation in the discount - from ~0 to ~90% (a visual illustration of why this is known to be very hard to reliably "adjust" for). I think these results oblige something like the following:
Re. the write-up: at least include the forest and funnel plots, alongside a description of why they are concerning. Some "best guess" correction from the above should also be included, noting it has a (very) wide range. Probably warrants "back to the drawing board" given the reliability issues.
Re. the overall recommendation: at least a very heavy asterisk placed beside the recommendation. Both the adjustment and the uncertainty should also be highlighted in front-facing materials (e.g. "tentative suggestion" vs. "recommendation"). Probably warrants withdrawal.
Re. general reflection: I think a reasonable evaluator - beyond directional effects - would be concerned about the "near"(?) miss property of having a major material issue not spotted before pushing a strong recommendation, "phase 1 complete/mission accomplished", etc. - especially when found by a third party many months after initial publication. They might also be concerned about the direction of travel. When published, the multiplier was 12x; with spillovers, it falls to 9.5x; with spillovers and the typo corrected, it falls to 7.5x; with a 30% best-guess correction for publication bias, we're now at 5.3x. Maybe any single adjustment is not recommendation-reversing, but in concert they are, and the track record suggests the next one is more likely to move further down rather than back up.
What happened instead, 5 months ago, was HLI saying it would read some more and discuss among themselves whether my take on the comparators was the right one (it is, and this is not reasonably controversial, e.g. 1, 2, cf. fn4). Although looking at publication bias is part of their intended "refining" of the Strongminds assessment, nothing concrete has been done yet.
Maybe I should have chased, but the exchange on this (alongside the other thing) made me lose faith that HLI was capable of reasonably assessing and appropriately responding to criticisms of their work when material to their bottom line.
2) The cost-effectiveness guesstimate.
[Readers will be relieved ~no tricky stats here]
As I was looking at the meta-analysis, I added my attempt at "adjusted" effect sizes into the CEA to see what impact they had on the results. To my surprise, not very much. Hence my previous examples: "Even if the meta-analysis has zero effect the CEA still recommends Strongminds as several times GD", and "You only get to equipoise with GD if you set all the effect sizes in the CEA to near-zero."
I noted this alongside my discussion of the meta-analysis 6 months ago. Earlier remarks from HLI suggested they accepted these findings were diagnostic of something going wrong with how the CEA aggregates information (but that fixing it would be done, though not as a priority); more recent remarks suggest more "doubling down".
In any case, these findings are indeed diagnostic of a lack of face validity. You obviously would, in fact, be highly sceptical if the meta-analysis found psychotherapy in general had zero (or harmful!) effect, yet a particular psychotherapy intervention was nonetheless extremely effective. The (pseudo-)Bayesian gloss on why is that the distribution of reported effect sizes gives additional information on the likely size of the "real" effects underlying them (cf. the heterogeneity discussed above). A bunch of weird discrepancies among them, if hard to explain by intervention characteristics, increases the suspicion that weird distortions, rather than true effects, underlie the observations. So increasing discrepancy between indirect and direct evidence should reduce the estimated effect size beyond its impact on any weighted average.
It does not help that the findings as-is are highly discrepant and generally weird. Among many examples:
Why are the Strongminds-like trials in the direct evidence showing among the greatest effects of any of the studies included - and ~1.5x-2x the effect of a regression prediction for studies with Strongminds-like traits?
Why are the most Strongminds-y studies included in the meta-analysis marked outliers - even after "correction" for small study effects?
What happened between the original Strongminds Phase 2 and the Strongminds RCT to increase the intervention efficacy by 80%?
How come the only study which compares psychotherapy to a cash transfer comparator is also the only study which gives a negative effect size?
I don't know what the magnitude of the directional "adjustment" would be, as this relies on a specific understanding of the likelier explanations for the odd results (I'd guess a 10%+ downward correction assuming I'm wrong about everything else - obviously, much more if indeed the vast bulk of variation in effects can be explained by sample size +/- registration status of the study). Alone, I think it mainly points to the quantitative engine needing an overhaul, and the analysis being known-unreliable until it gets one.
In any case, it seems urgent and important to understand and fix this. The numbers are being widely used and relied upon (probably all of them need at least a big public asterisk pending development of a more reliable technique). It seems particularly unwise to be reassured by "Well sure, this is a downward correction, but the CEA still gives a good bottom-line multiple", as the bottom-line number may not be reasonable, especially conditioned on different inputs. Even more so to persist in doing so 6 months after being made aware of the problem.
- ^
These are mentioned in 3a and 3b of my reply to Michael. Point 1 there (kind of related to 3a) would on its own warrant immediate retraction, but that is not a case (yet) of "maintained" error.
- ^
So in terms of "epistemic probation", I think this was available 6 months ago, but closed after flagrant and ongoing "violations".
- ^
One quote from the Cochrane handbook feels particularly apposite:
Do not start here!
It can be tempting to jump prematurely into a statistical analysis when undertaking a systematic review. The production of a diamond at the bottom of a plot is an exciting moment for many authors, but results of meta-analyses can be very misleading if suitable attention has not been given to formulating the review question; specifying eligibility criteria; identifying and selecting studies; collecting appropriate data; considering risk of bias; planning intervention comparisons; and deciding what data would be meaningful to analyse. Review authors should consult the chapters that precede this one before a meta-analysis is undertaken.
- ^
In the presence of heterogeneity, a random-effects meta-analysis weights the studies relatively more equally than a fixed-effect analysis (see Chapter 10, Section 10.10.4.1). It follows that in the presence of small-study effects, in which the intervention effect is systematically different in the smaller compared with the larger studies, the random-effects estimate of the intervention effect will shift towards the results of the smaller studies. We recommend that when review authors are concerned about the influence of small-study effects on the results of a meta-analysis in which there is evidence of between-study heterogeneity (I2 > 0), they compare the fixed-effect and random-effects estimates of the intervention effect. If the estimates are similar, then any small-study effects have little effect on the intervention effect estimate. If the random-effects estimate has shifted towards the results of the smaller studies, review authors should consider whether it is reasonable to conclude that the intervention was genuinely different in the smaller studies, or if results of smaller studies were disseminated selectively. Formal investigations of heterogeneity may reveal other explanations for funnel plot asymmetry, in which case presentation of results should focus on these. If the larger studies tend to be those conducted with more methodological rigour, or conducted in circumstances more typical of the use of the intervention in practice, then review authors should consider reporting the results of meta-analyses restricted to the larger, more rigorous studies.
- ^
This is not the only problem in HLI's meta-regression analysis. Analyses here should be pre-specified (especially if intended as the primary result rather than some secondary exploratory analysis), to limit the risk of inadvertently cherry-picking a model which gives a preferred result. Cochrane (see):
Authors should, whenever possible, pre-specify characteristics in the protocol that later will be subject to subgroup analyses or meta-regression. The plan specified in the protocol should then be followed (data permitting), without undue emphasis on any particular findings (see MECIR Box 10.11.b). Pre-specifying characteristics reduces the likelihood of spurious findings, first by limiting the number of subgroups investigated, and second by preventing knowledge of the studies' results influencing which subgroups are analysed. True pre-specification is difficult in systematic reviews, because the results of some of the relevant studies are often known when the protocol is drafted. If a characteristic was overlooked in the protocol, but is clearly of major importance and justified by external evidence, then authors should not be reluctant to explore it. However, such post-hoc analyses should be identified as such.
HLI does not mention any pre-specification, and there is good circumstantial evidence of a lot of this work being ad hoc re. "Strongminds-like traits". HLI's earlier analysis of psychotherapy in general, using most (all?) of the same studies as in their Strongminds CEA (4.2, here), had different variables used in a meta-regression on intervention properties (table 2). It seems likely the change of model happened after study data was extracted (the lack of significant prediction, and the inclusion of a large number of variables for a relatively small number of studies, would be further concerns). This modification seems to favour the intervention: I think the earlier model, if applied to Strongminds, gives an effect size of ~0.6.
- ^
Briggs' comments have a similar theme, suggesting my attitude does not arise solely from particular cynicism on my part.
8%, but perhaps expected drift of a factor of two either way if I thought about it for a few hours vs. a few minutes.
Hello Michael,
Thanks for your reply. In turn:
1:
HLI has, in fact, put a lot of weight on the d = 1.72 Strongminds RCT. As table 2 shows, you give a weight of 13% to it - joint highest out of the 5 pieces of direct evidence. As there are ~45 studies in the meta-analytic results, this means this RCT is being given equal or (substantially) greater weight than any other study you include. For similar reasons, the Strongminds phase 2 trial is accorded the third highest weight out of all studies in the analysis.
HLI's analysis explains the rationale behind the weighting as "using an appraisal of its risk of bias and relevance to StrongMinds' present core programme". Yet table 1A notes the quality of the 2020 RCT is "unknown" - presumably because Strongminds has "only given the results and some supporting details of the RCT". I don't think it can be reasonable to assign the highest weight to an (as far as I can tell) unpublished, non-peer-reviewed, unregistered study conducted by Strongminds on its own effectiveness reporting an astonishing effect size - before it has even been read in full. It should be dramatically downweighted or wholly discounted until then, rather than included at face value with a promise HLI will follow up later.
Risk of bias in this field in general is massive: effect sizes commonly melt with improving study quality. Assigning ~40% of a weighted average of effect size to a collection of 5 studies - 4 [actually 3, more later] of which are (marked) outliers in effect size, and 2 of which were conducted by the charity itself - is unreasonable. This can be dramatically demonstrated from HLI's own data:
One thing I didn't notice last time I looked is that HLI did code variables on study quality for the included studies, although none of them seem to be used in any of the published analysis. I have some good news, and some very bad news.
The good news is that the first such variable I looked at, ActiveControl, is a significant predictor of greater effect size. Studies with better controls report greater effects (roughly 0.6 versus 0.3). This effect is significant (p = 0.03) although small (10% of the variance) and difficult - at least for me - to explain: I would usually expect worse controls to widen the gap between control and intervention group, not narrow it. In any case, this marker of study quality definitely does not explain away HLI's findings.
The second variable I looked at was 'UnpubOr(pre?)reg'.[1] As far as I can tell, coding 1 means something like "the study was publicly registered" and 0 means it wasn't (I'm guessing 0.5 means something intermediate like retrospective registration or similar) - in any case, this variable correlates extremely closely (>0.95) with my own coding of whether a study mentions being registered or not, after reviewing all of them myself. If so, using it as a moderator makes devastating reading:[2]
To orientate: in "Model results" the intercept value gives the estimated effect size when the "unpub" variable is zero (as I understand it, ~unregistered studies), so d ~ 1.4 (!) for this set of studies. The row below gives the change in effect if you move from "unpub = 0" to "unpub = 1" (i.e. from ~unregistered to registered studies): this drops the effect size by 1, so registered studies give effects of ~0.3. In other words, unregistered and registered studies give dramatically different effects: study registration reduces expected effect size by a factor of 3. [!!!]
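For readers who want to see how an intercept and coefficient of this kind are produced, here is a minimal sketch of a moderator analysis (a simple inverse-variance weighted regression; a full meta-regression would also estimate a between-study variance component, and `d`, `se` and `registered` are placeholders, not the actual data):

```python
# Regress effect size on a 0/1 "registered" indicator, weighting by inverse variance.
# The intercept reads as the expected effect of an unregistered study; the coefficient
# as the change in effect when moving to a registered one.
import numpy as np

def registration_metareg(d, se, registered):
    d = np.asarray(d, float)
    w = 1 / np.asarray(se, float)**2
    X = np.column_stack([np.ones(len(d)), np.asarray(registered, float)])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * d))
    return {"unregistered effect (intercept)": beta[0],
            "change when registered (coefficient)": beta[1]}

# Per the text: intercept ~1.4, coefficient ~-1, i.e. registered studies come out ~0.3-0.4.
```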
The other statistics provided deepen the concern. The included studies have a very high level of heterogeneity (~their effect sizes vary much more than they should by chance). Although HLI attempted to explain this variation with various meta-regressions using features of the intervention, follow-up time, etc., these models left the great bulk of the variation unexplained. Although not like-for-like, here a single indicator of study quality provides compelling explanation for why effect sizes differ so much: it explains three-quarters of the initial variation.[3]
This is easily seen in a grouped forest plot - the top group is the non-registered studies, the second group the registered ones:
This pattern also perfectly fits the 5 pieces of direct evidence: Bolton 2003 (ES = 1.13), Strongminds RCT (1.72), and Strongminds P2 (1.09) are, as far as I can tell, unregistered. Thurman 2017 (0.09) was registered. Bolton 2007 is also registered, and in fact has an effect size of ~0.5, not 1.79 as HLI reports.[4]
To be clear, I do not think HLI knew of this before I found it out just now. But results like this indicate: i) the appraisal of the literature in this analysis is gravely off the mark - study quality provides the best available explanation for why some trials report dramatically higher effects than others; ii) the result of this oversight is a dramatic over-estimation of the likely efficacy of Strongminds (as a ready explanation for the large effects reported in the most "relevant to Strongminds" studies is that these studies were not registered and thus prone to ~200%+ inflation of effect size); iii) this is a very surprising mistake for a diligent and impartial evaluator to make: one would expect careful assessment of study quality - and very sceptical evaluation where this appears to be lacking - to be foremost, especially given the subfield and prior reporting from Strongminds both heavily underline it. This pattern, alas, will prove repetitive.
I also think a finding like this should prompt an urgent withdrawal of both the analysis and the recommendation pending further assessment. In honesty, if this doesn't, I'm not sure what ever could.
2:
Indeed, excellent researchers overlook things, and although I think both the frequency and severity of HLI's mistakes and oversights are less than excellent, one could easily attribute this to things like "inexperience", "trying to do a lot in a hurry", "limited staff capacity", and so on.
Yet this cannot account for how starkly asymmetric the impact of these mistakes and oversights is. HLI's mistakes are consistently to Strongminds' benefit rather than its detriment, and whilst HLI rarely misses a consideration which could enhance the "multiple", it frequently misses causes for concern which undermine both the strength and the reliability of its recommendation. HLI's award from GiveWell deepens my concerns here, as it is consistent with a very selective scepticism: HLI can carefully scrutinize charity evaluations by others it wants to beat, but fails to mete out remotely comparable measure to its own work, which it intends for triumph.
I think this can also explain how HLI responds to criticism, which I have found by turns concerning and frustrating. HLI makes some splashy claim (cf. "mission accomplished", "confident recommendation", etc.). Someone else (eventually) takes a closer look, and finds the surprising splashy claim, rather than basically checking out "most reasonable ways you slice it", is highly non-robust, and only follows given HLI slicing it heavily in favour of their bottom line in terms of judgement or analysis - the latter of which often has errors which further favour said bottom line. HLI reliably responds, but the tenor of this response is less "scientific discourse" and more "lawyer for the defence": where it can, HLI will too often double down further on calls which I aver the typical reasonable spectator would deem at best dubious, and at worst tendentious; where it can't, HLI acknowledges the shortcoming but asserts (again, usually very dubiously) that it isn't that big a deal, so it will deprioritise addressing it versus producing yet more work with the shortcomings familiar from that which came before.
3:
HLI's meta-analysis in no way allays or rebuts the concerns SimonM raised re. Strongminds - indeed, appropriate analysis would enhance many of them. Nor is it the case that the meta-analytic work makes HLI's recommendation robust to shortcomings in the Strongminds-specific evidence - indeed, the cost-effectiveness calculator will robustly recommend Strongminds as superior (commonly, several times superior) to GiveDirectly almost no matter what efficacy results (meta-analytic or otherwise) are fed into it. On each:
a) Meta-analysis could help contextualize the problems SimonM identifies in the Strongminds-specific data. For example, a funnel plot which is less of a "funnel" and more of a ski-slope (i.e. massive small study effects/risk of publication bias), and a contour/p-curve suggestive of p-hacking, would suggest the field's literature needs to be handled with great care. Finding that "Strongminds relevant" studies and the direct evidence are marked outliers even relative to this pathological literature should raise alarm, given it complements the object-level concerns SimonM presented.
This is indeed true, and these features were present in the studies HLI collected, but HLI failed to recognise it. It may never have done so had I not gotten curious and done these analyses myself. Said analysis is (relative to the much more elaborate techniques used in HLI's meta-analysis) simple to conduct - my initial "work" was taking the spreadsheet and plugging it into a webtool out of idle curiosity.[5] Again, this is a significant mistake, adds a directional bias in favour of Strongminds, and is surprising for a diligent and impartial evaluator to make.
b) In general, incorporating meta-analytic results into what is essentially a weighted average alongside direct evidence does not clean either it or the direct evidence of object level shortcomings. If (as here) both are severely compromised, the result remains unreliable.
The particular approach HLI took also doesn't make the finding more robust, as the qualitative bottom line of the cost-effectiveness calculation is insensitive to the meta-analytic result. As-is, the calculator gives Strongminds as roughly 12x better than GiveDirectly.[6] If you set both meta-analytic effect sizes to zero, the calculator gives Strongminds as ~7x better than GiveDirectly. So the five pieces of direct evidence are (apparently) sufficient to conclude SM is an extremely effective charity. Obviously this is - and HLI has previously accepted as much - facially invalid output.
It is not the only example. It is extremely hard for any reduction of efficacy inputs to the model to give a result where Strongminds is worse than GiveDirectly. If we instead leave the meta-analytic results as they were but set all the effect sizes of the direct evidence to zero (in essence discounting them entirely - which I think is approximately what should have been done from the start), we get ~5x better than GiveDirectly. If we set all the effect sizes of both the meta-analysis and the direct evidence to 0.4 (i.e. the expected effects of registered studies noted before), we get ~6x better than GiveDirectly. If we set the meta-analytic results to 0.4 and all the direct evidence to zero, we get ~3x GiveDirectly. Only when one sets all the effect sizes to 0.1 - lower than all but ~three of the studies in the meta-analysis - does one approach equipoise.
This result should not surprise on reflection: the CEA's result is roughly proportional to the ~weighted average of input effect sizes, so an initial finding of "10x GiveDirectly" or similar would require ~a factor of 10 cut to this average to drag it down to equipoise. Yet this "feature" should be seen as a bug: in the same way there should be some non-zero value of the meta-analytic results which would reverse a "many times better than GiveDirectly" finding, there should be some non-tiny value of effect sizes for a psychotherapy intervention (or psychotherapy interventions in general) which results in it not being better than GiveDirectly at all.
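A toy illustration of the structural problem (entirely my own stylised numbers and weights, not HLI's actual CEA): if the bottom-line multiple is roughly proportional to a weighted average of the input effect sizes, then starting from ~10x GiveDirectly you need to slash that average by ~10x before the recommendation reverses.

```python
def multiple_vs_gd(meta_effect, direct_effect, w_meta=0.6, w_direct=0.4, scale=20):
    # `scale` stands in for all the cost and conversion terms; chosen so plausible
    # starting inputs give ~10x. Weights and scale are illustrative only.
    return scale * (w_meta * meta_effect + w_direct * direct_effect)

print(multiple_vs_gd(0.5, 0.5))    # ~10x GiveDirectly at baseline-ish inputs
print(multiple_vs_gd(0.0, 0.5))    # zero out the meta-analysis: still ~4x
print(multiple_vs_gd(0.4, 0.0))    # zero out the direct evidence: still ~5x
print(multiple_vs_gd(0.05, 0.05))  # only near-zero effects bring the multiple near 1x
```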
This does help explain the somewhat surprising coincidence that the first charity HLI fully assessed would be one it subsequently announced as among the most promising interventions in global health and wellbeing so far found: rather than a discovery from the data, this finding was largely preordained by how the CEA stacks the deck. To be redundant (and repetitive): i) the cost-effectiveness model HLI is making is unfit for purpose, given it can produce these absurd results; ii) this introduces a large bias in favour of Strongminds; iii) it is a very surprising mistake for a diligent and impartial evaluator to make - these problems are not hard to find.
They're even easier for HLI to find once they've been alerted to them. I did so, months ago, alongside other problems, and suggested the cost-effectiveness analysis and the Strongminds recommendation be withdrawn. Although it should have happened then, perhaps if I repeat myself it might happen now.
4:
Accusations of varying types of bad faith/motivated reasoning/intellectual dishonesty should indeed be made with care - besides the difficulty in determination, pragmatic considerations raise the bar still higher. Yet I think the evidence of HLI having less of a finger and more of a fist on the scale throughout its work overwhelms even the charitable presumptions a saint might make on its behalf. In footballing terms, I don't think HLI is a player cynically diving to win a penalty, but it is like the manager after the game insisting "their goal was offside, and my player didn't deserve a red, and... (etc.)" - highly inaccurate and highly biased. This is a problem when HLI claims to be an impartial referee, especially when it does things akin to awarding fouls every time a particular player gets tackled.
This is even more of a problem precisely because of the complex and interdisciplinary analysis HLI strives to do. No matter the additional analytic arcana, work like this will largely be Fermi estimates, with variables plugged in with little more to inform them than intuitive guesswork. The high degree of complexity provides a vast garden of forking paths. Although random errors would tend to cancel out, consistent directional bias in model choice, variable selection, and numerical estimates leads to greatly inflated "bottom lines".
Although the transparency in (e.g.) data is commendable, the complexity of the analysis also makes scrutiny harder. I expect very few have both the expertise and the perseverance to carefully vet HLI's analysis themselves; I also expect the vast majority of money HLI has moved has come from those largely taking its results on trust. This trust is ill-placed: HLI's work weathers scrutiny extremely poorly; my experience is very much "the more you see, the worse it looks". I doubt many donors following HLI's advice, if they took a peek behind the curtain, would be happy with what they discovered.
If HLI is falling foul of an entrenched status quo, it is not particular presumptions around interventions, nor philosophical abstracta around population ethics, but rather the expectation that work in this community (whether published elsewhere or not) should be even-handed, intellectually honest and trustworthy in all cases; rigorous and reliable commensurate to its expected consequence; and transparently and fairly communicated. Going against this grain underlies, I suspect, why I am not alone in my concerns, and why HLI has not had the warmest reception. The hope this all changes for the better is not entirely forlorn. But things would have to change a lot, and quickly - and the track record thus far does not spark joy.
- ^
Really surprised I missed this last time, to be honest. Especially because it is the only column title in the spreadsheet highlighted in red.
- ^
Given I will be making complaints about publication bias, file drawer effects, and garden of forking paths issues later in the show, one might wonder how much of this applies to my own criticism. How much time did I spend dredging through HLI's work looking for something juicy? Is my file drawer stuffed with analyses I hoped would show HLI in a bad light but actually showed it in a good one, which I therefore don't mention?
Depressingly, the answers are "not much" and "no" respectively. Regressing against publication registration was the second analysis I did on booting up the data again (regressing on active control was the first, mentioned in the text). My file drawer subsequent to this is full of checks and double-checks for alternative (and better for HLI) explanations for the startling result. Specifically, and in order:
- I used the no_FU (no follow-ups) data initially for convenience - the full data can include multiple results of the same study at different follow-up points, and these clustered findings are inappropriate to ignore in a simple random effects model. So I checked both by doing this anyway and then by using a multi-level model to appropriately handle this structure in the data. No change to the key finding.
- Worried that (somehow) I was messing up or misinterpreting the meta-regression, I (re)constructed a simple forest plot of all the studies, and confirmed the unregistered ones were indeed visibly off to the right. I then grouped a forest plot by the registration variable to ensure it closely agreed with the meta-regression (in main text). It does.
- I then checked the first 10 studies coded by the variable I think is trial registration, to confirm the registration status of those studies matched the codes. Although all fit, I thought the residual risk that I was misunderstanding the variable was unacceptably high for a result significant enough to warrant a retraction demand. So I checked and coded all 46 studies by "registered or not?" to make sure this agreed with my presumptive interpretation of the variable (in text). It does.
- Adding multiple variables to explain an effect geometrically expands researcher degrees of freedom, so any unprincipled ad hoc investigation that adds or removes them has very high false discovery rates (I suspect this is a major problem with HLI's own meta-regression work, but compared to everything else it merits only a passing mention here). But I wanted to check whether I could find ways (even if unprincipled and ad hoc) to attenuate a result as stark as "unregistered studies have 3x the effect of registered ones".
- I first tried to replicate HLI's meta-regression work (exponential transformations and all) to see if the registration effect would be attenuated by intervention variables. Unfortunately, I was unable to replicate HLI's regression results from the information provided (perhaps my fault). In any case, simpler versions I constructed did not give evidence for this.
- I also tried throwing in permutations of IPT-or-not (these studies tend to be unregistered - maybe this is the real cause of the effect?), active control-or-not (given it had a positive effect size, maybe it cancels out registration?) and study standard error (a proxy - albeit a controversial one - for study size/precision/quality, so if registration were confounded by it, this would slightly challenge interpretation). The worst result across all the variations I tried was to drop the effect size of registration by 20% (~ -1 to -0.8), typically via substitution with SE. Omitted variable bias and multiple comparisons mean any further interpretation would be treacherous, but insofar as it provides further support: adding in more proxies for study quality increases explanatory power, and tends to give even greater absolute and relative drops in effect size comparing the "highest" versus "lowest" quality studies.
That said, the effect size is so dramatic as to be essentially immune to file-drawer worries. Even if I had a hundred null results I forgot to mention, this finding would survive a Bonferroni correction.
- ^
Obviously "is the study registered or not?" is a crude indicator of overall quality. Typically, one would expect better measurement (perhaps by including further proxies for underlying study quality) to further increase the explanatory power of this factor. In other words, although these results look really bad, in reality it is likely to be even worse.
- ^
HLI's write-up on Bolton 2007 links to this paper (I did double-check to make sure there wasn't another Bolton et al. 2007 which could have been confused with this - no other match I could find). It has a sample size of 314, not 31 as HLI reports - I presume a data entry error, although it is less than reassuring that this erroneous figure is repeated and subsequently discussed in the text as part of the appraisal of the study: one reason given for weighting it so lightly is its "very small" sample size.
Speaking of erroneous figures, here's the table of results from this study:
I see no way to arrive at an effect size of d = 1.79 from these numbers. The right comparison should surely be the pre-post difference of GIP versus control in the intention-to-treat analysis. These numbers give a Cohen's d of ~0.5.
I don't think any other reasonable comparison gets much higher numbers, and definitely not >3x higher numbers - the differences between any of the groups are lower than the standard deviations, so estimates like Cohen's d should be bounded to <1.
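(For concreteness, a sketch of the check I mean; the function is just the standard Cohen's d for a difference-in-differences comparison, and the inputs below are illustrative placeholders, not the paper's actual table values.)

```python
# Standardised difference in pre-post change between arms; if the between-group
# difference is smaller than the pooled SD, d is necessarily below 1.
def cohens_d(delta_treatment, delta_control, sd_pooled):
    return (delta_treatment - delta_control) / sd_pooled

print(cohens_d(delta_treatment=1.0, delta_control=0.5, sd_pooled=1.0))  # 0.5 - nowhere near 1.79
```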
[Re. the file drawer, I guess this counts as a spot check (this is the only study where I carefully checked data extraction), but not a random one: I did indeed look at this study in particular because it didn't match the "only unregistered studies report crazy-high effects" pattern - an ES of 1.79 is ~2x any other registered study.]
- ^
Re. my worries of selective scepticism, HLI did apply these methods in their meta-analysis of cash transfers, where no statistical suggestion of publication bias or p-hacking was evident.
- ^
This does depend a bit on whether spillover effects are being accounted for. This seems to cut the multiple by ~20%, but doesn't change the qualitative problems with the CEA. Happy to calculate precisely if someone insists.
- ^
HLI - but if for whatever reason they're unable or unwilling to receive the donation at resolution, Strongminds.
The "resolution criteria" are also potentially ambiguous (my bad). I intend to resolve any ambiguity stringently against me, but you are welcome to be my adjudicator.
[To add: I'd guess a ~30-something% chance I end up paying out: d = 0.4 is at or below pooled effect estimates for psychotherapy generally. I am banking on significant discounts with increasing study size and quality (as well as the other things I mention above which I take as adverse indicators), but even if I price these right, I expect high variance.
I set the bar this low (versus, say, d = 0.6 - at the ~5th percentile of HLI's estimate) primarily to make a strong rod for my own back. Mordantly criticising an org whilst it is making a funding request in a financially precarious position should not be done lightly. Although I'd stand by my criticism of HLI even if the trial found Strongminds was even better than HLI predicted, I would regret being quite as strident if the results were anything less than dramatically discordant.
If so, retreating to something like "Meh, they got lucky"/"Sure I was (kinda) wrong, but you didn't deserve to be right" seems craven after over-cooking remarks potentially highly adverse to HLI's fundraising efforts. Fairer would be that I suffer some financial embarrassment, which helps compensate HLI for their injury from my excess.
Perhaps I could have (or should have) done something better. But in fairness to me, I think this is all supererogatory on my part: I do not think my comment is the only example of stark criticism on this forum, but it might be unique in its author levying an expected cost of over $1000 on themselves for making it.]
[Own views]
I think we can be pretty sure (cf.) the forthcoming Strongminds RCT (the one not conducted by Strongminds themselves - whose own trial allegedly found an effect size of d = 1.72 [!?]) will give dramatically worse results than HLI's evaluation would predict - i.e. somewhere between "null" and "2x cash transfers" rather than "several times better than cash transfers, and credibly better than GW top charities". [I'll donate 5k USD if the Ozler RCT reports an effect size greater than d = 0.4 - 2x smaller than HLI's estimate of ~0.8, and below the bottom 0.1% of their Monte Carlo runs.]
This will not, however, surprise those who have criticised the many grave shortcomings in HLI's evaluation - mistakes HLI should not have made in the first place, and definitely should not have maintained once made aware of them. See e.g. Snowden on spillovers, me on statistics (1, 2, 3, etc.), and GiveWell generally.
Among other things, this would confirm that a) SimonM produced a more accurate and trustworthy assessment of Strongminds in their spare time as a non-subject-matter expert than HLI managed as the centrepiece of its activity; b) the ~$250,000 HLI has moved to SM should be counted on the "negative" rather than the "positive" side of the ledger, as I expect this will be seen as a significant and preventable misallocation of charitable donations.
Regrettably, it is hard to square this with an unfortunate series of honest mistakes. A better explanation is that HLI's institutional agenda corrupts its ability to conduct fair-minded and even-handed assessment of an intervention where some results were much better for that agenda than others (cf.). I am sceptical this only applies to the SM evaluation, and I am pessimistic this will improve with further financial support.
I suspect the "edge cases" illustrate a large part of the general problem: there are a lot of grey areas here, where finding the right course requires a context-specific application of good judgement. E.g. what "counts" as being (too?) high status, or as seeking to start a "not serious" (enough?) relationship, etc., is often unclear in non-extreme cases - even to the individuals directly involved. I think I agree with most of the factors noted by the OP as being pro tanto cautions, but aliasing them into a bright-line classifier for what is or isn't contraindicated looks generally unsatisfactory.
This residual ambiguity makes life harder: if you can't provide a substitute for good judgement, guidance and recommendations (rather than rulings) may not give great prospects for those with poorer or compromised judgement to bootstrap their way to better decisions. The various fudge factors give ample opportunity for motivated reasoning ("I know generally this would be inappropriate, but I license myself to do it in this particular circumstance"), and sexual attraction is not an archetypal precipitant of wisdom and restraint. Third parties weighing in on perceived impropriety may be less self-serving, but potentially more error-prone, and definitely a lot more acrimonious - I doubt many welcome public or public-ish inquiries or criticism upon the intimate details of their personal lives ("Oh yeah? Maybe before you have a go at me you should explain {what you did/what one of your close friends did/rumours about what someone at your org did/etc.}, which was far worse, and your silence then makes you a hypocrite for calling me out now."/"I don't recall us signing up to 'the EA community', but we definitely didn't sign up for collective running commentary and ceaseless gossip about our sex lives. Kindly consider us 'EA-adjacent' or whatever, and mind your own business."/etc.)
FWIW I have - for quite a while, and in a few different respects - noted that intermingling personal and professional lives is often fraught, and encouraged caution and circumspection towards things which narrow the distance between them still further. EA-land can be a chimera of a journal club, a salutatorian model UN, a church youth group, and a swingers party - these aspects are not the most harmonious in concert. There is ample evidence - even more ample recently - that "encouraging caution" or similar doesn't cut it. I don't think the OP has the right answer, but I do not have great ideas myself: it is much easier to criticise than to do better.
The issue re. comparators is less about how good dropping outliers or using fixed effects are as remedies for publication bias (or how appropriate either would be as an analytic choice here all things considered), and more about the similarity of these models to the original analysis.
We are not, after all, adjusting or correcting the original meta-regression analysis directly, but rather indirectly inferring the likely impact of small study effects on the original analysis by reference to the impact they have in simpler models.
The original analysis, of course, did not exclude outliers, nor follow-ups, and used random effects, not fixed effects. So of Models 1-6, model 1 bears the closest similarity to the analysis being indirectly assessed, so seems the most appropriate baseline.
The point about outlier removal and fixed effects reducing the impact of small study effects is meant to illustrate that cycling comparators introduces a bias in assessment rather than just adding noise. Of models 2-6, we would expect 2, 4, 5 and 6 to be more resilient to small study effects than model 1, because they either remove outliers, use fixed effects, or both (model 3 should be ~a wash). The second figure provides some (further) evidence of this, as (e.g.) the random effects models (hatched) strongly tend to report greater effect sizes than the fixed effect ones, regardless of the additional statistical method.
So noting that the discount for a statistical small-study-effect correction is not so large versus comparators which are already less biased (due to analysis choices contrary to those made in the original analysis) misses the mark.
If the original analysis had (somehow) used fixed effects, these worries would (largely) not apply. Of course, if the original analysis had used fixed effects, the effect size would have been a lot smaller in the first place.
--
Perhaps also worth noting: with a discounted effect size, the overall impact of the intervention becomes very sensitive to linear versus exponential decay of the effect, given the definite integral of the linear method scales with the square of the intercept, whilst for exponential decay the integral is ~linear in the intercept. Although these values line up fairly well with the original intercept value of ~0.5, they diverge at lower values. If (e.g.) the intercept is 0.3, over a 5-year period the exponential method (with correction) returns ~1 SD-year (vs. 1.56 originally), whilst the linear method gives ~0.4 SD-years (vs. 1.59 originally).
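A small illustration of this scaling (the slope, decay rate and horizon below are my own choices to show the shape of the sensitivity, not HLI's actual decay parameters):

```python
# Total SD-years under the effect curve: with linear decay the area scales with the
# square of the initial effect (until it hits zero); with exponential decay it scales
# roughly linearly with the initial effect.
import math

def total_sd_years_linear(intercept, slope=0.1, horizon=5):
    t_zero = min(intercept / slope, horizon)     # effect floored at zero within the horizon
    return intercept * t_zero - 0.5 * slope * t_zero**2

def total_sd_years_exponential(intercept, decay=0.2, horizon=5):
    return intercept * (1 - math.exp(-decay * horizon)) / decay

for b in (0.5, 0.3):
    print(b, round(total_sd_years_linear(b), 2), round(total_sd_years_exponential(b), 2))
# Dropping the intercept from 0.5 to 0.3 cuts the exponential total proportionally
# (~0.6x), but cuts the linear total much more sharply (~0.36x, i.e. 0.6 squared).
```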
(And, for what it is worth, if you plug corrected SE or squared SE values into the original multilevel meta-regressions, PET/PEESE style, you do drop the intercept by around these amounts, either vs. follow-up alone or vs. the later models which add other covariates.)
I have now had a look at the analysis code. Once again, I find significant errors and - once again - correcting these errors is adverse to HLI's bottom line.
I noted before that the results originally reported do not make much sense (e.g. they generally report increases in effect size when "controlling" for small study effects, despite it being visually obvious that small studies tend to report larger effects on the funnel plot). When you use appropriate comparators (i.e. comparing everything to the original model as the baseline case), the cloud of statistics looks more reasonable: they generally point towards discounts, not enhancements, to effect size - the red lines are mostly less than 1, whilst the blue ones are all over the place.
However, some findings still look bizarre even after doing this. E.g. Model 13 (PET) and Model 19 (PEESE), which do nothing re. outliers, fixed effects, follow-ups, etc., still report higher effects than the original analysis. Both are closely related to the Egger's test noted before: why would it give a substantial discount, yet these a mild enhancement?
Happily, the code availability means I can have a look directly. All the basic data seems fine, as the various "basic" plots and meta-analyses give the right results. Of interest, the Egger test is still pointing the right way - and even suggests a lower intercept effect size than last time (0.13 versus 0.26):
PET gives highly discordant findings:
You not only get a higher intercept (0.59 versus 0.5 in the basic random effects model), but the coefficient for standard error is negative: i.e. the regression line it draws slopes the opposite way to Egger's, so it predicts smaller studies give smaller, not greater, effects than larger ones. What's going on?
The moderator (i.e. ~independent variable) is "corrected" SE. Unfortunately, this correction is incorrect (line 17 divides (n/2)^2 by itself, where the first bracket should use +, not *), so it "corrects" a lot of studies to SE = 1 exactly:
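To spell out my reading of the error (reconstructed with hypothetical variable names, not a quotation of HLI's actual script): the sample-size-based SE should be roughly sqrt((n1 + n2)/(n1 * n2)); with the first bracket multiplied rather than added, the ratio is identically 1.

```python
# My reconstruction of the bug (hypothetical names; the real script differs).
# The 'corrected' SE is meant to depend only on sample size, roughly
# sqrt((n1 + n2) / (n1 * n2)) with n1 = n2 = n/2, i.e. 2/sqrt(n).
import math

def corrected_se_buggy(n):
    # first bracket uses * instead of +, so the ratio is 1 and sqrt(1) = 1
    return math.sqrt(((n / 2) * (n / 2)) / ((n / 2) * (n / 2)))

def corrected_se_fixed(n):
    return math.sqrt(((n / 2) + (n / 2)) / ((n / 2) * (n / 2)))  # = 2/sqrt(n)

for n in (40, 100, 400):
    print(n, corrected_se_buggy(n), round(corrected_se_fixed(n), 3))
# buggy: 1.0 for every study; fixed: 0.316, 0.2, 0.1
```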
When you use this in a funnel plot, you get this:
Thus these aberrant results (which happened to be below the mean effect size) explain why the best fit line now points in the opposite direction. All the PET analyses are contaminated by this error, and (given PEESE squares these values) so are all the PEESE analyses. When debugged, PET shows an intercept lower than 0.5, and the coefficient for SE pointing in the right direction:
Here's the table of corrected estimates applied to models 13-24: as you can see, correction reduces the intercept in all models, often to substantial degrees (I only reported to 2 dp, but model 23 was marginally lower). Unlike the original analysis, here the regression slopes generally point in the right direction.
The same error appears to be in the CT analyses. I haven't done the same correction, but I would guess the bizarre readings (e.g. the outliers of 70x or 200x when comparing PT to CT using these models) would vanish once it is corrected.
So, when we correct the PET and PEESE results and use the appropriate comparator (Model 1 - I forgot to do this for models 2-6 last time), we now get this:
Now interpretation is much clearer. Rather than "all over the place, but most of the models basically keep the estimate the same", it is instead "across most reasonable ways to correct or reduce the impact of small study effects, you see substantial reductions in effect" (the average across the models is ~60% of the original - not a million miles away from my "50%?" eyeball guess). Moreover, the results permit better qualitative explanation.
On the first level, we can make our model fixed or random effects. Fixed effects are more resilient to publication bias (more later), and we indeed find that changing from random effects to fixed effects (i.e. Model 1 to Model 4) reduces the effect size by a factor of a bit more than 2.
On the second level, we can elect for different inclusion criteria: we could remove outliers, or exclude follow-ups. The former would be expected to partially reduce small study effects (as outliers will tend to be smaller studies reporting surprisingly high effects), whilst the latter does not have an obvious directional effect - although one should account for nested outcomes, this would be expected to distort the weights rather than introduce a bias in effect size. Neatly enough, we see outlier exclusion does reduce the effect size (Model 2 versus Model 1) but excluding follow-ups does not (Model 3 versus Model 1). Another neat example of things lining up: you would expect FE to give a greater correction than outlier removal (as FE strongly discounts smaller studies across the board, rather than removing a few of the most remarkable ones), and this is what we see (Model 2 vs. Model 4).
Finally, one can deploy a statistical technique to adjust for publication bias. There are a bunch of methods to do this: PET, PEESE, Rucker's limit, P-curve, and selection models. All of these besides the P-curve give a discount to the original effect size (Models 7, 13, 19, 25, 37 versus Model 31).
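For readers unfamiliar with the regression-based corrections, here is a minimal sketch of PET and PEESE on simulated data (not the actual dataset; statsmodels assumed available): both regress observed effects on SE (PET) or SE squared (PEESE) with inverse-variance weights, and read off the intercept as the estimated effect of a hypothetical perfectly precise study.

```python
# Minimal PET / PEESE sketch on simulated data (not the real dataset): the
# intercept of a weighted regression of effects on SE (PET) or SE^2 (PEESE)
# estimates the effect at SE = 0, i.e. free of small-study effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.4, 40)
d = 0.2 + 0.8 * se + rng.normal(0, se)   # baked-in small-study effect

def pet_peese(d, se):
    w = 1 / se**2
    pet = sm.WLS(d, sm.add_constant(se), weights=w).fit()
    peese = sm.WLS(d, sm.add_constant(se**2), weights=w).fit()
    # (the conditional PET-PEESE convention picks between the two based on
    #  PET's intercept test; not implemented here)
    return pet.params[0], peese.params[0]

pet_b0, peese_b0 = pet_peese(d, se)
print(f"PET intercept ~{pet_b0:.2f}, PEESE intercept ~{peese_b0:.2f}")
# Both intercepts should land below the unadjusted mean of d, because the
# simulated data gives larger effects to smaller (higher-SE) studies.
```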
We can also apply these choices in combination, but essentially all combinations point to a significant downgrade in effect size. Furthermore, the combinations allow us to better explain discrepant findings. Only models 3, 31, 33, 35, 36 give numerically higher effect sizes. As mentioned before, model 3 only excludes follow-ups, so would not be expected to be less vulnerable to small study effects. The others are all P curve analyses, and P curves are especially sensitive to heterogeneity: the two P curves which report discounts are those with outliers removed (Model 32, 35), supporting this interpretation.
With that said, onto Joel's points.
1. Discarding (better: investigating) bizarre results
I think if we discussed this beforehand and I said "Okay, you've made some good points, I'm going to run all the typical tests and publish their results", would you have advised me to not even try, and instead make ad hoc adjustments? If so, I'd be surprised, given that's the direction I've taken you to be arguing I should move away from.
You are correct that I would have wholly endorsed permuting all the reasonable adjustments and seeing what picture emerges. Indeed, I would be (and am) happy with "throwing everything in", even if some combinations can't really work, or don't really make much sense (e.g. outlier rejection + trim-and-fill).
But I would also have urged you to actually understand the results you are getting, and to query results which plainly do not make sense. That we're still seeing the pattern of "initial results reported don't make sense, and I have to repeat a lot of the analysis myself to understand why (and, along the way, find the real story is much more adverse than HLI presents)" is getting depressing.
The error itself for PET and PEESE is no big deal - "I pressed the wrong button once when coding and it messed up a lot of my downstream analysis" can happen to anyone. But these results plainly contradicted the naked eye (they give weird findings not only for PT but also for CT: by inspection the CT literature is basically a negative control for publication bias, yet PET/PEESE typically finds statistically significant discounts) and the closely related Egger's test (disagreeing with respect to sign); moreover, the negative coefficients for the models (meaning they slope in the opposite direction) are printed in the analysis code.
I also find myself inclined to less sympathy here because I didn't meticulously inspect every line of analysis code looking for trouble (my file drawer is empty). I knew the results being reported for these analyses could not be right, so I zeroed in on them expecting there was an error. I was right.
2. Comparators
When I do this, and again remove anything that doesn't produce a discount for psychotherapy, the average correction leads to a 6x cost-effectiveness ratio of PT to CT. This is a smaller shift than you seem to imply.
9.4x → ~6x is a drop of about one third; I guess we could argue about what increment is large or small. But more concerning is the direction of travel, taking the "CT (all)" comparator:
If we do not follow my initial reflex and discard the PT-favouring results, then we see adding the appropriate comparator and fixing the statistical error ~halves the original multiple. If we continue excluding the "surely not" +ve adjustments, we're still seeing a 20% drop with the comparator, and a further 10% increment with the right results for PT PET/PEESE.
How many more increments are there? There's at least one more - the CT PET/PEESE results are wrong, and they're giving bizarre results in the spreadsheet. I would expect diminishing returns to further checking (i.e. if I did scour the other bits of the analysis, I expect the cumulative error would be smaller or neutral), but the "limit value" of what this analysis would show if there were no errors doesn't look great so far.
Maybe it would roughly settle towards the average of ~60%, so 9.4 * 0.6 = 5.6. Of course, this would still be fine by the lights of HLI's assessment.
3. Cost effectiveness analysis
My complete guess is that if StrongMinds went below 7x GiveDirectly we'd qualitatively soften our recommendation of StrongMinds and maybe recommend bednets to more donors. If it was below 4x we'd probably also recommend GiveDirectly. If it was below 1x we'd drop StrongMinds. This would change if/when we find something much more (idk: 1.5-2x?) cost-effective and better evidenced than StrongMinds.
However, I suspect this is beating around the bush - as I think the point Gregory is alluding to is "look at how much their effects appear to wilt with the slightest scrutiny. Imagine what I'd find with just a few more hours."
If that's the case, I understand why - but that's not enough for me to reshuffle our research agenda. I need to think there's a big, clear issue now to ask the team to change our plans for the year. Again, I'll be doing a full re-analysis in a few months.
Thank you for the benchmarks. However, I mean to beat both the bush and the area behind it.
First things first: I have harped on about the CEA because it is bizarre to be sanguine about significant corrections on the grounds that "the CEA still gives a good multiple" when the CEA itself gives bizarre outputs (as noted before). With these benchmarks, it seems this analysis, on its own terms, is already approaching action relevance: unless you want to stand behind cycling comparators (which the spreadsheet only does for PT and not CT, as I noted last time), then this plus the correction gets you below 7x. Further, if you want to take the SM effects as relative to the meta-analytic results (rather than take their massively outlying values), you get towards 4x (e.g. drop the effect size of both meta-analyses by 40%, then put the SM effect sizes at the upper 95% CI). So there is already a clear motive to investigate urgently in terms of what you are already trying to do.
The other reason is the general point of "well, this important input wilts when you look at it closely - maybe this behaviour generalises". Sadly, we don't really need to "imagine" what I would find with a few more hours: I just did (and on work presumably prepared expecting I would scrutinise it), and I think the results speak for themselves.
The other parts of the CEA are non-linear in numerous ways, so it is plausible that drops of 50% in intercept value lead to greater than 50% drops in the MRA integrated effect sizes if correctly ramified across the analysis. More importantly, the thicket of the Guesstimate model gives a lot of forking paths - given it seems HLI has clearly had a finger on the scale, you may not need many more relatively gentle (i.e. 10%-50%) pushes upwards to get very inflated "bottom line multipliers".
4. Use a fixed effects model instead?
As Ryan notes, fixed effects are unconventional in general, but reasonable in particular when confronted with considerable small study effects. I think - even if one had seen publication bias prior to embarking on the analysis - sticking with random effects would have been reasonable.
Thanks for this, Joel. I look forward to reviewing the analysis more fully over the weekend, but I have three major concerns with what you have presented here.
1. A lot of these publication bias results look like nonsense to the naked eye.
Recall the two funnel plots for PT and CT (respectively):
I think we're all seeing the same important differences: the PT plot has markers of publication bias (asymmetry) and p-hacking (clustering at the p < 0.05 contour, also the p-curve) visible to the naked eye; the CT studies do not really show this at all. So heuristically, we should expect statistical correction for small study effects to result in:
In absolute terms, the effect size for PT should be adjusted downwards
In comparative terms, the effect size for PT should be adjusted downwards more than the CT effect size.
If a statistical correction does the opposite of these things, I think we should say its results are not just "surprising" but "unbelievable": it just cannot be true that, given the data being fed into the method, we should conclude this CT literature is more prone to small-study effects than this PT one; nor (contra the regression slope in the first plot) that the effect size for PT should be corrected upwards.
Yet many of the statistical corrections you have done tend to fail one or both of these basically-diagnostic tests of face validity. Across all the different corrections for PT, on average the result is a 30% increase in PT effect size (only trim and fill and selection methods give families of results where the PT effect size is reduced). Although (mostly) redundant, these are also the only methods which give a larger drop to PT than CT effect size.
As comments everywhere on this post have indicated, heterogeneity is tricky. If (generally) different methods all gave discounts, but they were relatively small (with the exception of one method like trim-and-fill which gave a much steeper one), I think the conclusions you drew above would be reasonable. However, for these results, the ones that don't make qualitative sense should be discarded, and the key upshot should be: "Although a lot of statistical corrections give bizarre results, the ones which do make sense also tend to show significant discounts to the PT effect size".
2. The comparisons made (and the order of operations to get to them) are misleading
What is interesting though, is although in % changes correction methods tend to give an increase to PT effect size, the effect sizes themselves tend to be lower: the average effect size across analyses is 0.36, ~30% lower than the pooled estimate of 0.5 in the funnel plot (in contrast, this is 0.09 - versus 0.1, for CT effect size).
This is the case because the % changes are being measured not against the single reference value of 0.5 in the original model, but against the equivalent model (in terms of random/fixed effects, outliers or not, etc.) without any statistical correction technique. For example: row 13 (Model 10) is a trim-and-fill correction for a fixed effect model using the full data. For PT, this effect size is 0.19. The % difference is calculated versus row 7 (Model 4), a fixed effect model without trim-and-fill (effect = 0.2), not the original random effects analysis (effect = 0.5). Thus the % of reference effect is 95%, not ~40%. Comparing effect sizes to row 4 (Model ID 1) instead generally gives more sensible findings re. PT publication bias correction, and also generally more adverse ones:
In terms of (e.g.) assessing the impact of trim-and-fill in particular, it makes sense to compare like with like. Yet presumably what we care about is ballparking the extent of publication bias in general - and for this, the comparisons made in the spreadsheet mislead. Fixed effect models (ditto outlier exclusion, but maybe not follow-ups) are already an (~improvised) means of correcting for small study effects, as they weigh small studies in the pooled estimate much less than random effects models do. So noting trim-and-fill only gives a 5% additional correction in this case buries the lede: you already halved the effect by moving from a random effects model to a fixed effect model, and the most plausible explanation for why is that fixed effect modelling limits distortion by small study effects.
This goes some way to explaining the odd findings for statistical correction above: similar to collider/collinearity issues in regression, you can get weird answers about the impact of statistical techniques when you are already partly "controlling for" small study effects. The easiest example of this is combining outlier removal with trim-and-fill - the outlier removal is basically doing the "trim" part already.
It also indicates an important point your summary misses. One of the key stories in this data is: "Generally speaking, when you start using techniques - alone or in combination - which reduce the impact of publication bias, you cut around 30% of the effect size on average for PT (versus 10%-ish for CT)".
3. Cost effectiveness calculation, again
"Cost effectiveness versus CT" is an unhelpful measure to use when presenting these results: we would first like to get a handle on the size of the small study effect in the overall literature, and then see what ramifications it has for the assessment and recommendation of StrongMinds in particular. Another issue is that these results don't really join up with the earlier cost effectiveness assessment, in ways which complicate interpretation. Two examples:
In the Guesstimate model, setting the meta-regressions to zero effect still results in ~7x multiples for StrongMinds versus cash transfers. This spreadsheet instead takes a flat percentage of the original 9.4x bottom line (so a "0% of previous effect" correction does get the multiple down to zero). Being able to get results which give <7x CT overall is much more sensible than what the HLI CEA does, but such results could not be produced if we corrected the effect sizes and plugged them back into the original CEA.
Besides the results being incongruous, the methods look incongruous too. The outliers being excluded in some analyses include StrongMinds-related papers later used in the overall CE calculation to get to the 9.4 figure. Ironically, exclusion would have been the right thing to do originally, as using these papers both to help derive the pooled estimate and then again as independent inputs into the CEA double-counts them. Alas, two wrongs do not make a right: excluding them in virtue of outlier effects seems to imply either i) these papers should be discounted generally (so shouldn't be given independent weight in the CEA); or ii) they are legit, but are such outliers that the meta-analysis is actually uninformative for assessing the effect of the particular interventions they investigate.
More important than this, though, is that the "percentage of what?" issue crops up again: the spreadsheet uses relative percentage change to get a relative discount vs. CT, but it uses the wrong comparator to calculate the percentages.
Let's look at row 13 again, where we are conducting a fixed effects analysis with trim-and-fill correction. Now we want to compare PT and CT: does PT get discounted more than CT? As mentioned before, for PT the original random effects model gives an effect size of 0.5, and with trim-and-fill + fixed effects the effect size is 0.19. For CT, the original effect size is 0.1, and with trim-and-fill + FE it is still 0.1. In relative terms, as PT retains only ~40% of the previous effect size (and CT 100%), this would amount to ~40% of the previous "multiple" (i.e. 3.6x).
Instead of comparing them to the original estimate (row 4), the spreadsheet calculates the percentages versus a fixed effect but not trim-and-fill analysis for PT (row 7). Although CT here is also 0.1, PT in this row has an effect size of 0.2, so the PT percentage is (0.19/0.2) 95% versus (0.1/0.1) 100%, and so the calculated multiple of CT is not 3.6 but 9.0.
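Spelling out that arithmetic (assuming, as the spreadsheet does, that the multiple scales with the ratio of the two relative changes):

```python
# The row 13 example, spelled out (effect sizes as quoted above).
pt_corrected, ct_corrected = 0.19, 0.10   # trim-and-fill + fixed effects
baseline_multiple = 9.4                   # headline PT vs. CT multiple

def implied_multiple(pt_ref, ct_ref):
    # relative change in PT vs. relative change in CT, applied to the multiple
    return baseline_multiple * (pt_corrected / pt_ref) / (ct_corrected / ct_ref)

print(implied_multiple(0.5, 0.1))   # vs. the original model (row 4): ~3.6x
print(implied_multiple(0.2, 0.1))   # vs. the fixed effect comparator (row 7): ~9x
```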
The spreadsheet is thus using the wrong comparison: we care about whether the multiple between PT and CT is sensitive to different analyses overall, not about its relative sensitivity to one variation (trim-and-fill) conditioned on another (fixed effect modelling) - especially when we're interested in small study effects and the conditioned-on choice likely already reduces them.
If one recalculates the bottom line multiples using the first model as the comparator, the results are a bit less weird, but also more adverse to PT. Note the effect is particularly reliable for trim-and-fill (ID 7-12) and selection measures (ID 37-42), which as already mentioned are the analysis methods which give qualitatively believable findings.
Of interest, the spreadsheet only makes this comparator error for PT: for CT, whether all or lumped (columns I and L), it makes all of its percentage comparisons versus the original model (ID 1). I hope (and mostly expect) this is a click-and-drag spreadsheet error (or perhaps an error of my understanding), rather than my unwittingly recovering an earlier version of this analysis.
Summing up
I may say more next week, but my impressions are:
In answer to the original post title, I think the evidence for Strongminds is generally weak, equivocal, likely compromised, and definitely difficult to interpret.
Many, perhaps most (maybe all?) of the elements used in HLI's recommendation of StrongMinds do not weather scrutiny well. E.g.:
Publication bias issues discussed in the comments here.
The index papers being noted outliers even amongst this facially highly unreliable literature.
The cost effectiveness Guesstimate model not giving sensible answers when you change its inputs.
I think HLI should withdraw their recommendation of StrongMinds, and mostly go "back to the drawing board" on their assessments and recommendations. The current recommendation is based on an assessment with serious shortcomings in many of its crucial elements. I regret to say I suspect that if I looked into other things I would find still more causes for concern.
The shortcomings in multiple elements also make criticism challenging. Although HLI thinks the publication bias is not a big enough effect to warrant withdrawing the recommendation, it is unclear what degree of publication bias would be big enough, or indeed in general what evidence would lead them to change their minds. Their own CEA is basically insensitive to the meta-analysis, giving "SM = 7x GD" even if the effect size were corrected all the way to zero. Above, Joel notes that even at "only" SM = 3-4x GD it would still generally be their top recommendation. So by this logic, the only decision-relevance this meta-analysis has is confirming the effect isn't massively negative. I doubt this is really true, but HLI should have a transparent understanding (and, ideally, transparent communication) of what their bottom line is actually responsive to.
One of the commoner criticisms of HLI is that it is more a motivated reasoner than an impartial evaluator. Although its transparency in data (and now code) is commendable, overall this episode supports such an assessment: the pattern which emerges is a collection of dubious-to-indefensible choices made in analysis, all of which point in the same direction (i.e. favouring the StrongMinds recommendation); surprising incuriosity about the ramifications or reasonableness of these analytic choices; and very little of this being apparent from the public materials, emerging instead in response to third party criticism or partial replication.
Although there are laudable improvements contained in Joel's summary above, unfortunately (per my earlier points) I take it as yet another example of this overall pattern. The reasonable reaction to "your publication bias corrections are (on average!) correcting the effect size upwards, and the obviously skewed funnel plot less than the not obviously skewed one" is not "well, isn't that surprising - I guess there's no clear sign of trouble with pub bias in our recommendation after all", but "this doesn't make any sense".
I recommend readers do not rely upon HLI's recommendations or reasoning without carefully scrutinising the underlying methods and data themselves.
An update:
This RCT (which I should have called the Baird RCT - my apologies for previously mistakenly giving her colleague Berk Ozler, rather than Sarah Baird, as first author) is now out.
I was not specific on which effect size would count, but all relevant[1] effect sizes reported by this study are much lower than d = 0.4 - around d = 0.1. I roughly[2] calculate the figures below.
In terms of "SD-years of depression averted" or similar, there are a few different ways you could slice it (e.g. which outcome you use, whether you linearly interpolate, whether you extend the effects out to 5 years, etc.). But when I play with the numbers I get results around 0.1-0.25 SD-years of depression averted per person (as a sense check, this lines up with an initial effect of ~0.1 which seems to last between 1 and 2 years).
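As a back-of-envelope version of that sense check (these trajectories are assumptions of mine, not anything taken from the paper):

```python
# Back-of-envelope SD-years under assumed trajectories (mine, not Baird et
# al.'s): an initial effect of ~0.1 SD that either persists for 1-2 years and
# then stops, or declines linearly to zero over that period.
initial = 0.1

for duration in (1, 2):
    sustained = initial * duration       # rectangle: effect held constant
    declining = initial * duration / 2   # triangle: linear decline to zero
    print(f"{duration}y: {declining:.2f}-{sustained:.2f} SD-years")
# Gives roughly 0.05-0.2 SD-years per person, the same ballpark as the
# 0.1-0.25 range quoted above.
```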
These are indeed "dramatically worse results than HLI's [2021] evaluation would predict". They are also substantially worse than HLI's (much lower) updated 2023 estimates for StrongMinds. The immediate effects of 0.07-0.16 are ~>5x lower than HLI's (2021) estimate of an immediate effect of 0.8; they are 2-4x lower than HLI's (2023) informed prior for StrongMinds having an immediate effect of 0.39. My calculations of the total effect over time from Baird et al. of 0.1-0.25 SD-years of depression averted are ~10x lower than HLI's 2021 estimate of 1.92 SD-years averted, and ~3x lower than their most recent estimate of ~0.6.
Baird et al. also comment on the cost-effectiveness of the intervention in their discussion (p18):
I'm not sure anything more really needs to be said at this point. But much more could be, and I fear I'll feel obliged to return to these topics before long regardless.
The report describes the outcomes on p.10:
Measurements were taken following treatment completion ("Rapid resurvey"), then at 12m and 24m thereafter (midline and endline respectively).
I use both the primary indicators and the discrete values of the underlying scores they are derived from. I haven't carefully looked at the other secondary outcomes nor the human capital variables, but besides being less relevant, I do not think these showed much greater effects.
I.e. I took the figures from Table 6 (comparing IPT-G vs. control) for these measures and plugged them into a webtool for Cohen's h or d as appropriate. This is rough and ready, although my calculations agree with the effect sizes either mentioned or described in the text. They also pass an "eye test" of comparing them to the cumulative distributions of the scores in Figure 3 - these distributions are very close to one another, consistent with a small-to-no effect (one surprising result of this study is that IPT-G + cash leads to worse outcomes than either control or IPT-G alone):
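For what it is worth, the two formulas are simple enough to reproduce without a webtool; a quick sketch with made-up inputs (not the Table 6 values):

```python
# Cohen's d (continuous scores) and Cohen's h (proportions), as used for the
# rough effect size calculations above. Inputs here are made up, not Table 6.
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def cohens_h(p1, p2):
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

print(round(cohens_d(10.0, 10.8, 5.0, 5.2, 600, 600), 2))  # ~ -0.16
print(round(cohens_h(0.25, 0.30), 2))                      # ~ -0.11
```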
One of the virtues of this study is that it includes a reproducibility package, so I'd be happy to produce a more rigorous calculation directly from the provided data if folks remain uncertain.