The best meta-analysis of deterioration (i.e. negative-effect) rates in guided self-help (k = 18, N = 2,079) found that deterioration was lower in the intervention condition than in controls, although education moderated this: participants with low education saw no decrease in deterioration rates (but no increase either)[1].
So, on balance, I think it's very unlikely that any of the dropped-out participants were worse off for having tried the programme, especially since the counterfactual in low-income countries is almost always no treatment. Given that your interest is top-line cost-effectiveness, counting only completers in the effect-size estimate likely underestimates cost-effectiveness if anything, since churned participants would be estimated at 0.
Yes, this makes sense if I understand you correctly. If we set the effect size to 0 for all the dropouts while having reasonable grounds to think it might be slightly positive, this would lead us to underestimate top-line cost-effectiveness.
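To make this concrete, here is a minimal sketch of that accounting. All numbers are hypothetical (completion rate, effect sizes, and cost are made up for illustration):

```python
# Hypothetical illustration: zeroing out dropouts is conservative
# whenever their true effect is non-negative. All numbers are made up.

n_total = 1000          # participants who started the programme
completion_rate = 0.6   # fraction who completed (hypothetical)
effect_completers = 0.5 # effect size among completers (hypothetical)
cost_total = 50_000     # total programme cost in USD (hypothetical)

n_completers = int(n_total * completion_rate)
n_dropouts = n_total - n_completers

for effect_dropouts in (0.0, 0.1):  # assumed effect among dropouts
    # Intention-to-treat-style average over everyone who started
    avg_effect = (n_completers * effect_completers
                  + n_dropouts * effect_dropouts) / n_total
    cost_per_effect = cost_total / (avg_effect * n_total)
    print(f"dropout effect {effect_dropouts:.1f}: "
          f"avg effect {avg_effect:.2f}, "
          f"cost per unit of effect ${cost_per_effect:,.0f}")
```

Setting the dropout effect to 0 gives the higher cost per unit of effect, so the top-line figure is conservative as long as the true dropout effect is non-negative.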
I'm mostly reacting to the choice to present results for the completer subgroup, which might be conflated with results for all participants in the programme. Even the OP themselves seem to mix this up in the text.
Context, quoting the post: "To offer a few points of comparison, two studies of therapy-driven programs found that 46% and 57.5% of participants experienced reductions of 50% or more, compared to our result of 72%. For the original version of Step-by-Step, it was 37.1%. There was an average PHQ-9 reduction of 6 points compared to our result of 10 points."
As far as I can tell, they are talking about completers in this paragraph, not all participants. @RachelAbbott could you clarify this?
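To show why the distinction matters: if the 72% is a completer-only figure, the rate across everyone who enrolled could be considerably lower. A toy calculation, assuming a hypothetical completion rate and conservatively counting non-completers as non-responders:

```python
# Toy illustration of completer vs all-participant response rates.
# The 72% is the figure quoted above; the completion rate is hypothetical.

response_rate_completers = 0.72  # responders among completers (quoted)
completion_rate = 0.5            # hypothetical fraction who complete

# Counting every non-completer as a non-responder (a conservative floor):
response_rate_all = response_rate_completers * completion_rate
print(f"{response_rate_all:.0%} of all participants respond")  # -> 36%
```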
Reading the introduction again, I think it's pretty balanced now (possibly because it was updated in response to the concerns). Again, thank you for being so receptive to feedback @RachelAbbott!
[1] Ebert, D. D., et al. (2016). Does Internet-based guided-self-help for depression cause harm? An individual participant data meta-analysis on deterioration rates and its moderators in randomized controlled trials. Psychological Medicine, vol. 46, pp. 2679–2693.