Thanks very much for this thoughtful comment and for taking the time to read and provide feedback on the report. Sorry about the delay in replying—I was ill for most of last week.
1. Yes, you’re absolutely right. The current bounds are very wide and they represent extreme, unlikely scenarios. We’re keen to develop probabilistic models in future cost-effectiveness analyses to produce e.g. 90% confidence intervals and carry out sensitivity analyses, probably using Guesstimate or R. We didn’t have time to do so for this project but this is high on our list of methodological improvements.
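For what it’s worth, here’s a minimal sketch of the kind of probabilistic model we have in mind, in Python rather than Guesstimate/R; the parameter ranges are hypothetical, purely for illustration:

```python
import random

def total_benefit(x, r):
    # Half a year at full effect, then each subsequent year retains a
    # fraction r of the previous year's effect:
    # x/2 + x*(r + r^2 + ...) = x * (1 + r) / (2 * (1 - r))
    return x * (1 + r) / (2 * (1 - r))

random.seed(0)
samples = sorted(
    total_benefit(random.uniform(0.2, 0.6),   # hypothetical effect-size range
                  random.uniform(0.3, 0.7))   # hypothetical retention-rate range
    for _ in range(100_000)
)
lo, hi = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"90% interval for total benefit: [{lo:.2f}, {hi:.2f}]")
```

The same structure extends naturally to sensitivity analysis: vary one input’s range at a time and see how much the interval moves.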
2. Estimating the retention rates is challenging so it’s helpful for us to know that you think our values are too high. We based this primarily on our retention rate for StrongMinds, but adjusted downwards. It’s possible we anchored on this too much. However, it’s not clear to me that our values are too high. In particular, if our best-guess retention rate for AfH is too high, then this is probably also true for StrongMinds. Since we’re using StrongMinds as a benchmark, this might not change our conclusions very much.
The total benefits are calculated in a somewhat confusing way, and I appreciate that you haven’t had the chance to look at the CEA in detail. If x is the effect directly post-treatment and r is the annual retention rate, we calculated the total benefits as
$$\frac{1}{2}x + \sum_{i=1}^{\infty} r^i x = \frac{x(1+r)}{2(1-r)}$$
That is, we assume half a year of full effect, and then discount each year that follows by r each time. We calculated it in this way because for StrongMinds, we had 6 month follow-up data. However, it’s not clear that this approach is best in this case. It might have been better to:
- Assume 0.15 years at full effect
  - Since the study has only an 8-week follow-up, as you mention
- Assume somewhere in between 0.15 and 0.5 years at full effect
  - Since the effects still looked very good at the 8-week follow-up (albeit with no control group), and evidence from interventions such as StrongMinds suggesting longer-lasting effects still seems somewhat relevant
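One way to compare these options, keeping the rest of the calculation fixed, is to generalise the formula to t years at full effect; a quick sketch, with x = 1 and r = 0.5 as arbitrary illustrative values:

```python
def total_benefit(x, r, t):
    # t years at full effect, then each subsequent year retains a
    # fraction r of the previous year's effect: t*x + x*r/(1 - r).
    # With t = 0.5 this matches the x*(1 + r) / (2*(1 - r)) we used.
    return t * x + x * r / (1 - r)

x, r = 1.0, 0.5  # hypothetical values, purely for illustration
for t in (0.5, 0.325, 0.15):
    print(f"{t} years at full effect -> total benefit {total_benefit(x, r, t):.3f}")
```

Since the discounted tail x·r/(1−r) is unchanged, moving from 0.5 to 0.15 years at full effect reduces the total by a fixed 0.35x.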
Finally, I think there are good reasons to prefer AfH over CBT in high-income countries, even if our CEA suggests they are similarly cost-effective in terms of depression. (Though these reasons might not be strong enough to convince you that AfH and e.g. StrongMinds are similarly cost-effective.)
- AfH aims to improve well-being broadly, not just by treating mental health problems.
  - Although much—perhaps most—of the benefit of AfH’s courses comes from reductions in depression, some of the benefits to e.g. happiness, life satisfaction and pro-social behaviour aren’t captured by measuring depression.
- Our CEA is very conservative in some respects.
  - The effect sizes we used (after our Bayesian analysis) are only about 30% as large as those reported in the study.
  - If CBT effects aren’t held to a similar level of scrutiny, then we can’t compare cost-effectiveness fairly.
- We think that the wider benefits of AfH’s scale-up could be very large.
  - We focused only on the scale-up of the Exploring What Matters courses because this is the easiest to measure.
  - The happiness movement that AfH is leading and growing could be very beneficial, e.g. the materials shared widely on AfH’s website could bring (relatively small) benefits to a large number of people.
That said, I think it’s worth reconsidering our retention rates when we review this funding opportunity. Thanks for your input.
3. This is correct. We did not account for the opportunity cost of facilitators’ or participants’ time. As always, there are many factors and given time constraints, we couldn’t account for all of them. We thought that these costs would be small compared to the benefits of the course so we didn’t prioritise their inclusion. I don’t think we explicitly mentioned the opportunity cost of time in the report though, so thanks for pointing this out.
Thanks Aidan! Hope you’re feeling better now.
Most of your comments sound about right.
On retention rates: Your general methods seem to make sense, since one would expect gradual tapering off of benefits, but your inputs seem even more optimistic than I originally thought.
I’m not sure StrongMinds is a great benchmark for retention rates, partly because of the stark differences in context (rural Uganda vs UK cities), and partly because IIRC there were a number of issues with SM’s study, e.g. non-randomised allocation and evidence of social desirability bias in outcome measurement, plus of course general concerns related to the fact that it was a non-peer-reviewed self-evaluation. Perhaps retention rates of effects from UK psychotherapy courses of similar duration/intensity would be more relevant? But I haven’t looked at the SM study for about a year, and I haven’t looked into other potential benchmarks, so perhaps yours was a sensible choice.
Also not a great benchmark in a UK context, but Haushofer and colleagues recently did a study* of Problem Management+ in Uganda that found no benefits at the end of a year (paper forthcoming), even though it showed effectiveness at the 3-month mark in a previous study in Kenya.
*Haushofer, J., Mudida, R., & Shapiro, J. (2019). The Comparative Impact of Cash Transfers and Psychotherapy on Psychological and Economic Well-being. Working Paper. Available upon request.
Yes, feeling much better now fortunately! Thanks for these thoughts and studies, Derek.
Given our time constraints, we did make some judgements relatively quickly, but in a way that seemed reasonable for the purposes of deciding whether to recommend AfH. So this can certainly be improved, and I expect your suggestions to be helpful in doing so. This conversation has also made me think it would be good to explore six-monthly/quarterly/monthly retention rates rather than annual ones—thanks for that. :)
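If effects decay geometrically, converting between granularities is straightforward: an annual retention rate r corresponds to a monthly rate of r^(1/12). A tiny sketch (the 50% annual rate is hypothetical):

```python
annual_r = 0.5                      # hypothetical annual retention rate
monthly_r = annual_r ** (1 / 12)    # equivalent monthly retention rate
quarterly_r = annual_r ** (1 / 4)   # equivalent quarterly retention rate
# Compounding the finer-grained rates recovers the annual rate:
print(monthly_r, quarterly_r, monthly_r ** 12, quarterly_r ** 4)
```

The finer granularity matters mainly when follow-up data arrives at odd intervals (like the 8-week follow-up here), since it avoids forcing everything into whole-year steps.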
Our retention rates for StrongMinds were also partly based on this study, but I wasn’t involved in that analysis, so I’m not sure of the details of the retention rates there.