They did not have a placebo-receiving control group: for example, some kind of unstructured talking group, ideally an intervention known to be useless but sounding plausible. So we do not know which effects are due to regression to the mean, socially desirable answers, etc. This alone is enough to make their research rather useless. And proper control groups have been standard practice for quite a while.
No "real" evaluation of the results. They rely only on what their patients said, without checking whether it is correct (e.g., whether children actually went to school more often). Not even for a subgroup.
They had the impression that patients answered in a socially desirable way, and addressed that problem completely inadequately: they argued that socially desirable answers would occur only at the end of the treatment, but not near the end of the treatment. ?! So they simply took the near-end numbers at face value.
If their depression treatment is as good as they claim, then it is orders of magnitude better than ALL common treatments in high-income countries. And much cheaper. And faster. And with less specialized instructors… ?! And did they invent something new? Nope. They took an already existing treatment, and now it works SO much better? This seems implausible to me.
As far as I know, SoGive is reviewing StrongMinds' research. They should be able to back up (or reject) my comments here.
They did not have a placebo-receiving control group.
All the other points you mentioned seem very relevant, but I somewhat disagree with the importance of a placebo control group when it comes to estimating counterfactual impact. If the control group is assigned to standard of care, they will know they are receiving no treatment and thus not experience any placebo effect (though, contrary to what you write, regression to the mean is still expected in that group), while the treatment group experiences placebo + "real effect from treatment". This makes it difficult to do causal attribution (placebo vs. treatment), but on the other hand it is exactly what happens in real life when the intervention is rolled out!
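To illustrate why regression to the mean shows up even in a completely untreated group: a minimal simulation sketch, assuming (hypothetically) normally distributed "true" depression scores with independent measurement noise and a screening cutoff for enrollment, as a depression trial would use. All numbers here are made up for illustration.

```python
import random

random.seed(0)

# Each person has a stable "true" score plus day-to-day measurement noise.
n = 100_000
true_scores = [random.gauss(50, 10) for _ in range(n)]
baseline = [t + random.gauss(0, 10) for t in true_scores]

# Enroll only people who screen above a cutoff at baseline.
enrolled = [i for i in range(n) if baseline[i] > 65]

# Re-measure later with fresh noise; nobody was treated.
followup = [true_scores[i] + random.gauss(0, 10) for i in enrolled]

mean_baseline = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_followup = sum(followup) / len(enrolled)
print(f"baseline mean:  {mean_baseline:.1f}")
print(f"follow-up mean: {mean_followup:.1f}")  # noticeably lower, with zero treatment
```

Because enrollment selects on a noisy measurement, the enrolled group's baseline scores are partly inflated by noise, and the follow-up mean falls back toward the population mean with no treatment at all. A standard-of-care control arm captures exactly this effect, which is why it is still needed even without a placebo arm.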
If there is no group psychotherapy, the would-be patients receive standard of care, so they will not experience the placebo effect either. Thus a non-placebo design estimates precisely what we are considering doing in real life: giving an intervention to people who will know they are being treated and who would otherwise have received standard of care (in the context of Uganda, this presumably means receiving nothing?).
Of course, there are issues with blinding the evaluators; whether StrongMinds has done so is unclear to me. All of your other points seem fairly strong, though.
Thanks for commenting. I have to agree with you and disagree somewhat with my earlier comment (#placebo). Actually, placebo effects are fine, and if a placebo helps people: great!
And yes, getting a specific treatment effect plus the placebo effect is better (and more like real life) than getting no treatment at all.
"Still: I thought it would be good to make this comment right now, so people see my opinion."
I think it would have been better to wait until you had time to give proper arguments for your views. I agree with Stephen that the above comment wasn’t helpful or constructive.
I think the follow-up is much more helpful, but I found the original helpful too. I think it may be possible to say the same content less rudely, but "I think StrongMinds' research is poor" is still a useful comment to me.
Please don't get me wrong. I do not like the research from StrongMinds for the above-mentioned reasons (I am sure nobody got me wrong on this). And for some other reasons. But that does not mean that their therapy work is bad or inefficient. Even if they overestimate their effects by a factor of 4 (it might be 20, it might be 2; I just made those numbers up), it would still be very valuable work.
I disagree. I should also say that the follow-up looked very different when I commented on it; it was extensively edited after I had commented.
I think there is some placebo effect involved: people may think something is helpful when it is not.
Just recently I read the article https://www.health.harvard.edu/mental-health/the-power-of-the-placebo-effect about it. A bit shocked, to be honest.
P.S. I do not want to offend anybody.