No worries! I should say that I’ve spent less than 3 hours looking through SM/HLI documents around this, so I’m not highly confident about most of these points. I have a lot of respect for anyone who is trying their best to make as much impact in the world as they can—thank you for all your work thus far, and thank you for engaging with all the questions!
I should also clarify that the digging was largely prompted by HLI’s strong endorsements:
We’re now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money.
And while this recommendation was the result of over 3 years and 10,000 hours of work, I generally try not to take such strong claims at face value.
But I mention this because I want to emphasise that even if, after this conversation, I decide that I'm not personally convinced that StrongMinds is the single most cost-effective way to help other people, that isn't a reflection of the effort you have put, and continue to put, into SM! It doesn't necessarily mean SM isn't a great charity. It doesn't mean StrongMinds can't be the best charity in the future, or the best under different philosophical assumptions. It's just really hard to be the most cost-effective charity.
And I’m mindful that this conversation has been possible precisely because of your shared commitment to transparency and willingness to engage, which I have a lot of respect for. We are both on the same team of wanting to do as much good as we can, and I hope you interpret this barrage (apologies!) of questions in that light.
Lastly, I'm also happy to continue via email and update folks later with a summary, if you think that would be helpful for questions you may not be able to answer immediately.
With that in mind, some followups:
1) Just re-flagging the question RE: bias, though as you pointed out, this may be better suited for a researcher on the team / someone who was more in-the-weeds with the research:
a) What's the justification for the 94% figure not being considered invalid when the 99% figure was? Was this based on different methodology between the two pilots, or something else? As far as I can tell, the difference in methodology for recording PHQ-9 scores was that in phase 1 these were scored weekly from weeks 5-16, with a post-assessment at week 17, while in phase 2 they were done biweekly from weeks 2-12, with a post-assessment at week 14. It's not clear why this difference would lead to bias in one but not the other.
b) Also curious about the separate analysis that came to 92%, which states: “Since this impact figure was collected at a regular IPT group meeting, as had been done bi-weekly throughout the 12-week intervention, it is unlikely that any bias influenced the figure.” I don't quite understand how collection at a regular IPT group meeting makes bias unlikely—could you clarify this? Presumably participants knew in advance how many weeks the intervention would be?
2) I took the 10% from StrongMinds’ 2017 report (pg 2), not an HLI analysis (though if HLI independently came to that conclusion or have reviewed it and agreed I’d be interested too):
While both the Phase 1 and 2 patients had 95% depression-free rates at the completion of formal sessions, our Impact Evaluation reports and subsequent experience has helped us to understand that those rates were somewhat inflated by social desirability bias, roughly by a factor of approximately ten percentage points. This was due to the fact that their Mental Health Facilitator administered the PHQ-9 at the conclusion of therapy. StrongMinds now uses external data collectors to conduct the post-treatment evaluations. Thus, for effective purposes, StrongMinds believes the actual depression-free rates for Phase 1 and 2 to be more in the range of 85%.
[emphasis added]
I couldn't find a justification for this figure in that report or any of the preceding reports. (Admittedly, I just quickly searched for various combinations of 10/85/95% and didn't read the entire report.)
3) Makes sense—looking forward to the results of the RCT! I assume it will be independent and pre-registered?
4) Thanks! Just looked in a bit more detail—in Appendix A (pg 30) it says:
“Use of lay community workers as the IPT-G facilitators, or Mental Health Facilitators (MHFs)
MHFs require at least a high-school diploma, and are employed and salaried by StrongMinds. They receive two weeks of training by a certified IPT-G expert and receive on-going supervision and guidance by a mental health professional. Since they are community members themselves, they are well-received by the depressed patients. The IPT-G training curriculum includes modules on mental illness in general, depression, interpersonal psychotherapy, management of suicidality, and the goals and objectives for each weekly session of the 16 total sessions that are held. The training extensively uses role-playing to recreate group meeting settings.”
In Appendix E (pg 33) it says:
“StrongMinds completed training its initial cadre of four MHFs in March, 2014. The training lasted 10 days and was conducted by two Ugandans certified in IPT-G by Columbia University. In addition, the training was monitored long-distance via Skype by our senior technical advisor who is an international expert on IPT-G from Columbia University.”
“In Phase One of the 2014-2015 pilot in Uganda, our 4 MHFs were supervised by the two Ugandan IPT-G experts noted above. In Phase Two, StrongMinds hired a full-time Mental Health Supervisor (MHS) who both conducted IPT groups and supervised the 4 MHFs. This MHS was actually a member of the 2002 RCT in Uganda and has over ten years of IPT-G experience”
a) Just confirming that “lay counsellor” is the same as “IPT-G facilitator” and “mental health facilitator”? If not, what are the differences? How much training do they get, and what's their role in the intervention?
b) How does StrongMinds select for empathy? E.g. questionnaire, interview, etc.
c) What does it mean to be a “certified IPT-G expert”? It sounds like there are various levels of certification. From a quick Google, the best match I found for the description of the training was this, which suggests a “certified IPT-G expert” is someone who has completed this specific 6-day course, i.e. with a level A certification? (Happy to be corrected—I just couldn't find any details of this.) If true, am I understanding correctly that the StrongMinds lay counsellors / mental health facilitators take a 10-day training course delivered by someone who has taken a 6-day course? Do the certified IPT-G experts play a role in SM other than the initial training of lay counsellors?
d) What does it mean to be a “mental health supervisor”? What’s their role in SM?
e) [Minor] I just realised the appendix said MHFs require at least a high school diploma, contra what you said earlier: “in fact, they don’t even need to have a high school degree”. I assume this was just a more recent change in policy. Not a big deal, just clarifying.
5) I had another question, which came up as I was going through the tax returns Joel linked in order to work out the cost per client reached. In the tax return it says:
STRONGMINDS IS A SOCIAL ENTERPRISE THAT PROVIDES LIFECHANGING DEPRESSION TREATMENT TO LOW-INCOME INDIVIDUALS IN SUB-SAHARAN AFRICA.
This line didn't appear in the tax returns before 2019, but has appeared every year from 2019 onwards.
a) Was there a change in model in terms of revenue streams or business model for StrongMinds—if so, what changed?
b) You’ll probably cover this in some of the other questions, but how do the partnerships work? Do partners pay you for the year of training? What does this training look like?
c) Are there other revenue streams that StrongMinds has outside of donors / grants? (To be clear—I don't have an issue with StrongMinds being a social enterprise, just wanting to clarify so I have the facts right!)
(commenting in personal capacity etc)
Thank you! I appreciate your curiosity, and I'm not put off by the questions at all; it's just that many of them are not in my area of expertise, and this happens to be a pretty busy time of year at StrongMinds, so it may take some time to fully gather what you're asking for. We aren't a large research institute by any means, so our clinical team is relatively small. Additionally, some of the work you are referencing is nearly a decade old, and we have since shifted some of the ways we operate based on our learnings. That said, I will dig back in when I can to help answer your additional questions via email or direct message.
To answer the remaining four from your original note to close the loop:
5) Since HLI generated the $170 figure, they have the best information on that particular breakdown, but I am collecting the most recent info on our CPP for another question, and I will share that with you later this week when I have the updated numbers.
6) As mentioned above, we are currently in the process of assessing the right questions and framework for an RCT looking at the results and impact of our therapy model. We are hoping to be able to launch the RCT late in 2023.
7) We switched our model to teletherapy to continue to treat clients during the pandemic lockdowns. It was not ideal, but we wanted to continue reaching as many women as possible despite the challenges. Though it proved tricky in some cases to reach our target demographic, we did find that some women preferred the flexibility teletherapy offered them. For the most part, we have switched back to our original model, but we still see some groups via teletherapy in Uganda. All research shared publicly from our initial year using teletherapy can be found here.
8) We track individuals who attend most of their therapy sessions, as we saw that the effects of therapy were still statistically significant for them and that attending additional sessions did not produce incremental impact. Due to the individual roles and responsibilities of the women who attend, it's sometimes challenging for them to make it to all 12 sessions.
Thanks again for the questions!