Why SoGive is publishing an independent evaluation of StrongMinds
Executive summary
We believe the EA community’s confidence in the existing research on mental health charities hasn’t been high enough to support significant funding decisions.
Further research from another EA research agency, such as SoGive, may help build that confidence and lead to better-informed funding decisions.
To increase scrutiny of this topic, SoGive has started conducting research on mental health interventions, and we plan to publish a series of articles starting in the next week and continuing over the next few months.
The series will include reviews of the academic and EA literature on mental health and on moral weights.
We will conduct in-depth reviews and quality assessments of the Happier Lives Institute’s work on StrongMinds, of the RCTs and academic sources from which StrongMinds draws its evidence, and of StrongMinds’ internally reported data.
We will provide a view on how impactful we judge StrongMinds to be.
What we will publish
From March to July 2023, SoGive plans to publish a series of analyses pertaining to mental health. The content covered will include:
Methodological notes on using the existing academic literature, which quantifies depression interventions in terms of standardised mean differences, numbers needed to treat, remission rates, and relapse rates, as well as on the “standard-deviation-years of depression averted” framework used by the Happier Lives Institute.
Broad, shallow reviews of the academic and EA literature on the effect of psychotherapy, and on how that effect varies with factors such as the number of sessions, demographics, and type of therapy.
We will focus specifically on how the effect decays after therapy, and publish a separate report on this.
Deep, narrow reviews of the RCTs and meta-analyses that are most relevant to StrongMinds’ context.
Moral weights frameworks, explained in a manner which will allow a user to map dry numbers such as effect sizes to more visceral subjective feelings, so as to better apply their moral intuition to funding decisions.
Cost-effectiveness analyses which combine academic data and direct evidence from StrongMinds to arrive at our best estimate of what a donation to StrongMinds achieves. (A toy sketch of this kind of calculation appears just after this list.)
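To illustrate the kind of calculation involved, here is a minimal sketch of the “standard-deviation-years averted” arithmetic. Every number in it is a placeholder chosen for illustration; none of these figures are estimates from SoGive, HLI, or StrongMinds.

```python
# Toy illustration of the "SD-years of depression averted" framework.
# All numbers are placeholders for illustration only; they are not
# SoGive's, HLI's, or StrongMinds' actual estimates.

initial_effect_sd = 0.8       # effect size just after therapy (standardised mean difference)
annual_decay_rate = 0.7       # assumed exponential decay rate of the effect, per year
cost_per_person_usd = 100.0   # assumed cost of treating one person

# Integrating d * exp(-lambda * t) from t = 0 to infinity gives d / lambda.
sd_years_averted = initial_effect_sd / annual_decay_rate

cost_per_sd_year = cost_per_person_usd / sd_years_averted

print(f"SD-years of depression averted per person treated: {sd_years_averted:.2f}")
print(f"Cost per SD-year averted: ${cost_per_sd_year:.0f}")
```

A real analysis layers many adjustments on top of this; the point here is only the shape of the calculation.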
We hope these will empower others to check our work, do their own analyses of the topic, and take the work further.
How will this enable higher impact donations?
In the EA Survey conducted by Rethink Priorities, 60% of EA community members surveyed were in favour of giving “significant resources” to mental health interventions, with 24% of those believing it should be a “top priority” or “near top priority” and 4% selecting it as their “top cause”. Although other cause areas performed more favourably in the survey, this still appears to be a moderately high level of interest in mental health.
Some EA energy has now gone into this area: for example, Charity Entrepreneurship incubated Canopie and the Mental Health Funder’s Circle, and played a role in incubating the Happier Lives Institute. They additionally launched Kaya Guides and Vida Plena last year. We also had a talk from Friendship Bench at last year’s EA Global.
Our analysis will focus on StrongMinds. We chose StrongMinds because we know the organisation well. SoGive’s founder first spoke with StrongMinds in 2015 (with his own donations in mind), having seen a press article about them and considered them a potentially high-impact charity. Since then, several other EA organisations have engaged with StrongMinds. Evaluations of StrongMinds specifically have now been published by both Founders Pledge and the Happier Lives Institute, and StrongMinds has received a recommendation from Giving What We Can. There are also GiveWell conversation notes with StrongMinds (among 826 other interviews at the time of writing, so this is a very weak signal), as well as some donations made in a strictly personal capacity (with appropriate caveats that these are riskier donations) by staff members at the Centre for Effective Altruism and GiveWell.
Despite this high level of engagement, we’re not sure how much EA money has flowed through to direct work on mental health related causes. We wouldn’t be too surprised if it were less than $3m/year, which is a small fraction in the context of the $200m+ of institutional grants made by GiveWell and Open Philanthropy to global health causes. This resource allocation probably doesn’t reflect the degree of interest that the EA community seems to have in the area; we think it instead reflects institutional funders’ (entirely appropriate) sceptical priors and level of caution, and the maintenance of a high bar for evidence in the EA global health space.
We think that the effective altruism community is hesitant to enter mental health in a bigger way because we do not have the widespread community knowledge that would let us evaluate a mental health intervention with a level of methodological rigour similar to other global health interventions. Consider, for example, that GiveWell’s evaluation of the Against Malaria Foundation includes not only academic data but also a breakdown of cost-effectiveness by country and an understanding of other philanthropic actors in the space, while AMF themselves document every single net distribution. It’s common to see estimates to the effect that some intervention is >100x or even >1000x better than an established global health intervention, but the difference in the level of evidence at each end of the comparison makes it difficult to apply rigour equally to both. Given that cost-effectiveness estimates tend to regress towards more modest figures on deeper analysis, it is often justified to favour interventions with more evidence, even if they have a lower headline estimate. In short, we think that what’s missing from this space is confidence in the analysis.
Our aim is to provide an analysis of a mental health organisation which can help the EA community make well-justified decisions about mental health interventions alongside more traditional global health interventions. Importantly, we do not aim to advocate for StrongMinds specifically, nor for mental health and subjective wellbeing measures more generally; we only aim to empower the community with tools and analysis to consider mental health interventions on an even footing alongside more traditional global health interventions. Our theory of change is that there is a potentially large funding pool available in the community, which may face bottlenecks due to uncertainty about mental health interventions, and that reducing this uncertainty could increase donor efficiency in grant allocations.
We think the groundwork laid by the Happier Lives Institute, especially the StrongMinds Cost Effectiveness Report, has been a massive step forward in providing in-depth analyses of mental health interventions. When we reviewed HLI’s work, we found their analysis to be incredibly useful and of high quality, and we modelled much of our own analysis on this research. Our ultimate conclusions may differ somewhat from HLI’s, as our meta-analysis will be independent and will weigh some sources of evidence differently. However, we agreed with, and borrowed for our own analysis, their core methodology: determine an initial effect size, assume exponential decay of that effect size, and then integrate over time to find the area under that curve, which yields the endline morally important metric.
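In symbols (using our own notation rather than HLI’s), the calculation looks roughly like this: if the initial effect is d standard deviations and it decays exponentially at a rate λ per year, the total effect is the area under the decay curve.

```latex
% Sketch of the decay-and-integrate step, in our own notation (d and \lambda are ours, not HLI's).
% Effect remaining t years after treatment:
\[ \mathrm{effect}(t) = d \, e^{-\lambda t} \]
% Total effect (e.g. SD-years of depression averted) = area under that curve:
\[ \int_0^{\infty} d \, e^{-\lambda t} \, dt = \frac{d}{\lambda} \]
```

In practice the integral may be taken over a finite horizon rather than to infinity, and further adjustments apply, but the shape of the calculation is the same.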
It would be fair to ask: given that HLI has already done a pretty powerful analysis, why is a second analysis necessary? We believe that further analysis by SoGive can add value, even setting aside the facts that we may reach different conclusions and that an independent review provides comfort, for three reasons:
First, the Happier Lives Institute advocates for evaluating everything in terms of subjective wellbeing. While HLI commits only to the use of wellbeing measures, and does not commit to psychotherapy in particular, it’s understandable for a donor to wonder whether the recommendation of psychotherapy interventions was a foregone conclusion. In response to recent criticisms, Joel from HLI commented that:

“The implication, which others endorsed in the comments, seems to be that HLI’s analysis is biased because of a perceived relationship with StrongMinds or an entrenched commitment to mental health as a cause area which compromises the integrity of our research … I think the concern is that we’ve already decided what we think is true, and we aim to prove it.”
We do not ourselves believe HLI’s analysis to have been strongly influenced by a desire to inflate the case for psychotherapy, but we think it would understandably reassure everyone to see that analysts who are not philosophically committed to subjective wellbeing metrics have checked and independently verified the claims. Unlike HLI, we have a history of recommending a diverse range of non-mental-health organisations in global health.
Through a combination of checking and repackaging HLI’s analysis and conducting our own independent analysis, we aim to make the impact of psychotherapy comparable side by side with non-mental-health outcomes, using moral weights frameworks that are intuitive to adjust for someone who doesn’t necessarily want to use SWB measures for all moral judgements. You’ll be able to input your own moral weights for how a given increase in subjective wellbeing compares with an increase in income or a life saved.
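As a purely hypothetical sketch of the kind of adjustable comparison we have in mind (the option names, outcome figures, and weights below are placeholders we made up for illustration, not anyone’s actual estimates):

```python
# Hypothetical sketch of a donor-adjustable moral weights comparison.
# The option names, outcome figures, and weights are illustrative
# placeholders, not SoGive's moral weights or cost-effectiveness estimates.

# A donor's own moral weights, expressed relative to one "unit" of value:
value_per_sd_year_depression_averted = 1.0  # used here as the reference unit
value_per_income_doubling_year = 0.5        # value of a year of doubled consumption
value_per_life_saved = 50.0                 # value of averting one death

# Placeholder outcomes per $1,000 donated for three stylised options:
options = {
    "mental health charity": {"sd_years": 12.0, "income_doubling_years": 0.0, "lives_saved": 0.0},
    "cash transfer charity": {"sd_years": 0.0, "income_doubling_years": 8.0, "lives_saved": 0.0},
    "anti-malaria charity":  {"sd_years": 0.0, "income_doubling_years": 0.0, "lives_saved": 0.2},
}

for name, outcomes in options.items():
    total_value = (outcomes["sd_years"] * value_per_sd_year_depression_averted
                   + outcomes["income_doubling_years"] * value_per_income_doubling_year
                   + outcomes["lives_saved"] * value_per_life_saved)
    print(f"{name}: {total_value:.1f} value units per $1,000 (under these illustrative weights)")
```

Changing the three weights at the top changes the ranking, which is the point: the analysis supplies the outcome estimates, and the donor supplies the moral weights.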
Second, we have some concerns that HLI’s existing analysis, while fundamentally sound, is difficult to understand and audit. For instance, in his critique of StrongMinds, Simon_M commented:

“I’m going to leave aside discussing HLI here. Whilst I think they have some of the deepest analysis of StrongMinds, I am still confused by some of their methodology, it’s not clear to me what their relationship to StrongMinds is.”
While there has been some critical engagement with HLI’s analysis, we’re worried that there may be a gap where only a few full-time analysts find the work accessible, while grantmakers with less time for analysis are left to wonder whether to trust the conclusions. We also found it difficult to replicate a few of the numbers in the work, as not all calculations were explicitly explained, so we couldn’t double-check all findings and rule out the possibility of errors and inconsistencies (though these are unlikely to dramatically change the headline conclusion). We aim to stress legibility, explaining every step of the analysis, to create a report that the “average” EA can more easily understand and audit.
Third, SoGive’s analysis will take a slightly different approach. Where HLI’s report on StrongMinds is a higher-level academic meta-analysis, we plan to additionally do some in-depth exploration picking apart the individual papers that are most relevant to StrongMinds, laying bare any methodological flaws that will influence our inferences about the generalisability of the findings. We’re also hoping to present a more in-depth analysis of StrongMinds’ internal monitoring and evaluation (M&E) data, although this is pending agreement with StrongMinds about how much work they can put in and about the privacy considerations around sharing their data with us.
It is our hope that this research will provide clarity about mental health interventions: how effective they are, how to think about the moral weights surrounding them, and how to allocate resources efficiently when comparing them with other types of global health intervention.
About SoGive: SoGive does EA research and supports major donors. If you are a major donor seeking support with your donations, we’d be keen to work with you. Feel free to contact Sanjay on sanjay@sogive.org.