Thanks for this interesting analysis! Do you have a link to Foster’s analysis of Mind Ease’s impact?
How do you think the research on Mind Ease’s impact compares to that behind GiveWell’s top charities? Based on your description of Hillebrandt’s analysis, for example, the evidence seems weaker than, e.g., the several randomized controlled trials supporting bed net distribution. Do you think discounting on this basis could substantially affect the cost-effectiveness? (Given how much lower Foster’s estimate of impact is, though, and that it is weighted more heavily in the overall cost-effectiveness estimate, I would be interested to know whether it has a stronger evidence base.)
Hi Robert, Foster’s analysis currently isn’t publicly available, but more details from it are available in TPP’s full report on Mind Ease. To my knowledge, Foster’s lower efficacy estimate did not result from stronger evidence, but from skepticism about the longer-term persistence of effects from anxiety-reduction methods, and from analyzing the Mind Ease app as it is in the present, without incorporating either the potential emergence of stronger evidence of the app’s efficacy or future work on the app. As one would expect, Mind Ease plans to provide even stronger efficacy data and to develop the app further as time goes on.
As mentioned in our writeup, TPP used the lower estimate “because if Mind Ease still looks attractive in this ‘worst case,’ then it would definitely look attractive with weights that are less tilted towards Foster’s estimate,” rather than weighting by the strength of a particular estimate. I think it’s quite possible that Mind Ease’s expected impact is considerably higher than the conservative example estimate shared in this writeup. Different framings, such as the Happier Lives Institute’s findings on the cost-effectiveness of mental health interventions, can also yield much higher estimates of Mind Ease’s expected impact relative to GiveWell’s top charities.
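To make the ‘worst case’ logic concrete, here is a minimal sketch assuming a simple linear blend of the two estimates (the weighting scheme and the symbols are my illustration, not necessarily TPP’s actual model): write Foster’s lower estimate as E_F and Hillebrandt’s higher estimate as E_H, and combine them as

E(w) = w·E_F + (1 − w)·E_H, where 0 ≤ w ≤ 1 is the weight on Foster’s estimate.

Because E_F < E_H, E(w) can only increase as w falls. So if Mind Ease clears the cost-effectiveness bar with w near 1 (the conservative case), it clears it under every weighting that is less tilted towards Foster’s estimate.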
I personally haven’t spent much time looking over GiveWell’s evaluations, but Hauke’s full report “avoid[s] looking at individual studies and focus[es] mostly on meta-analyses and systematic reviews of randomized controlled trials.” I’d expect someone’s assessment of Mind Ease’s impact to generally track their view of how cost-effective digital mental health interventions are compared to no treatment (Hauke’s report notes that most people suffering from depression and anxiety do not receive timely treatment, or any treatment at all, particularly in developing countries).
Just to add, for the record, that we released most of Hauke’s work because it was a meta-analysis that we hope contributes to the public good. We haven’t released either Hauke’s or Derek’s analyses of Mind Ease’s proprietary data, though, of course, their estimates and conclusions are discussed at a high level in the case study.