Hello Henry. It may look like we’re just leaning on two RCTs, but we’re not! If you read further down in the ‘cash transfers vs treating depression’ section, we mention that we compared cash transfers to talk therapy on the basis of a meta-analysis of each.
The evidence base for therapy is explained in full in Section 4 of our StrongMinds cost-effectiveness analysis. We use four direct studies and a meta-analysis of 39 indirect studies (n > 38,000). You can see how much weight we give to each source of evidence in Table 2, reproduced below. To be clear, we don’t take the results from StrongMinds’ own trials at face value: in effect, we use an average figure for the effect size, even though StrongMinds’ own trials find a higher one.
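For readers who want the mechanics, here is a minimal sketch of what “weighting each source of evidence” amounts to. The source names, effect sizes, and weights below are hypothetical placeholders, not the actual figures from Table 2:

```python
# Hypothetical sketch of combining effect-size estimates with explicit
# weights. The names, effect sizes, and weights are placeholders, NOT
# the actual values from Table 2 of the cost-effectiveness analysis.

sources = {
    # source: (effect size in standard deviations, weight)
    "StrongMinds direct evidence": (1.40, 0.2),        # hypothetical
    "meta-analysis of indirect studies": (0.60, 0.8),  # hypothetical
}

combined = sum(e * w for e, w in sources.values()) / sum(
    w for _, w in sources.values()
)
print(f"Combined effect size: {combined:.2f} SDs")  # -> 0.76 SDs
```

The point of the weighting is that a large, high-quality indirect literature can pull the estimate well below what the charity’s own trials report.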
Also, what’s wrong with the self-reports? People are self-reporting how they feel. How else should we determine how people feel? Should we just ignore them and assume that we know best? What’s more, we’re comparing self-reports to other self-reports, so it’s unclear what bias we would need to worry about.
Regarding the issue of comparing saving lives to improving lives, we’ve just written a whole report on how to think about that. We’re hoping that, by bringing these difficult issues to the surface rather than glossing over them, as normally happens, people can make better-informed decisions. We’re very much on your side: we think people should be thinking harder about which one does more good.
I haven’t looked in detail at how GiveWell evaluates evidence, so maybe you’re no worse here, but I don’t think a “weighted average of published evidence” is appropriate when one has concerns about the quality of that evidence. Furthermore, I think some level of concern about the quality of published evidence should be one’s baseline position; i.e. a weighted average is only appropriate when there are unusually strong reasons to think the published evidence is good.
I’m broadly supportive of the project of evaluating impacts on happiness.
Hi David,
You’re right that we should be concerned about the quality of published evidence. I discounted psychotherapy’s effect by 17% for having a higher risk of effect inflation than cash transfers (see Appendix C of McGuire & Plant, 2021). However, this was a first pass at a fundamental problem in science, and I recognize we could do better here.
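To make the arithmetic of that discount explicit, here is a toy sketch; only the 17% figure comes from the appendix, and the raw effect size is a made-up placeholder:

```python
# Toy illustration of the effect-inflation adjustment described above.
# Only the 17% discount comes from Appendix C of McGuire & Plant (2021);
# the raw effect size is a hypothetical placeholder.

raw_effect = 0.80          # hypothetical pooled effect size (SDs)
inflation_discount = 0.17  # discount for higher risk of effect inflation

adjusted_effect = raw_effect * (1 - inflation_discount)
print(f"Adjusted effect: {adjusted_effect:.3f} SDs")  # -> 0.664 SDs
```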
We’re planning to revisit this analysis and improve our methods, but we’re currently prioritizing finding new interventions over improving our analyses of old ones. Unfortunately, we don’t currently have the research capacity to do both well!