Thanks for posting that. I’m really excited about HLI’s work in general, and especially the work on the kinds of effects you are trying to estimate in this post!
I personally don’t have a clear picture of how much $ / WELLBY is considered good (whereas GiveWell’s estimates for their leading charities are around 50-100 $ / QALY). Do you have a table or something like that on your website, summarizing your results for the charities you found to be highly effective, for reference?
Thanks again!
Hello,
Glad to hear you’re excited!
Unfortunately, we do not yet have a clear picture of how many WELLBYs per dollar is a good deal. Cash transfers are the first intervention we (and, I think, anyone) have analyzed in this manner. Figuring this out is my priority, and I will soon review the cost-effectiveness of other interventions, which should give more context. To give a sneak peek, cataract surgery is looking promising in terms of cost-effectiveness compared to cash transfers.
I see, thanks for the teaser :)
I was under the impression that you had rough estimates for some charities (e.g. StrongMinds). Looking forward to seeing your future work on that.
Those estimates are still in the works, but stay tuned!
I realized my previous reply might have been a bit misleading, so I am adding this addendum.
There are previous WELLBY-like calculations, such as Michael’s comparison of StrongMinds to GiveDirectly in his 2018 Mental Health cause profile, or those in Origins of Happiness / Handbook for Wellbeing Policy Making in the UK. Why do we not compare our effects to these previous efforts? Most previous estimates looked at correlational effects and give no clear estimate of the total effect through time.
An aside: an example of these results communicated well is Micah Kaats’ thesis (which I think was related to HLI’s WALY report). It shows the relationship of different maladies to life satisfaction and contextualizes it with the effects of common life events.
Moving from standard deviations to points on a 0-11 scale is a further difficulty.
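The conversion itself is simple arithmetic; here is a minimal sketch, assuming a hypothetical population standard deviation for the life-satisfaction scale (the real value would have to come from the underlying survey data, which is exactly the difficulty):

```python
# Hypothetical sketch: converting a standardized effect size (in SDs)
# into points on a life-satisfaction scale. The SD value used below is
# an illustrative assumption, not a figure from HLI's analysis.

def sd_to_points(effect_in_sds: float, scale_sd: float) -> float:
    """Convert an effect expressed in standard deviations to scale points."""
    return effect_in_sds * scale_sd

# e.g. a 0.1 SD improvement, assuming the population SD of life
# satisfaction is roughly 2 points on the scale:
print(sd_to_points(0.1, 2.0))
```

The arithmetic is trivial; the hard part the comment alludes to is that the appropriate `scale_sd` varies by sample and survey instrument.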
Something else worth noting is that different estimation methods can lead to systematically different effect sizes. In the same thesis, Kaats shows that fixed effects models tend to produce lower effects.
This may make it seem as if the non-fixed-effects estimates are overestimates, but that’s only the case if you “are willing to assume the absence of dynamic causal relationships”; whether that assumption is reasonable will depend on the outcome.
As Michael did in his report on StrongMinds, and as Clark et al. did for two studies (moving to a better neighborhood and building cement floors in Mexico, p. 207) in Origins of Happiness, there have been cost-effectiveness estimates that take the duration of effects into consideration, but they address only single studies. We wish to have a good understanding of the evidence base as a whole before presenting estimates.
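To illustrate how duration enters such an estimate, here is a hedged sketch of a duration-adjusted WELLBY calculation. The effect size, decay rate, and cost are all made-up numbers for illustration, not figures from the post, Michael’s report, or Clark et al.:

```python
# Hypothetical sketch: cost-effectiveness with effect duration modeled
# as geometric decay. 1 WELLBY = a 1-point life-satisfaction gain
# sustained for 1 year. All inputs below are illustrative assumptions.

def total_wellbys(initial_effect_points: float, annual_decay: float, years: int) -> float:
    """Sum a per-person effect (in scale points) over time, with the
    effect shrinking by `annual_decay` each year."""
    return sum(initial_effect_points * (1 - annual_decay) ** t for t in range(years))

def cost_per_wellby(cost_per_person: float, wellbys: float) -> float:
    """Dollars spent per WELLBY generated."""
    return cost_per_person / wellbys

# e.g. a 0.5-point initial effect decaying 30% per year over 10 years,
# at an assumed $1000 cost per person:
w = total_wellbys(0.5, 0.3, 10)
print(round(cost_per_wellby(1000.0, w), 2))
```

The point of the sketch is that the decay assumption drives the answer: with no decay the same effect yields several times more WELLBYs, which is why single-study duration estimates can mislead.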
To further explain this last point: my view is that the more scrutiny is applied to an effect, the more it diminishes (I can’t cite a good discussion of this at the moment). Comparing the cost-effectiveness of a single study to our synthesis could give the wrong impression. In our synthesis we try hard to include all the relevant studies, whereas it’s plausible that the first study we come across of an alternative well-being-enhancing intervention is exceptionally optimistic in its effects.
Just on the different effect sizes from different methods, where do/would RCT methods fit in with the four discussed by Kaats?
FWIW, I agree that a meta-analysis of RCTs isn’t a like-for-like comparison with a single RCT. That said, when (if?) we exhaust the existing SWB literature relevant to cost-effectiveness, we should present everything we find (which shouldn’t be hard, as there’s not much!).
Thank you for following up and clarifying that.