I realized my previous reply might have been a bit misleading, so I'm adding this as an addendum.
There are previous calculations that include WELLBY-like estimates, such as Michael's comparison of StrongMinds to GiveDirectly in his 2018 mental health cause profile, or those in Origins of Happiness / the Handbook for Wellbeing Policy Making in the UK. Why don't we compare our effects to these previous efforts? Because most previous estimates looked at correlational effects and give no clear estimate of the total effect through time.
An aside: a good example of these results communicated well is Micah Kaats' thesis (which I think was related to HRI's WALY report). It shows the relationship of different maladies to life satisfaction and contextualizes it with the effects of common life events.
Moving from standard deviations to points on a 0-11 scale is a further difficulty.
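To make the conversion concrete, here is a minimal sketch (the SD value below is a hypothetical assumption for illustration, not a figure from any of the work discussed): a standardized effect (Cohen's d) only becomes scale points once you commit to an estimate of the outcome scale's population standard deviation, and that choice drives the result.

```python
def sd_to_points(cohens_d: float, scale_sd: float) -> float:
    """Convert a standardized effect size into raw points on a well-being scale.

    cohens_d: effect expressed in standard-deviation units.
    scale_sd: assumed population SD of the scale (hypothetical here;
              in practice it must be estimated from the sample at hand).
    """
    return cohens_d * scale_sd

# e.g. an effect of 0.3 SDs, under an assumed population SD of 2 points:
print(sd_to_points(0.3, 2.0))  # 0.6 points on the scale
```

The same 0.3-SD effect would be 0.45 points under an assumed SD of 1.5, which is one reason the conversion is a genuine difficulty rather than a mechanical step.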
Something else worth noting is that different estimation methods can lead to systematically different effect sizes. In the same thesis, Kaats shows that fixed-effects models tend to yield smaller effects.
This may make it seem as if the non-fixed-effects estimates are overestimates, but that's only if you "are willing to assume the absence of dynamic causal relationships"; whether that's reasonable will depend on the outcome.
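This is not Kaats' analysis, but a minimal simulated sketch of the mechanism: when a time-invariant individual trait drives both the exposure and well-being, a pooled regression absorbs that between-person confounding, while a fixed-effects (within-person, demeaned) estimate strips it out and lands closer to the true effect. All parameter values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, beta = 500, 5, 0.5                 # n people, T periods; true causal effect 0.5

alpha = rng.normal(size=n)               # time-invariant individual trait
# exposure is partly driven by the same trait (the confounding channel)
x = 0.7 * alpha[:, None] + rng.normal(size=(n, T))
y = beta * x + alpha[:, None] + rng.normal(size=(n, T))

def slope(x, y):
    """Simple OLS slope of y on x (both flattened to 1-D)."""
    x, y = x.ravel(), y.ravel()
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

pooled = slope(x, y)                     # ignores individual traits: biased upward
within = slope(x - x.mean(axis=1, keepdims=True),   # fixed effects via per-person
               y - y.mean(axis=1, keepdims=True))   # demeaning

print(f"pooled OLS: {pooled:.2f}, fixed effects: {within:.2f}")
```

In this setup the pooled slope comes out well above 0.5 while the within estimate sits near it, mirroring the pattern of fixed-effects estimates being smaller; of course, if the extra covariance reflected a genuine dynamic causal pathway rather than confounding, demeaning would be removing part of the real effect.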
As Michael did in his StrongMinds report, and as Clark et al. did for two studies in Origins of Happiness (moving to a better neighborhood and building cement floors in Mexico, p. 207), there have been cost-effectiveness estimates that take the duration of effects into account, but they address only single studies. We wish to have a good understanding of the evidence base as a whole before presenting estimates.
To further explain this last point: I hold the view that the more scrutiny is applied to an effect, the more it diminishes (I can't cite a good discussion of this at the moment). Comparing the cost-effectiveness of a single study to our synthesis could give the wrong impression. In our synthesis we try hard to include all the relevant studies, because it's plausible that the first study we come across of an alternative well-being-enhancing intervention is exceptionally optimistic in its effects.
Just on the different effect sizes from different methods: where do/would RCT methods fit in with the four discussed by Kaats?
FWIW, I agree that a meta-analysis of RCTs isn't a like-for-like comparison to a single RCT. That said, when (if?) we exhaust the existing SWB literature relevant to cost-effectiveness, we should present everything we find (which shouldn't be hard, as there's not much!).
Thank you for following up and clarifying that.