I don’t have much time to spend on this, but here are a few thoughts based on a quick skim of the paper.
The study was done by some of the world’s leading experts in wellbeing, and the study design seems okay-ish (‘waitlist randomisation’). The main concern with internal validity, which the authors acknowledge, is that changes in the biomarkers, while mostly heading in the right direction, were far from statistically significant. This could indicate that the effects reported on other measures were due to some factor other than actual improvement in subjective wellbeing (SWB), e.g. social desirability bias. But biomarkers are not a great metric, and the authors took steps to address these concerns, so I find it plausible that the effects in the study population were (nearly) as large as reported.
However:
- The participants were self-selected, largely from people who were already involved with Action for Happiness (“The charity aims to help people take action to create more happiness, with a focus on pro-social behaviour to bring happiness to others around them”), and largely in the UK. They also had to register online. It’s unclear how well the results would generalise to other populations.
- It’s quite an intensive program, involving weekly 2–2.5 hour group meetings with ~~a trained facilitator~~ two volunteer facilitators. (“Each of these sessions builds on a thematic question, for example, what matters in life, how to find meaning at work, or how to build happier communities.”) This may limit its scalability and accessibility for certain groups.
- Follow-up was only for 2 months, the duration of the course itself. (This limitation seems to be built into the study design: the control group was people taking the course 8 weeks later.)
- The effect sizes for depression and anxiety were smaller than for CBT, so it may still not be the best option for mental health treatment (though the CBT studies were done in populations with a diagnosed mental disorder, so direct comparison is hard; and subgroup analyses showed that people with lower baseline wellbeing benefited most from the program).
- For clarity, the average effect size for life satisfaction was about 1 point on a 10-point scale. This is good compared to most wellbeing interventions, but that might say more about how ineffective most other interventions are than about how good this one is.
So at the risk of sounding too negative: it’s hardly surprising that people who are motivated enough to sign up for and attend a course designed to make them happier do in fact feel a bit happier while taking the course. It seems important to find out how long these effects endure, and whether the course is suitable for a broader range of people.
This is how they describe their facilitators:

> The course is manualised and scalable: each course is led by two volunteers – screened by Action for Happiness for motivation and skills, and once approved, provided with structured resources – as facilitators on an unpaid basis in their local communities. Recruitment of course leaders follows a carefully documented, standardised process: each candidate completes a Leader Registration process sharing their motivation and skills and is given clear instructions on what is required. Once potential course leaders have a co-leader, venue, and dates in mind, they complete a Course Application process. The team at Action for Happiness reviews this application and, if all criteria are met, arranges a call to discuss next steps. Once a course is fully approved, course leaders receive on-going guidance and support. There is also a post-course follow-up process.
Not sure if that’s what you had understood and meant by ‘trained facilitator’ (just wanted to make it clear that it doesn’t mean a licensed behavioural therapist or something).
Thanks—“trained facilitator” might be a bit misleading. Still, it looks like there were two volunteer course leaders for each course, selected in part for their unspecified “skills”, who were given “on-going guidance and support” to facilitate the sessions, and who have to arrange a venue etc. themselves, then go through a follow-up process when it’s over. So it’s not a trivial amount of overhead for an average of 13 participants.
Thanks for your thoughts!

Yes, regarding persistence they also note:

> To look at treatment effect persistence, we exploit data points at follow-up in an extended sample. As all respondents have been treated at follow-up, we cannot estimate causal effects, so that results are exploratory.
Thanks—I missed that on my skim. But the “extended” follow-up is only for another two months. It does seem to indicate that effects persist for at least that period, without any trend towards baseline, which is promising (though without a control group the counterfactual is impossible to establish with confidence). I wonder why they didn’t continue to collect data beyond this period.