Notes on supporting Happier Lives Institute

This is a cross-posted comment from the Clearer Thinking regranting competition on Manifold Markets (with a couple of minor edits and typo corrections).

Intro

What is it about?

The article describes why I think the Happier Lives Institute should receive funding through the Clearer Thinking regranting round (Clearer Thinking organized a tournament on Manifold Markets to crowd-evaluate which projects should receive funding).

Abstract

This is by no means a comprehensive review of the Happier Lives Institute (HLI); I was exposed to HLI's work only relatively recently. Think of it as a quick introduction to what the Happier Lives Institute does, plus a subjective assessment of the potential value threads they are creating.

In the following text, I argue that HLI brings value in two dimensions. One is their work increasing well-being directly – they evaluate and support the most cost-effective organizations globally. The other is their work applying and stress-testing the Subjective Well-Being framework (SWB). I think having an alternative, thoroughly researched framework like SWB has a high expected value for the EA community. Most EA organizations rely on the QALY+ framework, so this work can help diversify worldviews and calibrate the judgments of the main EA organizations.

Why am I writing this?

I am posting this on the Forum because some of the ideas may apply more broadly, e.g. examining why engagement within the EA community in projects that increase well-being directly is relatively low. I would also love to hear feedback; notes on my reasoning or on the Happier Lives Institute's approach to increasing global well-being are welcome.

Epistemic status

I am probably biased because I voted yes on this market during the Clearer Thinking tournament on Manifold Markets. I had prior exposure to HLI, and when I saw the chances of HLI receiving a grant at 40%, I thought the prediction was way off.

Since then, I have spent a couple of days researching the topic: I watched several of HLI's YouTube lectures and read several of their EA Forum posts. I am not very knowledgeable about the internal mechanics of frameworks like SWB and QALY+, but I do have decent knowledge of and long exposure to topics like psychotherapy, well-being, and evidence-based therapies.

Who may be interested in reading this?

  • People interested in the well-being discourse.

  • People who are skeptical about EA organizations trying to increase well-being directly.

  • People who don't know HLI or don't understand the value it brings.

How to navigate through this document?

All the sections (marked by the titles) stand on their own and don’t need knowledge from previous sections to be understood. Feel free to skip around.

Abbreviations that are used in the text:

  • HLI – Happier Lives Institute

  • SWB – Subjective Well-Being framework

  • QALY – quality-adjusted life year framework


Utilitarianism and well-being

In many definitions of utilitarianism, well-being is the central, defining term. Take a generic one from Wikipedia: “Utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and well-being for all individuals”.

Well-being, however, is notoriously hard to define and measure. Perhaps that's why this area is relatively neglected within the EA community. Also, in the past, established frameworks like QALY+ didn't render opportunities in the space particularly impactful. Still, it seems bizarre that EA hasn't identified interventions that attempt to increase well-being directly; intuitively, there should be high-expected-value projects out there tackling the problem head-on.

Speculating a bit, there may be one more reason for the lack of interest in the community. People within EA seem highly analytical – the majority are engineers, economists, and mathematicians. Could demographics like this mean that people, on average, score lower on emotional intelligence skills, making the community less interested in projects optimizing this space?

Happier Lives Institute as an organization

In the simplest terms, the Happier Lives Institute is like a GiveWell that specializes in well-being. They identify and support the most cost-effective opportunities to increase global well-being.

Michael Plant, its founder, has been an active member of the EA Forum since 2015. He has written 26 posts, gathering more than 5.6k karma. He seems to have been interested in the subject matter since at least 2016, when he wrote his first post on the Forum asking Is effective altruism overlooking human happiness and mental health? I argue it is. His lectures on the subject seem clear and methodical, and they follow the community's best epistemic practices. He was Peter Singer's research assistant for two years, and Singer is an advisor to the institute.

The Clearer Thinking regrant would sponsor the salary of Dr. Lily Yu, who seems to have relevant experience at the intersection of science, health, entrepreneurship, and grant-making.

Neglected

The cause area seems neglected within EA. Besides HLI, I am aware of the EA Psychology Lab and Effective Self-Help, but neither does work as comprehensive as HLI's.

Subjective well-being framework

Even if the only value HLI offered were researching and donating to the most cost-effective opportunities to increase global well-being, I think it would be an outstanding organization to support.

However, HLI also applies and stress-tests the Subjective Well-Being framework (SWB) – work that the whole EA community can benefit from. Michael Plant describes the SWB methodology in this article and this lecture. Most leading EA orgs, like Open Philanthropy and GiveWell, use a different approach: the QALY+ framework.

I think a big chunk of HLI's value lies in running an alternative to the QALY+ framework and challenging its assumptions. Michael Plant does this in the essay A philosophical review of Open Philanthropy's Cause Prioritisation Framework. I won't attempt to summarize the topic here (please see the links above for details), but I will highlight a couple of the most interesting threads.

“It’s worth pointing out that QALYs and DALYs, the standard health metrics that OP, GiveWell, and others have relied on in their cause prioritization framework, are likely to be misleading because they rely on individuals’ assessments of how bad they expect various health conditions would be, not on observations of how much those health conditions alter the subjective wellbeing of those who have them (Dolan and Metcalfe, 2012) … our affective forecasts (predictions of how others, or our later selves, feel) are subject to focusing illusions, where we overweight the importance of easy-to-visualise details, and immune neglect, where we forget that we will adapt to some things and not others, amongst other biases (Gilbert and Wilson 2007).” Link

Also worth noting is that the SWB framework demonstrates a lot of potential in areas previously ignored by EA organizations:

“[We] at the Happier Lives Institute conducted two meta-analyses to compare the cost-effectiveness, in low-income countries, of providing psychotherapy to those diagnosed with depression compared to giving cash transfers to very poor families. We did this in terms of subjective measures of wellbeing and found that therapy is 9x more cost-effective” Link

HLI also looked at interventions backed by Open Philanthropy and GiveWell to compare QALY+ results with SWB results.

“I show that, if we understand good in terms of maximising self-reported LS [Life satisfaction], alleviating poverty is surprisingly unpromising whereas mental health interventions, which have so far been overlooked, seem more effective” Link

But how can this type of reasoning influence organizations like Open Philanthropy or GiveWell? Here Michael Plant describes how grant-making decisions can vary based on the weight given to different frameworks. The example concerns how the value lost through a death is assessed depending on the age at which it occurs.

“Perhaps the standard view of the badness of death is deprivationism, which states that the badness of death consists in the wellbeing the person would have had, had they lived. On this view, it’s more important to save children than adults, all else equal, because children have more wellbeing to lose.

Some people have an alternative view that saving adults is more valuable than saving children. Children are not fully developed, they do not have a strong psychological connection to their future selves, nor do they have as many interests that will be frustrated if they die. The view in the philosophical literature that captures this intuition is called the time-relative interest account (TRIA).

A third view is Epicureanism, named after the ancient Greek philosopher Epicurus, on which death is not bad for us and so there is no value in living longer rather than shorter.” Link

Prioritizing each of these views leads to different grant-making decisions (do we value children's or adults' lives more?). Plant thinks that GiveWell does an insufficient job in its modeling:

“On what grounds are the donor preferences [60% of their weight on this marker] is the most plausible weights … The philosophical literature is rich with arguments for and against each of the views on the badness of death (again, Gamlund and Solberg, 2019 is a good overview). We should engage with those arguments, rather than simply polling people… [Open Philanthropy] do not need to go ‘all-in’ on a single philosophical view. Instead, they could divide up their resources across deprivationism, TRIA, and Epicureanism in accordance with their credence in each view.” Link
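To make this concrete, here is a minimal, purely illustrative sketch of the credence-weighted approach Plant describes. All numbers (the well-being units and the credences) are hypothetical assumptions of mine, not HLI's or Open Philanthropy's figures; the point is only to show how shifting credence between deprivationism, TRIA, and Epicureanism changes whether saving children or adults looks more valuable.

```python
# A purely illustrative sketch (hypothetical numbers, not HLI's or OP's model):
# splitting moral weight across views of the badness of death, in proportion
# to one's credence in each, and seeing how that ranks saving children vs. adults.

# Value (in arbitrary well-being units) of averting one death under each view.
# All figures below are made up for illustration only.
VALUE_OF_SAVING = {
    "deprivationism": {"child": 60, "adult": 35},  # children lose more future well-being
    "TRIA":           {"child": 20, "adult": 35},  # weaker psychological connection discounts children's loss
    "epicureanism":   {"child": 0,  "adult": 0},   # death is not bad for the person who dies
}

# Hypothetical credences a grant-maker might hold in each view (they sum to 1).
CREDENCES = {"deprivationism": 0.5, "TRIA": 0.4, "epicureanism": 0.1}


def credence_weighted_value(beneficiary: str) -> float:
    """Expected value of saving one life of the given type, averaged over views."""
    return sum(CREDENCES[view] * VALUE_OF_SAVING[view][beneficiary] for view in CREDENCES)


if __name__ == "__main__":
    for who in ("child", "adult"):
        print(f"Saving one {who}: {credence_weighted_value(who):.1f} units")
    # With these made-up numbers: child = 38.0, adult = 31.5, so children come out
    # ahead; shifting enough credence toward TRIA eventually reverses the ranking.
    # The ranking depends on the credences, not on any single view taken as certain.
```

Plant's point, as I read it, is that making these weights explicit – whatever they turn out to be – is better than going all-in on one view or deferring entirely to donor polling.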

Personal reasons

I also see value in promoting evidence-based approaches to therapy because of my personal background. I grew up in Poland, a country that had a rough 19th and 20th century: partitions, uprisings, wars, the Holocaust, border changes, communism, and the post-communist transformation. Generational trauma is still present in my country.

I went through four types of therapy and only later stumbled upon evidence-based approaches. From my experience, it seems critical to pick the right therapy, because effectiveness varies widely. Approaches like cognitive behavioral therapy (CBT) or third-wave therapies tend to be more effective. (Third-wave therapies are evidence-based approaches built on CBT's foundations, but developed using human rather than animal models.)

In my country, as in many others, ineffective and unscientific approaches are still widespread, and often dominant. It seems valuable to have an organization with a high epistemic culture that assesses and promotes evidence-based interventions.

Counter-arguments

I think the work of HLI would be compromised if the SWB framework had major flaws. Reading Michael Plant's article on the subject makes me think that SWB is a well-researched and heavily discussed approach; however, I don't know much about its internal mechanics and haven't investigated its potential flaws.

Summary

I see value in HLI supporting interventions that increase global well-being directly. But I also see value in their work on the SWB framework. I think having an alternative, thoroughly researched framework like SWB has a high expected value for the whole community. The regrant will help HLI stress-test their assumptions and apply the framework to more organizations. HLI's work could influence leading EA organizations like Open Philanthropy or GiveWell – potentially helping recalibrate their recommendations and assessments.