In your view, what would it look like for EA to pay sufficient attention to mental health?
To me, it looks like there's a fair amount of engagement on this:
Peter Singer obviously cares about the issue, and he's a major force in EA by himself.
Michael Plant's last post got a positive writeup in Future Perfect and serious engagement from a lot of people on the Forum and on Twitter (including Alexander Berger, who probably has more influence over neartermist EA funding than any other person); Alex was somewhat negative on the post, but at least he read it.
Forum posts with the "mental health" tag generally seem to be well-received.
Will MacAskill invited three very prominent figures to run an EA Forum AMA on psychedelics as a promising mental health intervention.
Founders Pledge released a detailed cause area report on mental health, which makes me think that a lot of their members are trying to fund this area.
EA Global has featured several talks on mental health.
I can't easily find engagement with mental health from Open Phil or GiveWell, but this doesn't seem like an obvious sign of neglect, given the variety of other health interventions they haven't closely engaged with.
I'm limited here by my lack of knowledge w/r/t funding constraints for orgs like StrongMinds and the Happier Lives Institute. If either org were really funding-constrained, I'd consider them to be promising donation targets for people concerned about global health, but I also think that those people - if they look anywhere outside of GiveWell - have a good chance of finding these orgs, thanks to their strong presence on the Forum and in other EA spaces.
I've only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke's specific concerns about subjective wellbeing separately in a reply to his comment.
TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.
I raised the "SWB and mental health might really matter" concerns in meetings with GiveWell staff about once a year since 2015. Before 2021, my experience was that they more or less dismissed my concerns, even though they didn't seem familiar with the relevant literature. When I asked what their specific doubts were, these were vague and seemed to change each time ("we're not sure you can measure feelings", "we're worried about experimenter demand effect", etc.). I'd typically point out that their concerns had already been addressed in the literature, but that still didn't seem to make them more interested. (I don't recall anyone ever mentioning "item response theory", which Luke raises as his objection.) In the end, I got the impression that GiveWell staff thought I was a crank and were hoping I would just go away.
GiveWell's public engagement has been almost non-existent. When HLI published, in August 2020, a document explaining how GiveWell could (re)estimate their own "moral weights" using SWB, GiveWell didn't comment on this (a Founders Pledge researcher did, however, provide detailed comments). The first and only time GiveWell has responded publicly about this was in December 2020, when they set out their concerns in relation to our cash transfer vs therapy meta-analyses; I've replied to those comments (many of which expressed quite non-specific doubts) but not yet received a follow-up.
The response I was hoping for - indeed, am still hoping for - was the one Will et al. gave above, namely, "We're really interested in serious critiques. What do you think we're getting wrong, why, and what difference would it make if you were right? Would you like us to fund you to work on this?" Obviously, you wouldn't expect an organisation to engage with critiques that are practically unimportant and from non-credible sources. In this case, however, I was raising fundamental concerns that, if true, could substantially alter the priorities, both for GiveWell and EA more broadly. And, for context, at the time I initially highlighted these points I was doing a philosophy PhD supervised by Hilary Greaves and Peter Singer, and the measurement of wellbeing was a big part of my thesis.
There has been quite good engagement from other EAs and EA orgs, as Aaron Gertler notes above. I can add to those that, for instance, Founders Pledge have taken SWB on board in their internal decision-making and have since made recommendations in mental health. However, GiveWell's lack of engagement has really made things difficult because EAs defer so much to GiveWell: a common question I get is "ah, but what does GiveWell think?" People assume that, because GiveWell didn't take something seriously, that was strong evidence they shouldn't either. This frustration was compounded by the fact that, because there wasn't a clear, public statement of what GiveWell's concerns were, I could neither try to address those concerns nor placate the worries of others by saying something like "GiveWell's objection is X. We don't share that because of Y".
This is pure speculation on my part, but I wonder if GiveWell (and perhaps Open Phil too) developed an "ugh field" around subjective wellbeing and mental health. They didn't look into it initially because they were just too damn busy. But then, after a while, it became awkward to start engaging with because that would require admitting they should have done so years ago, so they just ignored it. I also suspect there's been something of an information cascade where someone originally looked at all this (see my reply to Luke above), decided it wasn't interesting, and then other staff members just took that on trust and didn't revisit it - everyone knew an idea could be safely ignored even if they weren't sure why.
Since 2021, however, things have been much better. In late 2020, as mentioned, HLI published a blog post showing how SWB could be used to (re)estimate GiveWell's "moral weights". I understand that some of GiveWell's donors asked them for an opinion on this and that pushed them to engage with it. HLI had a productive conversation with GiveWell in February 2021 (see GiveWell's notes) where, curiously, no specific objections to SWB were raised. GiveWell are currently working on a blog post responding to our moral weights piece, and they kindly shared a draft with us in July asking for our feedback. They've told us they plan to publish reports on SWB and psychotherapy in the next 3-6 months.
Regarding Open Phil, it seemed pointless to engage unless GiveWell came on board, because Open Phil also defer strongly to GiveWell's judgements, as Alex Berger has recently stated. However, we recently had some positive engagement from Alex on Twitter, and a member of his team contacted HLI for advice after reading our report and recommendations on global mental health. Hence, we are now starting to see some serious engagement, but it's rather overdue and still less fulsome than I'd want.
Really sad to hear about this, thanks for sharing. And thank you for keeping at it despite the frustrations. I think you and the team at HLI are doing good and important work.
To me (as someone who has funded the Happier Lives Institute), I just think it should not have taken founding an institute and 6 years of repeating this message (and feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.
I think expecting orgs and donors to change direction is certainly a very high bar. But I don't think we should pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.
FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.
At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant - I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.
That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team has been looking into the topic again as that team has gained more research capacity in the past year or two.
Hello Luke, thanks for this, which was illuminating. I'll make an initial clarifying comment and then go on to the substantive issues of disagreement.
At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant - I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.
I'm not sure what you mean here. Are you saying GiveWell didn't repeatedly ignore the work? That Open Phil didn't? Something else? As I set out in another comment, my experience with GiveWell staff was of being ignored by people who weren't all that familiar with the relevant literature - FWIW, I don't recall the concerns you raise in your notes being raised with me. I've not had interactions with Open Phil staff prior to 2021 (for those reading, Luke and I have never spoken), so I'm not able to comment regarding that.
Onto the substantive issues. Would you be prepared to state more precisely what your concerns are, and what sort of evidence would change your mind? Reading your comments and your notes, I'm not sure exactly what your objections are and, insofar as I do, they don't seem like strong objections.
You mention "weakly validated measures" as an issue, but in the text you say "for some scales, reliability and validity have been firmly established", which implies to me you think (some) scales are validated. So which scales are you worried about, to what extent, and why? Are they so non-validated that we should think they contain no information? If some scales are validated, why not just use those ones? By analogy, we wouldn't give up on measuring temperature if we thought only some of our thermometers were broken. I'm not sure if we're even on the same page about what it is to "validate" a measure of something (I can elaborate, if helpful).
On "unconvincing intervention studies", I take it you're referring to your conversation notes with Sonja Lyubomirsky. The "happiness interventions" you talk about are really just those from the field of "positive psychology" where, basically, you take mentally healthy people and try to get them to change their thoughts and behaviours to be happier, such as by writing down what they're grateful for. This implies a very narrow interpretation of "happiness interventions". Reducing poverty or curing diseases are "happiness interventions" in my book because they increase happiness, but they are certainly not positive psychology interventions. One can coherently think that subjective wellbeing measures, e.g. self-reported happiness, are valid and capture something morally important but deny that gratitude journalling etc. are particularly promising ways, in practice, of increasing it. Also, there's a big difference between the lab-style experiments psychologists run and the work economists tend to do looking at large panel and cohort data sets.
Regarding "one entire literature using the wrong statistical test for decades", again, I'm not sure exactly what you mean. Is the point about "item response theory"? I confess that's not something that gets discussed in the academic world of subjective wellbeing measurement - I don't think I've ever heard it mentioned. After a quick look, it seems to be a method to relate scores of psychometric tests to real-world performance. That seems to be a separate methodological ballgame from concerns about the relationship between how people feel and how they report those feelings on a numerical scale, e.g. when we ask "how happy are you, 0-10?". Subjective wellbeing researchers do talk about the issue of "scale cardinality", i.e., roughly, does your "7/10" feel the same to you as my "7/10" does to me? This issue has started to get quite a bit of attention in just the last couple of years but has, I concede, been rather neglected by the field. I've got a working paper on this under review which is (I think) the first comprehensive review of the problem.
To me, it looks like in your initial investigation you had the bad luck to run into a couple of dead ends and, quite understandably given those, didn't go further. But I hope you'll let me try to explain further to you why I think happiness research (like happiness itself) is worth taking seriously!
Hi Michael,
I don't have much time to engage on this, but here are some quick replies:
I don't know anything about your interactions with GiveWell. My comment about ignoring vs. not-ignoring arguments about happiness interventions was about me / Open Phil, since I looked into the literature in 2015 and have read various things by you since then. I wouldn't say I ignored those posts and arguments; I just had different views than you about likely cost-effectiveness etc.
On "weakly validated measures," I'm talking in part about the lack of IRT validation studies for SWB measures used in adults (NIH funded such studies for SWB measures in kids but not adults, IIRC), but also about other things. The published conversation notes only discuss a small fraction of my findings/thoughts on the topic.
On "unconvincing intervention studies", I mean interventions from the SWB literature, e.g. gratitude journals and the like. Personally, I'm more optimistic about health and anti-poverty interventions for the purpose of improving happiness.
On "wrong statistical test," I'm referring to the section called "Older studies used inappropriate statistical methods" in the linked conversation notes with Joel Hektner.
TBC, I think happiness research is worth engaging with and has things to teach us, and I think there may be some cost-effective happiness interventions out there. As I said in my original comment, I moved on to other topics not because I think the field is hopeless, but because it was in a bad enough state that it didn't make sense for me to prioritize it at the time.
Hello Luke,
Thanks for this too. I appreciate you've since moved on to other things, so this isn't really your topic to engage on anymore. However, I'll make two comments.
First, you said you read various things in the area, including by me, since 2015. It would have been really helpful (to me) if, given you had different views, you had engaged at the time and set out where you disagreed and what sort of evidence would have changed your mind.
Second, and similarly, I would really appreciate it if the current team at Open Philanthropy could more precisely set out their perspective on all this. I did have a few interactions with various Open Phil staff in 2021, but I wouldn't say I've got anything like canonical answers on what their reservations are about (1) measuring outcomes in terms of SWB (Alex Berger's recent technical update didn't comment on this) and (2) doing more research or grantmaking into the things that, from the SWB perspective, seem overlooked.
This is an interesting conversation. It's veering off into a separate topic. I wish there were a way to "rebase" these spin-off discussions into a different place, for better organisation.
Thank you Luke - super helpful to hear!!
Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/Open Phil not to have funded more work in that area? (Founders Pledge released their report in early 2019 and was presumably working on it much earlier, so they wouldn't seem to be blameworthy.)
I can't say much more here without knowing the details of how Michael's/others' work was received when they presented it to funders. The situation I've outlined seems to be compatible both with "this work wasn't taken seriously enough" and "this work was taken seriously, but seen as a weaker thing to fund than the things that were actually funded" (which is, in turn, compatible with "funders were correct in their assessment" and "funders were incorrect in their assessment").
That Michael felt dismissed is moderate evidence for "not taken seriously enough". That his work (and other work like it) got a bunch of engagement on the Forum is weak evidence for "taken seriously" (what the Forum cares about =/= what funders care about, but the correlation isn't 0). I'm left feeling uncertain about this example, but it's certainly reasonable to argue that mental health and/or SWB hasn't gotten enough attention.
(Personally, I find the case for additional work on SWB more compelling than the case for additional work on mental health specifically, and I don't know the extent to which HLI was trying to get funding for one vs. the other.)
Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/Open Phil not to have funded more work in that area?
TL;DR: Hard to judge. Maybe: Yes for GiveWell. No for Open Phil. Mixed for the EA community as a whole.
I think I will slightly dodge the question and answer a separate one: are these orgs doing enough exploratory-type research? (I think this is a more pertinent question, and although I think subjective wellbeing is worth looking into as an example, it is not clear it is at the very top of the list of things to look into that might change how we think about doing good.)
Firstly, a massive caveat: I do not know for sure. Knowing exactly how seriously various orgs have looked into topics is very hard to do from the outside, so take the below with a pinch of salt. That said:
Open Phil: AOK.
Open Phil (neartermists) generally seem good at exploring new areas and experimenting (and, as Luke highlights, did look into this).
GiveWell: hmm, could do better.
GiveWell seem to have a pattern of saying they will do more exploratory research (e.g. into policy) and then not doing it (mentioned here; I think 2020 has seen some, but minimal, progress).
I am genuinely surprised GiveWell have not found things better than anti-malaria and deworming (sure, there are limits on how effective scalable charities can be, but it seems odd that our first guesses are still the top recommendations).
There is limited catering to anyone who is not a classical utilitarian - for example, if you care about wellbeing (e.g. years lived with disability) but not lives saved, it is unclear where to give.
EA in general: so-so.
There has been interest from EAs (individuals, Charity Entrepreneurship, Founders Pledge, EAG) in the value of happiness and addressing mental health issues, etc.
It is not just Michael. I get the sense the folk working on Improving Institutional Decision Making (IIDM) have struggled to get traction and funding and support too. (Although maybe promoters of new cause areas within EA always feel their ideas are not taken seriously.)
The EA community (not just GiveWell) seems very bad at catering to folk who are not roughly classical (or negative-leaning) utilitarians (a thing I struggled with when working as a community builder).
I do believe there is a lack of exploratory research happening given the potential benefits (see here and here). Maybe Rethink are changing this.
Not sure I really answered the question. And anyway, none of those points is strong evidence so much as me trying to explain my current intuitions. But maybe I said something of interest.