I’m writing from the perspective of the Happier Lives Institute. We’re delighted that HLI’s work and the subjective wellbeing approach featured so prominently in this podcast. It was a really high-quality conversation, and kudos to host Rob Wiblin for doing such an excellent job putting forward our point of view. Quite a few people have asked us what we thought, so I’ve written up some comments.
I’ve split these into four main comments and a number of minor ones. To preview, the main comments are:
We’re delighted, but surprised, to hear GiveWell are now so positive about the SWB approach; we’re curious to know what changed their mind.
HLI and GiveWell disagree on what does the most good based on differences in how to interpret the evidence; we’d be open to an ‘adversarial collaboration’ to see if we can iron out those differences.
We’d love to do more research, but we’re currently funding constrained. If you – GiveWell or anyone else – want to see it, please consider supporting us!
Finally, Rob, it’s about time you had us on the podcast!
Main points
1. We’re delighted, but surprised, to hear GiveWell are now so positive about the SWB approach; we’re curious to know what changed their mind.
Elie Hassenfeld says the differences in opinion between HLI and GiveWell aren’t because HLI cares about SWB and GiveWell does not, but come down to differences of opinion in interpreting the data[1]. This is great news – we’re glad to see major decision-makers like GiveWell taking happiness seriously – but it is also news to us!
Listeners of the podcast may not know this, but I (first as a PhD student) and then HLI have been publicly advocating for SWB since about 2017 (e.g., 1, 2). I/we have also done this privately with a number of organisations, including GiveWell, whom I spoke to about once a year. Whilst lots of people were sympathetic, I could not seem to interest the various GiveWell staff I talked to. That’s why I was surprised when, earlier this year, GiveWell made its first public written comment on SWB and was tentatively in favour; Elie’s remarks on this week’s podcast seemed even more positive.
So, we’re curious to know how thinking inside GiveWell changed on this. It’s of interest to us, but I’m sure others would also like to know how change happens inside large organisations.
2. HLI and GiveWell disagree on what does the most good based on differences in how to interpret the evidence; we’d be open to an ‘adversarial collaboration’ to see if we can iron out those differences.
Elie explained that the reason GiveWell doesn’t recommend StrongMinds[2], which HLI does recommend, is differences in the interpretation of the empirical data. Effectively, what GiveWell did was look at our numbers, then apply some subjective adjustments for factors they thought were off. We previously wrote a long response to GiveWell’s assessment and don’t want to get stuck into all those weeds here. Elie says – and we agree! – that reasonable people can genuinely disagree on how to interpret the evidence. That’s why we’d be interested in an ‘adversarial collaboration’ to see if we can resolve our differences. I can see three areas of disagreement.
First, there’s the general theoretical issue of whether and how to make subjective adjustments to evidence. GiveWell are prepared to make adjustments, even if they’re not sure exactly how big those adjustments should be. For example, Elie says he’s unsure about the 20% reduction for the ‘experimenter demand effect’. Our current view is to be very reluctant to make adjustments without clear evidence of what size is justified. Our reluctance is motivated by cases such as this one: these ‘GiveWellian haircuts’ can really add up and change the results, so the conclusion ends up resting more on the researcher’s interpretation of the evidence than on the evidence itself. But we’re not sure how to think about this either!
A second, potentially more tractable, issue is clarifying what evidence would change our minds on the specific points of disagreement. For instance, if there were a well-conducted RCT that directly estimated the household spillover effects of psychotherapy in a setting similar to StrongMinds, we’d likely largely adopt that estimate.
A third issue is deworming. Elie and Rob discuss HLI’s reassessment of GiveWell’s deworming numbers, for which GiveWell very generously awarded us a prize. However, GiveWell haven’t commented – on the podcast or elsewhere – on our follow-up work, which finds that the available evidence suggests there are no statistically significant long-term effects of deworming on SWB. That analysis uses the exact same studies that GiveWell relies on; it suggests either that GiveWell aren’t as bought into SWB as Elie sounds, or that they haven’t integrated this evidence yet.
3. We’d love to do more research, but we’re currently funding constrained. If you – GiveWell or anyone else – want to see it, please consider supporting us!
As a small organisation – we’re just four researchers – we were pleased to see that our ideas and work are influencing the wider discussion about how to have the biggest impact. As the podcast highlights, HLI brings a different (reasonable) perspective, we have provided important checks on others’ work, we offer unique expertise in philosophy and wellbeing measurement, and we have managed to push difficult issues to the top of the agenda[3], [4], [5]. We think we ‘punch above our weight’.
Elie mentions a number of areas where he’d love to see more research, including on SWB and the difficult question of how to put numbers on the value of saving a life. We think we’d be very well placed to do this work, for the reasons given above; we’re not sure anyone else will do it, either (we understand GiveWell don’t have immediate plans, for instance). However, we don’t currently have the capacity to do more, and we can’t expand due to funding constraints. So, we’d love donors to step forward and support us!
Of course we’re biased, but we believe we’re a very high leverage, cost-effective funding opportunity for donors who want to see top-quality research that changes the paradigm on global wellbeing and how to do the most good. Please donate here or get in touch at Michael@happierlivesinstitute.org. We’re currently finalising our research agenda for 2023-4 (available on request).
4. Finally, Rob, it’s about time you had us on the podcast!
We’ve got much more to say about the topics covered, plus other issues besides: longtermism, moral uncertainty, etc. (Rob has said we’re on the list, but it might take a while because of the whole AI thing that’s been blowing up, which seems fair.)
[1] “I think ultimately what it comes down to is we have a different interpretation of the empirical data — meaning we look at the same empirical data and reach different conclusions about what it means for the likely impact of the programme in the real world.”
[2] An organisation that treats depression at scale and is currently our top recommendation.
[3] “…I think one of the things that HLI has done effectively is just ensure that [tradeoffs between saving and extending lives] is on people’s minds. I mean, without a doubt their work has caused us to engage with it more than we otherwise might have. Similar to some of the questions you were asking earlier, like, “Why doesn’t institution X see that it should do whatever?” Well, because it’s kind of hard, and sometimes you need another organisation to be pushing it in front of you. I think that’s really good that they’ve done that, because it’s clearly an important area that we want to learn more about, and I think could eventually be more supportive of in the future.”
[4] “Yeah, they went extremely deep on our deworming cost-effectiveness analysis and pointed out an issue that we had glossed over, where the effect of the deworming treatment degrades over time. We had seen that degrading, and the way we had treated it, I should say, was that that’s just a noisy estimate, and we just took the average estimate persisting over the long run. Their critique convinced us that we should at least incorporate some probability that the effect is degrading into our overall model, and that shifted our overall assessment of deworming down by a small amount. Had we taken their correction on board in the past, it would have meant a few million dollars that we would have given elsewhere instead of deworming. Their published critique, I think we didn’t agree with the headline result that they reached, but we were really grateful for that critique, and I thought it catalysed us to launch this Change Our Mind Contest. And also it was a great example of the engagement that we’re getting from being transparent. That we can say, “Here’s our decisions, here’s why they could point to an error, and it changes our mind.” That was really cool, and we were really grateful for it.”
[5] “I think the pro of subjective wellbeing measures is that it’s one more angle to use to look at the effectiveness of a programme. It seems to me it’s an important one, and I would like us to take it into consideration.”
Linkpost from the HLI blog