Some/all answers are in here, or in papers linked in that post. https://forum.effectivealtruism.org/posts/Lncdn3tXi2aRt56k5/health-and-happiness-research-topics-part-1-background-on
Derek
Yeah that’s what I use, and it’s cheaper than the fancy Wiley-branded fish-based product he linked to. You can get much cheaper fish oil, but if you’re going to get the expensive stuff anyway (I guess due to concerns about the quality of the cheaper brands), why not get vegan?
[Recording of the talk and related papers]
You can now view the recording of Professor John Brazier’s talk, “Extending the QALY beyond health: the EQ-HWB (Health and Wellbeing)”:
Kaltura
https://digitalmedia.sheffield.ac.uk/media/t/1_8k5slrc4
YouTube
https://www.youtube.com/watch?v=KTlsIvqyhNI
Papers associated with this talk
Special issue of Value in Health
Development papers:
Brazier, J et al. “The EQ-HWB: overview of the development of a measure of health and well-being and key results.” Value in Health (2022). https://www.sciencedirect.com/science/article/pii/S1098301522000833
Mukuria, C et al. “Qualitative Review on Domains of Quality of Life Important for Patients, Social Care Users, and Informal Carers to Inform the Development of the EQ Health and Wellbeing.” Value in Health (2022). https://www.sciencedirect.com/science/article/pii/S1098301521032277
Carlton, J et al. “Generation, Selection, and Face Validation of Items for a New Generic Measure of Quality of Life: The EQ Health and Wellbeing.” Value in Health (2022). https://www.sciencedirect.com/science/article/pii/S1098301522000109
Peasgood, T et al. “Developing a New Generic Health and Wellbeing Measure: Psychometric Survey Results for the EQ Health and Wellbeing.” Value in Health (2022). https://www.sciencedirect.com/science/article/pii/S1098301521031922
International papers:
Monteiro AL, et al. A Comparison of a Preliminary Version of the EQ Health and Wellbeing Short and the 5-Level Version EQ-5D. Value Health. 2022 Mar 8:S1098-3015(22)00051-1. doi: 10.1016/j.jval.2022.01.003. Epub ahead of print. PMID: 35279371.
Augustovski F, Argento F, Rocío R, Luz G, Mukuria C, Belizán M. The Development of a New International Generic Measure (EQ Health and Wellbeing): Face Validity And Psychometric Stages In Argentina. https://www.sciencedirect.com/science/article/abs/pii/S1098301522000134
FYI the E-QALY work has been progressing quite well since you asked that question; I’ve just come out of a webinar on it. Let me know if you want me to send you notes/slides.
A few key points:
The measure has been named the EuroQol Health and Wellbeing (EQ-HWB); E-QALY seems to be what they are calling the broader project of extending the scope of the QALY.
Psychometric work and stakeholder consultation resulted in a 25-item ‘long’ measure; further consultation then produced the 9-item EQ-HWB-S (Short Form) covering 9 domains: Mobility, Daily activities, Pain, Fatigue, Loneliness, Concentration & thinking clearly, Depression, Anxiety, and Control.
A feasibility valuation study in 521 members of the UK public used the time tradeoff (TTO, EQ-VT protocol) and discrete choice experiments (DCE). Due to COVID-19, this was done via video conferencing.
There was also a deliberative exercise with a 12-member panel of experts at NICE which reviewed the valuation results.
Based on the size of the utility decrement associated with the most severe level of each dimension, the order of importance is: Pain (by a long way); Mobility; Daily activities; Depression; Loneliness; Anxiety; Fatigue; Control; Concentration. (To me, the weight given to Mobility in particular might indicate that this measure does not overcome some of the biggest problems with earlier measures like the EQ-5D, though it seems to be much better overall.)
Other valuation studies, using different methodologies, are underway or planned. As far as I know, these don’t include ones that obtain weights based on SWB, but I think they will be looking at own-state utilities (i.e. weights derived from preferences of people with the relevant conditions).
Several papers are being published on it this year in a special issue of the journal Value in Health.
It started with a grant of 850,000 GBP; more has been spent since, but I’m not sure how much.
NICE still seems wedded to the EQ-5D for the foreseeable future, at least in standard health technology assessments, but they may use/accept the EQ-HWB in cases where broader effects are particularly important, e.g. impacts on carers.
Thanks. I tried 5-HTP a few years ago and didn’t notice any benefit, but maybe I’ll give it another go.
Thanks for the reply. I don’t have much more time to think about this at the moment, but some quick thoughts:
On time discounting: It might have been reasonable to omit discounting in this case for the reasons you suggest, but (a) it limits comparability across analyses if you or others do it elsewhere; (b) for various reasons, it would be good to have some estimate of the absolute, not just relative, costs and effects of these interventions; and (c) it’s pretty easy to implement in most software, e.g. Excel and R (maybe less so in Guesstimate), so there isn’t usually much reason not to do it.
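For illustration, here is a minimal sketch of discounting in R (the 4% rate and the effect stream are invented, purely to show the mechanics):

```r
# Discount a stream of annual effects at a constant rate; year 0 is undiscounted.
discount <- function(x, rate = 0.04) {
  x / (1 + rate)^(seq_along(x) - 1)
}

effects <- c(1.0, 0.8, 0.6, 0.4, 0.2)  # e.g. SD-years of depression averted per year
sum(effects)            # undiscounted total: 3.0
sum(discount(effects))  # discounted total: ~2.85
```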
On costs: (a) You only seem to measure depression, so if costs affect some other aspect of SWB, your analysis will not account for it. (b) It is also a good idea, where feasible, to account for non-monetary costs, such as lost time with family and informal caregiver time. These are probably best covered by SWB outcomes rather than monetised, but since they involve spillovers on people other than the patient, they were not captured in this case. (c) Your detailed CEA of StrongMinds does not make it entirely clear what you mean by “all costs”; it just says “Our estimates of the average cost for treating a person in each programme are taken directly from StrongMinds’ accounting of its costs from 2019,” with no details about those accounts. For example, if they bought an expensive building in which to deliver training in 2018, that cost should normally be amortised over future years (roughly speaking, shared among future beneficiaries for the life of the building). So simply looking at 2019 expenditure does not necessarily capture “all costs”. I suggest reading Chapter 7 of Drummond et al to begin with, for a discussion of practical and conceptual issues in costing of health interventions.
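To illustrate the amortisation point, here is a hedged sketch of the “equivalent annual cost” approach discussed in that chapter (the building cost, lifetime, and rate below are all invented):

```r
# Convert a one-off capital outlay into an equivalent annual cost by dividing
# by the annuity factor for the asset's useful life.
equiv_annual_cost <- function(capital, years, rate = 0.04) {
  annuity_factor <- (1 - (1 + rate)^-years) / rate
  capital / annuity_factor
}

equiv_annual_cost(100000, 20)  # a $100k building over 20 years: ~$7,358/year
```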
On the focus on depression data: My “loading the dice” comment wasn’t about SDB/demand effects. Suppose, for example, that you want to compare intervention A, which treats both depression and severe physical pain; and intervention B, which only treats depression. You find that B reduces depression by more per dollar than A, so you conclude it is more cost-effective than A, and recommend it to donors. But it’s not really a fair comparison: you don’t know whether the overall benefit per dollar is greater in B than A, because you are ignoring the pain-relieving effects, which are likely greater in A. I haven’t looked at the GD data recently, but I can imagine something like that going on here, e.g. the cash has all sorts of benefits that aren’t captured by the depression measure, whereas the psychotherapy could have few such benefits.
On spillovers: I’m glad you are updating the analysis. To be frank, I think you probably shouldn’t have published this analysis in its current state, primarily due to the omission of spillovers. It’s just too misleading.
On sensitivity analysis: Also pleased you are going to add some of these. You’re right that some take longer than others, and it’s hard/impossible to do some of them in Guesstimate. But I think you can export the samples from Guesstimate to Excel, which should allow you to do some of the key ones without too much work, e.g. EVPI and CEAC/CEAF just need a simple macro and graph; see my Donational model for examples. (For extra usability and flexibility, you can do it in R and make a Shiny web app, but that takes a lot more work.)
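To give a sense of how simple these are, here is a rough base-R sketch of a CEAC and per-person EVPI (the distributions are placeholders standing in for samples exported from Guesstimate; nothing here is from your model):

```r
set.seed(1)
n <- 10000
d_cost   <- rnorm(n, mean = 50,  sd = 20)    # incremental cost per person (hypothetical)
d_effect <- rnorm(n, mean = 0.1, sd = 0.08)  # incremental effect per person (hypothetical)

# CEAC: probability the intervention is cost-effective at each willingness-to-pay threshold
thresholds <- seq(0, 2000, by = 50)
ceac <- sapply(thresholds, function(l) mean(l * d_effect - d_cost > 0))
plot(thresholds, ceac, type = "l",
     xlab = "Willingness to pay per unit of effect", ylab = "P(cost-effective)")

# Per-person EVPI at a given threshold: expected net benefit with perfect
# information minus the net benefit of the best decision under current information.
lambda <- 1000
nb   <- lambda * d_effect - d_cost
evpi <- mean(pmax(nb, 0)) - max(mean(nb), 0)
evpi
```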
This paper, the Drummond book above, and this book are good starting points if you want to learn how to do cost-effectiveness analysis (including sensitivity analysis).
A couple of nitpicks:
Your title is misleading: this isn’t/these aren’t “meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy”. AFAICT, you are doing a cost-effectiveness analysis informed by meta-analyses of the effects of the two interventions. You aren’t doing a meta-analysis of cost-effectiveness studies.
The y axes of your graphs, and some of your tables, say things like “Effects of Depression Improvement”. As far as I can tell, these are showing the effects of the interventions on depression/SWB/MHa in terms of SD. They aren’t, for example, showing the effects of depression (i.e. the consequences of depression for something else), as implied by this wording.
There is much to be admired in this report, and I don’t find it intuitively implausible that mental health interventions are several times more cost-effective than cash transfers in terms of wellbeing (which I also agree is probably what matters most). That said, I have several concerns/questions about certain aspects of the methodology, most of which have already been raised by others. Here are just a few of them, in roughly ascending order of importance:
Outcomes should be time-discounted, for at least two reasons. First, to account for uncertainty as to whether they will obtain, e.g. there could be no counterfactual benefit in 10 years because of social upheaval, catastrophic events (e.g. an AI apocalypse, natural disaster), or the availability of more effective treatments for depression/ill-being/poverty. Second, to account for generally improving circumstances and opportunities for reinvestment: these countries are generally getting richer, people can invest cash transfers, etc. This will be even more important when assessing deworming and other interventions with benefits far in the future. (There is probably no need to discount costs as it seems they are incurred around the time the intervention is delivered in both cases.)
I’ve only skimmed the reports, but it isn’t clear to me what exactly is included in the costs for StrongMinds, e.g. sometimes capital costs (buildings etc), or overheads like management salaries and rent, are incorrectly left out of cost-effectiveness analyses. If you haven’t already, you might also want to consider any costs to the beneficiaries, e.g. if therapy recipients had to travel, pay for materials, miss work, etc. As you note, most of the difference in the cost-effectiveness is determined by the programmes’ costs rather than their consequences, so it’s important to get this right (which you may well have done).
You note that both interventions are assessed only in terms of their effect on depression. A couple of years ago I summarised the findings of the four available evaluations of GiveDirectly in an unpublished draft post (see Appendix 2.1, copied below, and the “GiveWell” subsection of section 2.2, the relevant part of which is copied below). The studies recorded data on many other indicators of wellbeing, which were sometimes combined into indices of “psychological wellbeing” with up to 10 components (as well as many non-wellbeing outcomes like consumption and education). Apologies if you explain this somewhere, but why did you only use the data on depression? Was it to facilitate an ‘apples to apples’ comparison, or something like that? If so, I wonder if that was loading the dice a bit: at first blush, it seems unfair to compare two interventions in terms of outcome A when one is aimed solely at improving outcome A and the other is aimed at improving outcomes A, B, C, D, E, F, G and H (at least when B–H are relevant, i.e. indicators of subjective wellbeing).
I share others’ concerns about the omission of spillovers. In the draft post I linked above (partly copied below), I recorded my impression that the evidence so far, while somewhat lacking, suggests only null or positive spillovers to other households (at least for the current version of the programme, which ‘treats’ all eligible households in the village). As part of a separate project I did last year (which I’m not allowed to share), I also concluded that non-recipients within the household benefited considerably: “Only about 1.6 members of each household (average size ~4.3) were surveyed to get the wellbeing results, of which only 1 actually received the money. There was no statistically significant wellbeing difference between the recipients and surveyed non-recipient household members, and there is evidence of many benefits to non-recipients other than psychological wellbeing (e.g. education, domestic violence, child labour). Nevertheless, we expect the effects to be a little lower among non-recipients…” Omitting the inter-household spillovers is perhaps reasonable for the primary analysis, but it seems harder to justify ignoring benefits to others within the household.
Whatever may be justified for the base case, I don’t understand why you haven’t done a proper sensitivity analysis. Stochastic uncertainty is captured well by the Monte Carlo simulations, but it is standard practice in many fields (including health economics) to carry out scenario analyses that investigate the effects of contestable structural and methodological assumptions. It should be quite straightforward to adapt the model so as to include/exclude (or vary the values of) spillovers, non-depression data, certain kinds of costs, discount rates, etc. You can present the results of these analyses yourself, but users can also put their own set of assumptions in a well-constructed model to see how that changes things. (Many other analyses are also potentially helpful, especially when the difference in cost-effectiveness between the alternatives is relatively small, e.g. deterministic one-way and two-way analyses that show how the cost-effectiveness ratio changes with high/low values for each parameter; threshold analyses that show what value a parameter must attain for the ‘worse’ programme to become the more cost-effective; value of information, showing how much it would be worth spending on further studies to reduce uncertainty; and perhaps most usefully in this case, a cost-effectiveness acceptability curve indicating the probability that StrongMinds is cost-effective at a given threshold, such as the 3-8x GiveDirectly that GiveWell is currently using as its bar for new charities. Some examples are here.)
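As one concrete example of a threshold analysis, the sketch below varies a single contestable assumption (the size of within-household spillovers from cash) and shows where the ranking would change relative to a chosen bar. All parameter values are placeholders I made up, not figures from your model:

```r
# Non-recipient household members' effect as a share of the recipient's effect
spillover <- seq(0, 1, by = 0.05)

effect_cash    <- 0.10 * (1 + 3.3 * spillover)  # recipient + ~3.3 other members
effect_therapy <- 0.50                          # assume no household spillover
cost_cash <- 1000; cost_therapy <- 150

# Cost-effectiveness of therapy relative to cash, as a function of the assumption
ratio <- (effect_therapy / cost_therapy) / (effect_cash / cost_cash)
plot(spillover, ratio, type = "l",
     xlab = "Household spillover (share of recipient effect)",
     ylab = "Therapy vs cash cost-effectiveness ratio")
abline(h = c(3, 8), lty = 2)  # e.g. GiveWell's 3-8x GiveDirectly bar
```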
Topic 2.2: (Re-)prioritising causes and interventions
[…]
GiveWell
[…]
Spillover effects
Secondly, there are also potential issues with ‘spillover effects’ of increased consumption, i.e. the impact on people other than the beneficiaries. This is particularly relevant to GiveDirectly, which provides unconditional cash transfers; but consumption is also, according to GiveWell’s model, the key outcome of deworming (Deworm the World, Sightsavers, the END Fund) and vitamin A supplementation (Helen Keller International). Evidence from multiple contexts suggests that, to some extent, the psychological benefits of wealth are relative: increasing one person’s income improves their SWB, but this is at least partly offset by decreases in the SWB of others in the community, particularly on measures of life satisfaction (e.g. Clark, 2017). If increasing overall wellbeing is the ultimate aim, it seems important to factor these ‘side-effects’ into the cost-effectiveness analysis.
As usual, GiveWell provides a sensible discussion of the relevant evidence. However, it is somewhat out of date and does not fully report the findings most relevant to SWB, so I’ve provided a summary of wellbeing outcomes from the four most relevant papers in Appendix 2.1. In brief:
All four studies found positive treatment effects, i.e. improvement to the psychological wellbeing of cash recipients, though in two cases this finding was sensitive to particular methodological choices.
Two studies of GiveDirectly found negative psychological spillovers.
Two found only null or positive spillovers.
As GiveWell notes, it is hard to aggregate the evidence on spillovers (psychological and otherwise) because of:
Major differences in study methodology (e.g. components of the psychological wellbeing index, type of control, inclusion/exclusion criteria, follow-up period).
Major differences in the programs being studied (e.g. size of transfers, proportion of households in a village receiving transfers).
Absence of key information (e.g. how many non-recipient households are affected by spillover effects for each treated household, how the magnitude of spillovers changes with distance and over time, how they differ among eligible and ineligible households).
Like GiveWell, I suspect the adverse happiness spillovers from GiveDirectly’s current program are fairly small. In order of importance, these are the three main reasons:
The negative findings were based on within-village analyses, i.e. comparing treated and untreated households in the same village. These may not be relevant to the current GiveDirectly program, which gives money to all eligible households in treated villages (and sometimes all households in the village). The two studies that investigated potential spillovers in untreated villages in the same area as the treated ones found no statistically significant effect.
Egger et al. (2019) (the “general equilibrium” study), which found only null or positive spillovers, was by far the largest, seems to have had the fewest methodological limitations, and investigated a version of the program most similar to current practice.
At least one of the ‘negative’ studies, Haushofer & Shapiro (2018), had significant methodological issues, e.g. differential attrition rates and lack of baseline data on across-village controls (though results were fairly robust to authors’ efforts to address these).
In addition, any psychological harm seems to be primarily to life satisfaction rather than hedonic states. As noted in Haushofer, Reisinger, & Shapiro (2019): “This result is intuitive: the wealth of one’s neighbors may plausibly affect one’s overall assessment of life, but have little effect on how many positive emotional experiences one encounters in everyday life. This result complements existing distinctions between these different facets of well-being, e.g. the finding that hedonic well-being has a “satiation point” in income, whereas evaluative well-being may not (Kahneman and Deaton, 2010).” This is reassuring for those of us who tend to think feelings ultimately matter more than cognitive evaluations.
Nevertheless, I’m not extremely confident in the net wellbeing impact of GiveDirectly.
Non-trivial comparison effects are found in many other contexts, so it is perhaps reasonable to expect them here too. (I haven’t properly looked at that evidence so I’m not sure how strong my prior should be.)
As with any metric, there are various potential biases in wellbeing measures that could lead to under- or over-estimation of effects. When assessing the actual effect on wellbeing/welfare/utility (rather than on the specific measures of wellbeing used in the study), we should consider the evidence in the context of other findings that I haven’t discussed here.
Even a negative spillover with a very small effect size, which seems plausible in this case, could offset much or all of the positive impact. For instance, if recipient households gain 1 happiness point from the transfer, but every transfer causes 10 other households to lose 0.1 points for the same duration, the net effect is neutral.
I have only summarised the relevant papers; I haven’t tried to critique them in detail. GiveWell has also not analysed the latest versions of some of the key studies, which differ considerably from the working papers, so they might uncover some issues that I haven’t spotted.
A few more notes on interpreting the wellbeing effects of GiveDirectly:
As with other health and poverty interventions, I suspect the overall, long-run impact will be more sensitive to unmeasured and unmodeled indirect effects (e.g. consumption of factory-farmed meat, population size, CO2 emissions) than to methods for estimating welfare (e.g. SWB instruments vs consumption). But I’m leaving these broader issues with short-termist methodology aside for now.
The mechanisms of any adverse wellbeing effects have not been established in this case, and may not be pure psychological ‘comparison effects’ (jealousy, reduced status, etc). For instance, they could be mediated through consumption (e.g. poorer households selling goods to richer ones) or through some other, perhaps culture-specific, process.
Like any metric, SWB measures are imperfect. So even when SWB data are available, an assessment of the SWB effects of an intervention may be improved by taking into account information on other outcomes, plus ‘common sense’ reasoning.
In addition, I would note that the other income-boosting charities reviewed by GiveWell could potentially cause negative psychological spillovers. According to GiveWell’s model, the primary benefit of deworming and vitamin A supplementation is increased earnings later in life, yet no adjustment is made for any adverse effects this could have on other members of the community. As far as I can tell, the issue has not been discussed at all. Perhaps this is because these more ‘natural’ boosts to consumption are considered less likely to impinge on neighbours’ wellbeing than windfalls such as large cash transfers. But I’d like to see this justified using the available evidence.
I make some brief suggestions for improving assessment of psychological spillover effects in the “potential solutions” subsection below.
Four studies investigated psychological impacts of GiveDirectly transfers. Two of these found wellbeing gains for cash recipients (“treatment effects”) and only null or positive psychological spillovers:
Haushofer & Shapiro (2016) (9-month follow-up)
0.26 standard deviation (SD; p<0.01), positive, within-village treatment effect (i.e. comparing treated and untreated households in the same village) on an index of psychological wellbeing with 10 components (Table IV, p. 2011).
Statistically significant benefits for (in decreasing order of magnitude) Depression, Stress, Life Satisfaction, and Happiness at the 1% level, and Worries at the 10% level. Null effects (at the 10% level) on Cortisol, Trust, Locus of Control, Optimism, and Self-esteem (though point estimates were mostly positive).
Null, precise, within-village spillover effect on the index of psychological wellbeing; point estimate positive (0.1 SD; Table III, p. 2004).
Egger et al. (2019) (the “general equilibrium” study)
0.09 SD (p<0.01) within-village treatment effect (i.e. assuming all spillovers are contained within a village) on a 4-item index of psychological wellbeing.
Driven entirely by Life Satisfaction; no effect on Depression, Happiness, or Stress. (See this table, which the authors kindly sent to me on request.)
0.12 SD (p<0.1) “total” treatment effect (both within-village and across-village) on psychological wellbeing.
Driven by Happiness (0.15 SD; p<0.05); no others significant at the 10% level. (See this table.)
Null, fairly precise “total” spillover effect (combining within- and across-village effects) on the index of psychological wellbeing (and on every individual component); point estimate small and positive (0.08 SD). (See this table.)
Note: GiveWell reports a positive, statistically significant within-village spillover effect on psychological wellbeing of about 0.1 SD, based on an earlier draft of the paper. I can’t find this in the published paper; perhaps it was cut because of the authors’ stated preference for the “total” specification.
However, two studies are more concerning:
Haushofer & Shapiro (2018) (3-year follow-up; working paper)
Within-village 0.16 SD (p<0.01) treatment effect on an 8-component index of psychological wellbeing (Table 3, p. 16).
Driven primarily by improvements to Depression and Locus of Control (p<0.05), followed by Happiness and Life Satisfaction (p<0.1). No statistically significant (at the 10% level) change in Stress, Trust, Optimism, and Self-esteem. (Table B.7, p. 55)
Null across-village treatment effect on psychological wellbeing (Table 5, p. 22).
Approx. −0.2 SD (p<0.01) adverse psychological wellbeing spillover on untreated households in treated villages (Table 7, p. 26).
Driven by Stress (p<0.01), Depression (p<0.05), Happiness (p<0.1), and Optimism (p<0.1). No statistically significant (at the 10% level) change in Life Satisfaction, Trust, Locus of control, or Self-esteem. (Table B.15, p. 63)
Haushofer, Reisinger, & Shapiro (2019)
A 1 SD increase in own wealth causes a 0.13 SD (p<0.01) increase in the psychological well-being index (p.13; Table 3, p. 27).
At the average change in own wealth of eligible (thatched-roof) households of USD 354, this translates into a treatment effect of 0.09 SD.
At the average transfer of $709 among treated households, this translates into a treatment effect of 0.18 SD.
Driven by Happiness and Stress (p<0.01) then Life Satisfaction and Depression (p<0.05). No statistically significant (at the 10% level) effect on Salivary Cortisol. (Table 5, p. 29)
A 1 SD increase in village mean wealth (i.e. neighbours in one’s own village having a larger average transfer size) causes a decrease of 0.06 SD in psychological well-being over a 15 month period, only significant at the 10% level (p. 14; Table 3, p. 27).
At the average cross-village change in neighbours’ wealth of $327, this translates into an effect of −0.2 SD.
Driven entirely by Life Satisfaction (0.14 SD; p<0.01; p. 15; Table 5, p. 29)
At a change in neighbours’ wealth of $327, this translates into a Life Satisfaction effect of −0.4 SD (which is much larger than the own-wealth benefit, but less precisely estimated).
Subgroup analysis 1: No statistically significant within-village difference between treated and untreated households in psychological wellbeing effects of a change in neighbours’ wealth. (This suggests that what matters is how much more your neighbours received, not whether you received any transfer.)
Subgroup analysis 2: No statistically significant within-village difference in the psychological wellbeing effect of a change in neighbours’ wealth between households below versus above the median wealth of their village at baseline. (This suggests poorer households did not suffer more adverse psychological spillovers than wealthier ones.)
Methodological variations: Broadly similar results using alternative measures of the change in village mean wealth. (See p. 17 and Tables A.9–A.14 for details.)
No effect of village-level inequality on psychological wellbeing (holding constant one’s own wealth) over any time period and using three alternative measures of inequality.
Note: GiveWell’s review of an earlier version of the paper reports a “statistically significant negative effect on an index of psychological well-being that is larger than the short-term positive effect that the study finds for receiving a transfer, but the negative effect becomes smaller and non-statistically significant when including data from the full 15 months of follow-up… The authors interpret these results as implying that cash transfers have a negative effect on well-being that fades over time.” I’m not sure why the authors removed those analyses from the final version.
Is the CO2 accumulation entirely due to human (or I suppose animal) respiration? So it will typically be worse in small houses with lots of people (holding other factors, like ventilation, constant)?
In a modern house, with no open fires, lead paint etc, what “household air pollution” might there be?
Thanks—this is useful and I will explore some of the suggestions.
Is there much research comparing immediate vs extended release melatonin? E.g.:
Is IR better for speeding sleep onset, as one might expect?
Does XR actually improve sleep maintenance/duration more than IR?
Do they have the same effect on sleep efficiency?
Is the optimal dose the same for each?
Dose aside, do combined IR/XR supplements, or taking a bit of each, give you the ‘best of both worlds’?
[Edited on 19 Nov 2021: I removed links to my models and report, as I was asked to do so.]
Just to clarify, our (Derek Foster’s/Rethink Priorities’) estimated Effect Size of ~0.01–0.02 DALYs averted per paying user assumes a counterfactual of no treatment for anxiety. It is misleading to estimate total DALYs averted without taking into account the proportion of users who would have sought other treatment, such as a different app, and the relative effectiveness of that treatment.
In our Main Model, these inputs are named “Relative impact of Alternative App” and “Proportion of users who would have used Alternative App”. The former is by default set at 1, because the other leading apps seem(ed) likely to be at least as effective as Mind Ease, though we didn’t look at them in depth independently of Hauke. The latter defaults to 0; I suppose this was to get an upper bound of effectiveness, and because of the absence of relevant data, though I don’t recall my thought process at the time. (If it’s set to 1, the counterfactual impact is of course 0.)
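For concreteness, here is a minimal sketch of how I understand these two inputs to combine, assuming a simple multiplicative adjustment (which matches the limiting case just described; the function and argument names are mine, not the model’s):

```r
counterfactual_dalys <- function(gross_dalys,           # DALYs averted vs no treatment
                                 prop_alt_app,          # model default: 0
                                 rel_impact_alt = 1) {  # model default: 1
  gross_dalys * (1 - prop_alt_app * rel_impact_alt)
}

counterfactual_dalys(0.02, prop_alt_app = 0)    # upper bound: 0.02
counterfactual_dalys(0.02, prop_alt_app = 0.5)  # half would use another app: 0.01
counterfactual_dalys(0.02, prop_alt_app = 1)    # all would: 0
```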
Our summary, copied in a previous comment, also stresses that the estimate is per paying user. I don’t remember exactly why, but our report says:
Other elements of the MindEase evaluation (i.e. parts not done by Rethink Priorities) consider a “user” to be a paying user, i.e. someone who has downloaded the app and purchased a monthly or annual plan. For consistency, we will adopt the same definition. (Note that this is a very important assumption, as the average effect size and retention is likely to be many times smaller for those who merely download or install the app.)
As far as I can tell (correct me if I’m wrong), your “Robust, uncertainty-adjusted DALYs averted per user” figure is essentially my theoretical upper-bound estimate with no adjustments for realistic counterfactuals. It seems likely (though I have no evidence as such) that:
Many users would otherwise use a different app.
Those apps are roughly as effective as MindEase.
The users who are least likely to use another app, such as people in developing countries who were given free access, are unlikely to be paying (and therefore perhaps less likely to regularly use/benefit from it) – not to mention issues with translation to different cultures/languages.
So 0.02 DALYs averted per user seems to me like an extremely optimistic average effect size, based on the information we had around the middle of last year.
[Edited on 19 Nov 2021: I was asked to remove the links.]
For those who are interested, here is the write-up of my per-user impact estimate (which was based in part on statistical analyses by David Moss): [removed]
The Main Model in Guesstimate is here: [removed]
The Effect Size model, which feeds into the Main Model, is here: [removed]
I was asked to compare it to GiveDirectly donations, so results are expressed as such. Here is the top-level summary:
Our analysis suggests that, compared to doing nothing to relieve anxiety, MindEase causes about as much benefit per paying user as donating $40 (90% confidence interval: $10 to $140) to GiveDirectly. We suspect that other leading apps are similarly effective (perhaps more so), in which case most of the value of MindEase will come from reaching people who would not have accessed alternative treatment.
Due to time constraints and lack of high-quality information, the analysis involved a lot of guesswork and simplifying assumptions. Of the parameters included in our Main Model, the results are most sensitive to the effect sizes of both MindEase and GiveDirectly, the retention of those effects over time, and the choice of outcome metric (DALYs vs WELLBYs). One large, independent study could eliminate much of this uncertainty. Additional factors worth considering include indirect effects (e.g. economic productivity, meat consumption, evidence generation), opportunity costs of team members’ time, and robustness to non-utilitarian worldviews.
Note that this was done around June 2020 so there may be better information on MindEase’s effectiveness by now. Also, I think the Happier Lives Institute has since done a more thorough analysis of the wellbeing impact of GiveDirectly, which could potentially be used to update the estimate.
Health and happiness research topics—Part 2: The HALY+: Improving preference-based health metrics
Hi Sam,
Thanks for the comments.
1. Have you done much stakeholder engagement? No. I discuss this a little bit in this section of Part 2, but I basically just suggest that people look into this and come up with a strategy before spending a huge amount of time on the research. I do know of academics who may be able to advise on this, e.g. people who have developed previous metrics in consultation with NICE etc, but they’re busy and I suspect they wouldn’t want to invest a lot of time into efforts outside academia.
I think they’d reject the assumption that they are “not improving these metrics” and would point to considerable quantities of research in this area. The main issue, I think, is that they want a different kind of metric than what I’m proposing, e.g. they think it’s important that the metrics are based on public preferences and are focused on health rather than wellbeing. A lot of resources are going into what I see (perhaps unfairly) as “tinkering around the edges,” e.g. testing variations of the time tradeoff/DCE and different versions of the EQ-5D, rather than addressing the fundamental problems.
As I say in Part 3 with respect to the sHALY (SWB-based HALY):
In my view, the strongest reason not to do this project is the apparent lack of interest among key stakeholders. Clinicians, patients, and major HALY “consumers” such as NICE and IHME seem strongly opposed to a pure SWB measure, even if focused on dimensions of health, and to the use of patient-reported values more broadly. As discussed in previous posts, this is due to a combination of normative concerns, such as the belief that those who pay for healthcare have the right to determine its distribution or that disability has disvalue beyond its effect on wellbeing, and doubts about the practicality of SWB measures in these domains.
So this project may only be worth considering if the sHALY would be useful for non-governmental purposes (e.g., within effective altruism), or in “supplementary” analyses alongside more standard methods (e.g., to highlight how QALYs neglect mental health). Either that, or changing the minds of large numbers of influential stakeholders will have to be a major part of the project—which may not be entirely unrealistic, given the increasing prominence of wellbeing in the public sector. We should also consider the possibility that projects such as this, which offer a viable alternative to the status quo, would themselves help to shift opinion.
That said, there is increasing interest in hybrid health/wellbeing measures like the E-QALY and in the use of wellbeing for cross-sector prioritisation, as well as scope for incremental improvement of current HALYs (see Part 2). In at least the latter case, you are likely to know more than me about how to effect policy change within governments.
2. Problem 4 – neglect of spillover effects – probably cannot be solved by changing the metric. I discuss spillovers a little in Part 2 and plan to have a separate post on it in Part 6 (but it might be a while before that’s out, and it’s likely to focus on raising questions rather than providing solutions). I’m still unsure what to do about them and would like to see more research on this. I agree changing the metric alone won’t solve the issue, but it may help—knowing the extent to which the metric captures spillovers seems like an important starting point.
3. Who would you recommend to fund if I want to see more work like this? It probably depends what your aims are. If it’s to influence NICE, IHME, etc, it probably has to go via academia or those institutions. If you want to develop a metric for use in EA, funding individual EAs or EA orgs may work—but even then, it’s probably wise to work closely with relevant academics to avoid reinventing the wheel. So I guess if you have a lot of money to throw at this, funding academics or PhD students may be a good bet; there is already some funding available (I’m applying for PhD scholarships in this area at the moment), but it may be hard to get funding for ideas that depart radically from existing approaches. I list some relevant institutions and individuals in Part 2.
4. How is the E-QALY project going? It got very delayed due to COVID-19. I’m not sure what the new timeline is.
Interesting, thanks!
Health and happiness research topics—Part 3: The sHALY: Developing subjective wellbeing-based health metrics
Thanks Bob! I will probably do this after publishing the next post.
I’ve made a few edits to address some of these issues, e.g.:
Clearly, there are many possible “wellbeing approaches” to economic evaluation and population health summary, defined both by the unit of value (hedonic states, preferences, objective lists, SWB) and by how they aggregate those units when calculating total value. Indeed, welfarism can be understood as a specific form of desire theory combined with a maximising principle (i.e., simple additive aggregation); and extra-welfarism, in some forms, is just an objective list theory plus equity (i.e., non-additive aggregation).
However, it seems that most advocates for the use of wellbeing in healthcare reject the narrow welfarist conception of utility, while retaining fairly standard, utility-maximising CEA methods—perhaps with some post-hoc adjustments to address particularly pressing distributional issues. So it seems reasonable to consider it a distinct (albeit heterogeneous) perspective.
For the purpose of exposition, I will assume that the objective is to maximise total SWB (remaining agnostic between affect, evaluations, or some combination). This is not because I am confident it’s the right goal; in fact, I think healthcare decision-making should probably, at least in public institutions, give some weight to other conceptions of wellbeing, and perhaps to distributional concerns such as fairness. One reason to do so is normative uncertainty—we can’t be sure that the quasi-utilitarianism implied by that approach is correct—but it’s also a pragmatic response to the diversity of opinions among stakeholders and the challenges of obtaining good SWB measurements, as discussed in later posts.
However, I am fairly confident that SWB-maximization—or indeed any sensible wellbeing-focused strategy—would be an improvement over current practice, so it seems like a reasonable foundation on which to build. Moreover, most of these criticisms should hold considerable force from a welfarist, extra-welfarist, or simply “common sense” perspective. One certainly does not have to be a die-hard utilitarian to appreciate that reform is needed.
Changed the first two problem headings to avoid ambiguity and, in the first case, to focus on the result of the problem rather than the cause, which helps distinguish it from 5.
Hi Michael. Thanks for the feedback.
A few general points to begin with:
I think it’s generally fine to use terminology any way you like as long as you’re clear about what you mean.
In this piece I was summarising debates in health economics, and my framing reflects that literature.
The main objective of these posts is to highlight particular issues that may deserve further attention from researchers, and sometimes that has to come at the expense of conceptual rigour (or at least I couldn’t think of a way to avoid that tradeoff). Like you, my natural inclination is to put everything in mutually exclusive and collectively exhaustive categories, but that doesn’t always result in the most action-relevant information being front and centre.
To address your specific points:
I try to make it very clear what I mean by “welfarism” and its alternatives:
The QALY originally emerged from welfare economics, grounded in expected utility theory (EUT), which defined welfare in terms of the satisfaction of individual preferences. QALYs were intended to reflect, at least approximately, the preferences of a rational individual decision-maker (as described by the von Neumann-Morgenstern [vNM] axioms) concerning their own health, and could therefore properly be called utilities.
Others have argued that QALYs should not represent utility in this sense. These “non-welfarists” or “extra-welfarists” typically believe things like equity, capability, or health itself are of intrinsic value (Brouwer et al., 2008; Coast, Smith, & Lorgelly, 2008; Birch & Donaldson, 2003; Buchanan & Wordsworth, 2015). If such considerations are included in the QALY, the (welfarist) utility of patients may not change proportionally with the size of QALY gains.
Most criticism of HALYs has come from three broad camps: welfare economics (which aims to maximise the satisfaction of individual preferences), extra-welfarism (which has other objectives), and wellbeing (often but not always from a classical utilitarian perspective).
In a nutshell, welfarists complain that QALYs, and CEAs based on them, do not reflect the preferences of rational, self-interested utility-maximizers.
Extra-welfarists, on the other hand, generally think the QALY (and CEA more broadly) is currently too welfarist. Though extra-welfarism is ill-defined and encompasses a broad range of views, the uniting belief is that there is inherent value in things other than the satisfaction of individuals’ preferences (Brouwer et al., 2008).
For the welfarist, there are broader efficiency-related issues with using cost-per-HALY CEAs for resource allocation […] Therefore, counting everyone’s health the same does not maximise utility in the welfarist sense, even within the health sector.
So it should be clear that welfarism, as the term is used in modern (health) economics, offers a very specific theory of value (satisfaction of rational, self-regarding preferences that adhere to the axioms of expected utility theory) that is much more narrow than most desire theories. That said, I agree welfarism, extra-welfarism, and wellbeing-oriented ideas are not entirely distinct categories, and note overlaps between them:
Hedonism: … This is associated with the classical utilitarianism of Jeremy Bentham and John Stuart Mill, classical economics (mid-18th to late 19th century)…
Desire theories: Wellbeing consists in the satisfaction of preferences or desires. This is linked with neoclassical (welfare) economics, which began defining utility/welfare in terms of preferences around 1900 (largely because they were easier to measure than hedonic states), preference utilitarianism, …
Objective list theories: Wellbeing consists in the attainment of goods that do not consist in merely pleasurable experience nor in desire-satisfaction (though those can be on the list). … These have influenced some conceptions of psychological wellbeing,[46] and many extra-welfarist ideas. The capabilities approach also falls under this heading…
I mention distributional issues in the context of extra-welfarism:
These “non-welfarists” or “extra-welfarists” typically believe things like equity, capability, or health itself are of intrinsic value (Brouwer et al., 2008; Coast, Smith, & Lorgelly, 2008; Birch & Donaldson, 2003; Buchanan & Wordsworth, 2015). If such considerations are included in the QALY, the (welfarist) utility of patients may not change proportionally with the size of QALY gains.
Descriptively, it seems the extra-welfarists are winning. Although QALYs, and CEA as a whole, do not generally include overt consideration of distributional factors, they do depart from traditional welfare economics in a number of ways …
This “QALY egalitarianism” is often challenged by welfarists on the grounds that WTP varies among individuals, but many extra-welfarists reject it for other reasons. For example, some have argued that more value should be attached to health gained by the young—those who have not yet had their “fair innings”—than by the elderly (Williams, 1997); by those in a worse initial state of health, or for larger individual health gains[43] (e.g., Nord, 2005); by those who were not responsible for their illness (e.g., Dworkin, 1981a, 1981b); by those at the end of life, as currently implemented by NICE; or by people of low socioeconomic status.[44]
They are addressed further in Part 2, where I discuss how HALYs should be aggregated.
I do think I could perhaps have been clearer about the distinction between HALYs and economic evaluation (the latter is typically HALY-maximising, but doesn’t have to be), and analogously between the unit of value (e.g. wellbeing, health) and moral theory (utilitarianism, egalitarianism, etc). I may edit the post later if I have time.
What you call problem 2 I’d reframe as expectations =/= reality.
“Preferences =/= value” was intended as shorthand for something like “the preferences on which current HALY weights are based do not accurately reflect the value of the states to people experiencing them”. Or as I put it elsewhere: “They are based on ill-informed judgements of the general public”. It wasn’t a philosophical comment on desire theories. Still, I can see how it might be misleading (plus it doesn’t strictly apply to DALYs, which arguably aren’t preference-based), so I may change it to your suggestion...though “expectations” doesn’t really fit DALYs either, so I’d welcome alternative ideas.
I agree problem 3 (suffering/happiness) is about inadequate scaling and doesn’t presuppose hedonism, but I don’t think I imply otherwise. I decided to include it as a separate problem, even though it’s applicable to more than one type of scale/theory, because it’s an issue that is very neglected—in health economics and elsewhere. As noted above, the aim of this series is to draw attention to issues that I think more people should be working on, not make a conceptually/philosophically rigorous analysis.
That’s also why I didn’t have distributional issues as a separate “problem”. I note at the start of the list that “The criticisms assume the objective is to maximize aggregate SWB” (while also noting that they “should also hold some force from a welfarist, extra-welfarist, or simply ‘common sense’ perspective”), and from that standpoint the current default (in most HALY-based analyses/guidelines) of HALY maximisation is not a “problem,” so long as HALYs better reflect SWB. That said, as noted above, I do mention distributional issues earlier in the post and in Part 2, in case someone does want to work on those.
Problem 4 is not that HALYs don’t include spillovers; it’s that “They are difficult to interpret, capturing some but not all spillover effects.” (When I say “Neglect of spillover effects,” I mean that the issue of spillovers is problematically neglected in the literature, not that HALYs don’t measure them at all.) This should be clear from the text:
there is some evidence that people valuing health states take into account other factors, especially impact on relatives … On the other hand, it seems reasonable to assume health state values do not fully reflect the consequences for the rest of society—something that would be impossible for most respondents to predict, even if they were wholly altruistic.
I agree this is likely to be an issue with other metrics too (Part 6 is all about this, and it’s mentioned in Part 2), and I suspect it will mostly have to be dealt with at the aggregation stage, but it’s not the case that the content of the metrics is irrelevant. For example, the questionnaires (and therefore the descriptive system) could include items like “To what extent do you feel you’re a burden on others?” (a very common concern expressed in qualitative studies); and/or the valuation exercise could ask people to take into account the impact of their (e.g.) health condition on others (or alternatively to consider only their own health/wellbeing). If this makes a difference to the values produced, it would make HALYs/WELBYs easier to interpret, which would also inform broader evaluation methodology, like whether to administer health/wellbeing measures to relatives separately and add them to the total.
Problem 5 is not merely a restatement of Problem 1, though of course they’re closely connected. Problem 1 focuses on why HALYs aren’t that good at prioritising within healthcare (i.e. achieving technical efficiency from a fixed budget). Problem 5 is that they are useless at cross-sector prioritisation (i.e. allocative efficiency). The cause is similar (health focus), and I think I combined them in an early draft; but as with states worse than dead, I wanted to have 5 as a separate issue in order to draw particular attention to it. The difference becomes especially relevant when comparing, for example, the sHALY (which assigns weight to health states based on SWB, thereby addressing Problem 1 but not 5) and the WELBY (which potentially addresses both, but probably at the expense of validity within specific domains such as healthcare, in which case it may be useful for high-level cross-sector prioritisation, e.g., setting budgets for different government departments [Problem 5], but not for priority-setting within, say, the NHS [Problem 1]). Following similar feedback from others, I did change 5 to “They are consequently of limited use in prioritising across sectors or cause areas” in my main list in order to highlight the relationship.
(Really, all of these problems are due to (a) the descriptive system, (b) the valuation method, and possibly (c) the aggregation method, so any further breakdown risks overlap and confusion—but those categories don’t really tell you why you should care about them, or what elements you should focus on, so it didn’t seem like a helpful typology for the “Problems” section.)
Still, I am not entirely happy with this way of dividing things up or framing things (e.g., some problems focus more on “causes” and some on “effects”) and would welcome suggestions of alternatives that are both conceptually rigorous/consistent and draw attention to the practical implications.
There is a lot of potential in fish welfare/stunning. In addition to what others have mentioned, IIRC from some reading a few years ago:
The greatest bottleneck in humane slaughter is research, e.g. determining parameters/designing machines for stunning each major species, as they differ so much. There just aren’t many experts in this field, and the leading researchers are mostly very busy (and pretty old), but perhaps financial incentives would persuade some people with the right sort of background to go into this area.
As well as electrical and percussive stunning, anaesthetising with clove oil/eugenol seems a promising and under-researched method of reducing the pain of slaughter. Because it may just involve adding a liquid/powder to a tank containing the fish, it may also require less tailoring to each species than other methods (though it can affect the flavour if “too much” is used). I have some notes on this if anyone is interested.
Crustastun could be mass-produced and supplied cheaply/freely to places that would otherwise boil crustaceans alive. I seem to recall a French lawyer had invented another machine that was even better (or cheaper) but was too busy to promote it; maybe EAs could buy the patent or something?