Moral Weights according to EA Orgs
This post was motivated by SoGive’s moral weights being (on a first check) quite different to Founders Pledge (FP) and Happier Lives Institute (HLI). Upon checking in more detail, this appears to be the largest discrepancy across any organisation. (We are still waiting on many missing values in the grid, as HLI’s research is ongoing.)
| | GiveWell | Founders Pledge | HLI | SoGive |
|---|---|---|---|---|
| 1 income doubling for 1 year | 1 | 1 | 1 | 1 |
| preventing 1 year of severe depression | ~1.51 (*) | 1.28 | 0.71-1.42 | 4 |
| 1 additional year of life | 2.30 | 1.95 | -2.8 to 2.91 (*) | - |
| preventing 1 death under 5 | 117.7 | 123.2 | -192 to 200 | 100 |
| preventing 1 death over 5 | 83.6 | 83.7 | -67 to 70 | 100 |
Broadly, all organisations are very much aligned (with the exception of SoGive’s weight on depression).
(*) means I expect the organisation would not endorse the figures used here. In the case of GiveWell, my best guess is that this is roughly in line with what they would use. For Happier Lives Institute, it is an upper bound I expect they will be far below when they finish their research.
Open Phil’s summary of their moral weights is very clear and interesting, but:
For now, in order to be more consistent in our practices, we’re going to defer to GiveWell and start to use the number of DALYs that would be implied by extrapolating their moral weights.
I have left them off here, as I would just be duplicating the GiveWell numbers.
GiveWell’s weights are sourced from here. I have made a few small calculations to align these numbers with the other orgs.
Founders Pledge’s moral weights are available here.
Happier Lives Institute
Unfortunately, their moral weights are still in the process of being generated. You can determine the range of weights they will use in future in their article The elephant in the bednet.
SoGive’s weights can be found here. I have used them verbatim.
This is a calculation. A 100% increase in income/consumption is worth 1.27 / 0.69 = 1.86 WELLBYs in HLI terms (see inputs tab C25). We want this to be 1 unit, so we take 1 / 1.86 = 0.55 as the value of a WELLBY; the other numbers are calculated from this.
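As a sanity check, the conversion above can be reproduced in a couple of lines. (The post’s 1.86 and 0.55 presumably come from unrounded spreadsheet inputs; the quoted figures give slightly lower values.)

```python
# WELLBYs per income doubling, from the HLI figures quoted above (inputs tab C25)
wellbys_per_doubling = 1.27 / 0.69               # ~1.84 (the post rounds to 1.86)
doublings_per_wellby = 1 / wellbys_per_doubling  # ~0.54 (the post uses 0.55)
print(wellbys_per_doubling, doublings_per_wellby)
```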
GiveWell has a strong aversion to disability weights used blindly, so take this number with a grain of salt.
Founders Pledge don’t explicitly include depression in their data. I have used the disability weights they used in their public CEA of StrongMinds. I am under the impression they are working to move towards HLI’s model for this.
This is also a calculation. HLI are inconsistent in how they calculate the impact of depression in WELLBYs. Here they say depression is worth 1.3 WELLBYs (so 1.3 × 0.55 = 0.71 in units of income doubling). One potential explanation is that “depression” is less severe than “severe depression”, so this number could potentially be doubled; they estimate the effect of StrongMinds to be ~1.8 WELLBYs.
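The arithmetic, as a quick sketch (0.55 is the WELLBY-to-income-doubling factor from the earlier footnote):

```python
wellby_to_doubling = 0.55  # 1 WELLBY in income-doubling units (earlier footnote)
depression_wellbys = 1.3   # HLI's figure for a year of depression
in_doublings = depression_wellbys * wellby_to_doubling  # ~0.71
print(in_doublings)
# Doubling it, on the reading that "severe depression" is twice "depression":
print(2 * in_doublings)  # ~1.43, close to the table's 1.42 upper bound
```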
GiveWell uses a metric “Years lived with disease/disability” which as far as I can tell is equivalent to “value of averting 1 year of death”.
As mentioned above, HLI are still in the process of deciding what their moral weights are. I am taking the upper bound of their deprivationist model, the highest number it could be: a loss of 4.95 WELLBYs (4.95 − 0, i.e. a neutral point of 0). The lowest number comes from the same model with a neutral point of 10, which “would seem unintuitive to most, but relates to tranquilism and minimalist axiologies” (see inputs tab C18).
I have taken the average of “death averted from malaria” and “death averted from vitamin A”. The numbers are similar and I don’t think material to the analysis here.
Using life-expectancy of 70.16, average age of death of under 5s of 1.54 and average age of death of over 5s of 46.06. (Numbers via HLI’s sheet “GiveWell Numbers”). Method suggested by Joel.
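A minimal sketch of this method, using the table’s HLI range for a life-year (-2.8 to 2.91 in income-doubling units): the value of averting a death is the remaining life-years multiplied by the value of a life-year.

```python
life_expectancy = 70.16
age_death_under5 = 1.54
age_death_over5 = 46.06
lo, hi = -2.8, 2.91  # HLI's range for 1 additional year of life (table above)

for age in (age_death_under5, age_death_over5):
    years_lost = life_expectancy - age
    print(round(years_lost * lo), "to", round(years_lost * hi))
# ~ -192 to 200 (under 5) and -67 to 70 (over 5), matching the table
```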
It’s awesome that you’ve put this together, as I think this is really valuable information. Honestly, what surprises me most here is how similar all four organizations’ numbers are across most of the items involved.
As you pointed out, however, your use of the highest-possible value for HLI’s value of extending a life by a year definitely undersells how different HLI is from the others. I think it would be better if you explicitly showed both endpoints of the range HLI considers, which includes negative values on the low end. Without that, I worry that readers who were otherwise not highly familiar with HLI’s work would not come away with a correct impression of HLI’s views.
I agree. I started out trying to list all their approaches, but it quickly becomes intractable in the table format. I have edited to show the full range, although I’m not sure if it’s more or less helpful than before. Hopefully it does show how counter-intuitive their model can be.
Thanks for the edit! I think that’s helpful
Is this because we argued that it’s plausible that a life can have negative wellbeing?
This was also gratifying for us to see, but it’s probably important to note that our approach incorporates weights from both GiveWell and HLI at different points, so the estimates are not completely independent.
Please reply to this comment if there is another org you would like to see added to the grid.
I think it’s valuable to see all of this in one place, and I appreciate the digging required to piece this together.
A few comments:
The highest neutral point we think is plausible is 5⁄10 on a 0 to 10 wellbeing scale, but we mentioned that some philosophical views would stake a claim to the feasibility of 10⁄10.
The highest value of a year of life we’d consider plausible is 10 WELLBYs a year (LS = 10⁄10 and neutral point = 0), and the lowest as −5 (LS = 0⁄10 and neutral point = 5).
But if we’re only accepting low-income country average LS values (~4/10 in our malaria report), then this would be −1 to 4.
I think you can fill out the missing cells for HLI by taking the average age of death, which for malaria is ~2 for under-5s and ~46 for over-5s. Assuming a life expectancy of 70 (what we’ve assumed previously for malaria deaths), that’d imply a moral weight of (70 − 2) * (-1, 4) for under-5s and (70 − 46) * (-1, 4) for over-5s.
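A short sketch of this fill-in, assuming the -1 to 4 WELLBY range for a life-year from the previous paragraph:

```python
def wellbys_per_life_year(ls, neutral_point):
    # WELLBYs per year lived = life satisfaction (0-10) minus the neutral point
    return ls - neutral_point

lo = wellbys_per_life_year(4, 5)  # -1: LS ~4/10, neutral point 5
hi = wellbys_per_life_year(4, 0)  #  4: LS ~4/10, neutral point 0
life_expectancy = 70
for age_at_death in (2, 46):  # malaria: ~2 for under-5s, ~46 for over-5s
    years = life_expectancy - age_at_death
    print(years * lo, "to", years * hi)
# 68 * (-1, 4) = -68 to 272 (under 5); 24 * (-1, 4) = -24 to 96 (over 5)
```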
We haven’t explicitly set out to estimate the wellbeing burden of depression, but this is an interesting question. I haven’t thought too much about whether we can use our estimate of the benefit of treating depression with StrongMinds as implicitly assigning a wellbeing weight to depression. I’m not sure this is as straightforward as it may appear.
We are still developing our views on these moral weights, particularly around saving lives. To put it lightly, these are philosophically complex questions. Our present aim is to suggest what one should do, conditional on the moral view one holds. But perhaps surprisingly, this takes considerably more effort than assuming a viewpoint and seeing what follows.
Granted, this has its limits. Our emphasis on subjective wellbeing is itself conditional on the primacy of theories of wellbeing that emphasise subjective states (e.g., hedonism, desire satisfaction).
If you can point me to somewhere on the HLI website I can cite I will update this.
Will do (I will still be using the same range as before though per my point above about finding somewhere I can cite HLI on using 5 as the maximum neutral point).
See section 2.2
Or you could also note that we estimate the lower bound of the value of saving a life as assuming a neutral point of 5.
I had seen both of those, but I didn’t read either of them as commitments that HLI thinks that the neutral point is between 0 and 5.
I agree this is valuable, thank you for doing this.
I’ll just echo something Matt said about possible lack of independence...
Prior to doing our formal Delphi process for determining our moral weights, we at SoGive had been using a placeholder set of moral weights. The placeholder was heavily influenced by GiveWell’s moral weights.
Our process did then incorporate lots of other perspectives, including a survey of the EA community, and a survey of the wider population, as well as explicit exhortations to think things through independently. Despite all these things, I think it’s possible that our process might have ended up anchoring on the previous placeholder weights, i.e. indirectly anchoring on GiveWell’s moral weights. I don’t think anyone in the team was looking at or aware of FP’s or HLI’s moral weights, so I don’t expect there was any direct influence there.
Thanks for putting this together, this is super interesting!
Am I right in saying there is an implicit negative sign on all of the “bad” ones (depression and deaths under 5)? I found this a bit confusing to read especially with HLI including negative numbers in their ranges. Perhaps adding a “preventing” before all of them would be helpful.
Yes—good point. I have fixed that
From ishaan here.
I thought it cleaner to reply to this comment about moral weights here, where you can see my calculations: it will make the discussion easier to find, and it is more related to moral weights.
It’s certainly plausible, although I don’t know where my mistake is.
I am very confident HLI are inconsistent between reports. I have already queried them on this. I don’t know if I have Joel’s permission to publish his full reply, but he is looking into it. I also noted it in the footnotes here
I’m not sure 2-5 SD-years is plausible for severe depression. 3 SDs would saturate the entire scale 0-24.
0.92 SD-years gets converted to 2.0 WELLBYs because they multiply SD-years by the 2.17 figure. This is something I have confirmed with Joel, and it is how they are creating their figures on this page.
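The conversion in question, as a one-liner (2.17 is the SD-year-to-WELLBY multiplier quoted above):

```python
sd_years = 0.92
wellbys_per_sd_year = 2.17  # the multiplier HLI applies (per the comment above)
print(round(sd_years * wellbys_per_sd_year, 1))  # 2.0
```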
My response to this post overall is that I think some of what is going on here is that different people and different organizations mean very different things when we say “Depression”. Since “depression” is not really a binary, the value of averting “1 case of severe depression” can change a lot depending on how you define severity, in such a way that differences in reasonable definitions of “sufficiently bad depression” can plausibly differ by 1-3x when you break it down into “how many SD counts as curing depression” terms.
However, the in-progress nature of SoGive’s mental health work makes pinning down what we do mean somewhat tricky. What exactly did the participants in the SoGive Delphi process mean when they said “severe depression”? How should I, as an analyst who isn’t aiming to set the moral weights but is attempting to advise people using them, interpret that? These things are currently in flux, in the sense that I’m in the process of making various judgement calls about them right now, which I’ll describe below.
It’s true that the PHQ-9, which maxes out at 27 points, corresponds to roughly 2-4 SD. How many SD exactly depends on the spread of your population (for example, if 1 SD = 6.1 points, then the 27-point scale spans 27 / 6.1 ≈ 4.4 SD), and for some population spreads it would be 3 SD.
These two things are related, actually! I think the trouble is that the phrase “severe depression” is ambiguous as to how bad it is, so different people can mean different things by it.
One might argue that the following was an awkward workaround which should have been done differently, but, to make my internal thought process transparent (in terms of what I thought after joining SoGive, starting this analysis, and encountering these weights), it went roughly as follows:
-> “hm, this implies we’re willing to trade averting 25 years of depression against one (mostly neonatal) death. Is this unusual?”
→ “Maybe we are thinking about the type of severe, suicidal depression that is an extremely net negative experience, a state which is worse than death.”
→ “Every questionnaire creator seems to have recommended cut-offs for gradients of depression such as “mild” and “moderate” (e.g. the creators of the PHQ-9 scale are recommending 20 points as the cut-off for “severe” depression) but these aren’t consistent between scales and are ultimately arbitrary choices.”
-> “extrapolating linearly from the time-trade-off literature people seemed to think that a year of depression breaks even with dying a year earlier around 5.5sd. Maybe less if it’s not linear.”
-> “But maybe it should be more because what’s really happening here is that we’re seeing multiple patients improve by 0.5-0.8 sd. The people surveyed in that paper think that the difference between 2sd->3sd is bigger than 1sd->2sd. People might disagree on the correct way to sum these up.”
→ concluding with me thinking that various reasonable people might set the standard for “averting severe depression” between 2-6 sd, depending on whether they wanted ordinary severity or worse than death severity
So, hopefully that answers your question as to why I wrote to you that 2-5sd is reasonable for severe depression. I’m going to try to justify this further in subsequent posts. Some additional thoughts that I had were:
-> I notice that this is still weighting depression more heavily than the people surveyed in the time-trade-off, but if we set it on the higher range of 3-6sd it still feels like a morally plausible view (especially considering that some people might have assigned lower moral weight to neonates).
→ My role is to tell people what the effect is, not to tell them what moral weights to use. However, I’m noticing that all the wiggle room to interpret what “severe” means is on me, and I notice that I keep wanting to nudge the SD-years I accept as higher in order to make the view match what I think is morally plausible.
-> I’ll just provisionally use something between 3-5 sd-years for the purpose of completing analysis, because my main aim is to figure out what therapy does in terms of sd.
→ But I should probably publish a tool that allows people to think about moral weights in terms of standard deviation, and maybe we can survey people for moral weights again in the future in a manner that lets them talk about standard deviations rather than whatever connotations they attached to “severe depression”. Then we can figure out what people really think about various grades of depression and how much income and life they’re willing to trade about it.
In fact the next thing I’m scheduled to publish is a write up that talks in detail about how to translate SD into something more morally intuitive. So hopefully that will help us make some progress on the moral weights issue.
So to summarize: I think (assuming your calculations w.r.t. everyone else’s weights are correct) what’s going on here is that it looks like SoGive is weighing depression 4x more than everyone else, but those moral weights were set in the absence of a concrete recommendation. Arguably this is an artifact of me choosing, after the fact, to set a really high SD threshold for “severity” as a reaction to the weights; what really needs to happen is the process I described of polling people again in a way that breaks down “severity” differently. In the final analysis, once a concrete recommendation comes out, it probably won’t be that different? (Though you’ve added two items, sd<->daly/wellby and cash<->sd, to my list of things to check for robustness, and if either ends up being notable I’m definitely going to flag it, so thank you for that.) I do think this story will ultimately end with some revisiting of moral weights: how they should be set, what they mean, and how to communicate them.
(There’s another point that came up in the other thread, regarding “does it pass the sanity check w.r.t. cash transfer effects on wellbeing”, which this doesn’t address. Although it falls outside the scope of my current work, I have been wanting to get a firmer sense of the empirical cash <-> wellby <-> sd-depression correlations, and apropos of your comments perhaps this should be made more explicit in moral weights agendas.)