HLI’s estimates imply, for example, that a donor would pick offering StrongMinds’ intervention to 20 individuals over averting the death of a child, and that receiving StrongMinds’ program is 80% as good for the recipient as an additional year of healthy life.
I.e., is it your view that 4-8 weeks of group therapy (~12 hours) for 20 people is preferable to averting the death of a child?
To be clear on what the numbers are: we estimate that group psychotherapy has an effect of 10.5 WELLBYs on the recipient’s household, and that the death of a child in a LIC has a −7.3 WELLBY effect on the bereaved household. But the estimate for grief was very shallow. The report this estimate came from was not focused on making a cost-effectiveness estimate of saving a life (with AMF). Again, I know this sounds weasel-y, but we haven’t yet formed a view on the goodness of saving a life, so I can’t say how much HLI thinks group therapy is preferable to averting the death of a child.
That being said, I’ll explain why this comparison, as it stands, doesn’t immediately strike me as absurd. Grief has an odd counterfactual. We can only extend lives. People who’re saved will still die, and the people who love them will still grieve. The question is how much worse the total grief is for a very young child (the typical beneficiary of, e.g., AMF) than the grief for the adolescent, young adult, adult, or elder they’d become[1], all multiplied by mortality risk at those ages.
So is psychotherapy better than the counterfactual grief averted? Again, I’m not sure because the grief estimates are quite shallow, but the comparison seems less absurd to me when I hold the counterfactual in mind.
I assume people who are not very young children also have larger social networks, and that this could also play into the counterfactual (e.g., non-children may be grieved for by more people who forged deeper bonds). But I’m not sure how much to make of this point.
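To make the counterfactual point concrete, here is a quick sketch of how the "net grief averted" calculation would go. Only the −7.3 WELLBY grief figure comes from this thread; the later-life grief values and mortality weights are purely illustrative placeholders, not HLI estimates.

```python
# Saving a young child doesn't avert grief outright; it shifts it to a later
# death. The net grief averted is the difference between grief now and the
# expected grief later, weighted by mortality risk at each later life stage.

grief_now = -7.3  # WELLBYs lost by the household when a young child dies (HLI's shallow figure)

# Hypothetical grief if the same person instead dies at a later life stage.
# These numbers are illustrative only.
grief_later = {"adolescent": -7.0, "young adult": -6.0, "adult": -5.0, "elder": -3.0}
p_death_at = {"adolescent": 0.05, "young adult": 0.10, "adult": 0.35, "elder": 0.50}

expected_grief_later = sum(p_death_at[a] * grief_later[a] for a in grief_later)
# Grief values are negative, so the difference below is the WELLBYs of grief averted.
net_grief_averted = expected_grief_later - grief_now
print(round(expected_grief_later, 2), round(net_grief_averted, 2))
```

Under these placeholder numbers, the counterfactual shrinks the grief averted well below the headline −7.3 figure, which is the shape of the argument above.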
this comparison, as it stands, doesn’t immediately strike me as absurd. Grief has an odd counterfactual. We can only extend lives. People who’re saved will still die and the people who love them will still grieve. The question is how much worse the total grief is for a very young child (the typical beneficiary of e.g., AMF) than the grief for the adolescent, or a young adult, or an adult, or elder they’d become
My intuition, which is shared by many, is that the badness of a child’s death is not merely due to the grief of those around them. So presumably the question should not be comparing just the counterfactual grief of losing a very young child VS an [older adult], but also “lost wellbeing” from living a net-positive-wellbeing life in expectation?
I also just saw that Alex claims HLI “estimates that StrongMinds causes a gain of 13 WELLBYs”. Is this for 1 person going through StrongMinds (i.e. ~12 hours of group therapy), or something else? Where does the 13 WELLBYs come from?
I ask because if we are using HLI’s estimates of WELLBYs per death averted, and use your preferred estimate for the neutral point, then 13 / (4.95 − 2) is >4 years of life. Even if we put the neutral point at zero, 13 / 4.95 suggests 13 WELLBYs is worth >2.5 years of life.
I think I’m misunderstanding something here, because GiveWell claims “HLI’s estimates imply that receiving IPT-G is roughly 40% as valuable as an additional year of life per year of benefit or 80% of the value of an additional year of life total.”
Can you help me disambiguate this? Apologies for the confusion.
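To spell out the conversion I’m doing here (a quick sketch; the 4.95 average life-satisfaction figure and the two candidate neutral points are the ones quoted above):

```python
# Convert a WELLBY gain into "years of life" equivalents: divide by the
# per-year wellbeing of an average life, i.e. average life satisfaction
# (0-10 scale) minus the neutral point.
avg_life_satisfaction = 4.95  # figure used in this thread for adults in the relevant countries

def years_equivalent(wellbys: float, neutral_point: float) -> float:
    return wellbys / (avg_life_satisfaction - neutral_point)

print(years_equivalent(13, 2))  # neutral point of 2 -> roughly 4.4 years
print(years_equivalent(13, 0))  # neutral point of 0 -> roughly 2.6 years
```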
My intuition, which is shared by many, is that the badness of a child’s death is not merely due to the grief of those around them. Thus the question should not be comparing just the counterfactual grief of losing a very young child VS an [older adult], but also “lost wellbeing” from living a net-positive-wellbeing life in expectation.
I didn’t mean to imply that the badness of a child’s death is just due to grief. As I said in my main comment, I place substantial credence (2/3rds) in the view that death’s badness is the wellbeing lost. Again, this is my view, not HLI’s.
The 13 WELLBY figure is the household effect of a single person being treated by StrongMinds. But that uses the uncorrected household spillover (53% spillover rate). With the correction (38% spillover) it’d be 10.5 WELLBYs (3.7 WELLBYs for recipient + 6.8 for household).
GiveWell arrives at the 80% figure because they value a year of life at 4.55 WELLBYs (4.95 minus their preferred neutral point), and StrongMinds’ benefit to the direct recipient, according to HLI, is 3.77 WELLBYs, so 3.77 / 4.55 ≈ 80%. I’m not sure where the 40% figure comes from.
If I understand correctly, the updated figures should then be:
For 1 person being treated by StrongMinds (excluding all household spillover effects) to be worth the WELLBYs gained from a year of life (more precisely, the average wellbeing from 1 year of life for an adult in 6 African countries) under HLI’s methodology, the neutral point needs to be at least 4.95 − 3.77 = 1.18.
If we include spillover effects of StrongMinds (and use the updated / lower figures), then the benefit of 1 person going through StrongMinds is 3.77 × (1 + 0.38 × 4.85) ≈ 10.7 WELLBYs. Under HLI’s estimates, this is equivalent to more than two years of wellbeing benefits from the average life, even if we set the neutral point at zero. Using your personal neutral point of 2 would suggest the intervention for 1 person, including spillovers, is equivalent to >3.5 years of wellbeing benefits. Is this correct, or am I missing something here?
1.18 as the neutral point seems pretty reasonable. But the idea that 12 hours of therapy for an individual is worth the wellbeing benefits of 1 year of an average life when only considering impacts to the recipient, and anywhere between 2 and 3.5 years of life when including spillovers, does seem rather unintuitive to me, despite my view that we should probably do more work on subjective wellbeing measures on the margin. I’m not sure if this means:
WELLBYs as a measure can’t capture what I care about in a year of healthy life, so we should not use solely WELLBYs when measuring wellbeing;
HLI isn’t applying WELLBYs in a way that captures the benefits of a healthy life;
The existing way of estimating 1 year of life via WELLBYs is wrong in some other way (e.g. the 4.95 assumption is wrong, the 0-10 scale is wrong, the ~1.18 neutral point is wrong);
HLI have overestimated the benefits of StrongMinds;
I have a very poorly calibrated view of how much 12 hours of therapy or a year of life is worth, though this seems less likely.
Would be interested in your thoughts on this / let me know if I’ve misinterpreted anything!
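To make my arithmetic easy to check, here it is laid out as a sketch (all inputs are figures quoted in this thread: 3.77 WELLBYs to the recipient, a 38% spillover rate, 4.85 other household members, and 4.95 average life satisfaction):

```python
# Recompute the "updated figures" above.
recipient = 3.77            # WELLBYs to the direct recipient
spillover_rate = 0.38       # corrected household spillover rate
other_household = 4.85      # average number of other household members

total = recipient * (1 + spillover_rate * other_household)  # household total, ~10.7 WELLBYs

# Neutral point at which the recipient-only effect equals one year of life:
break_even_neutral = 4.95 - recipient  # = 1.18

# Years-of-life equivalents for the household total under two neutral points:
years_np0 = total / (4.95 - 0)  # neutral point 0
years_np2 = total / (4.95 - 2)  # neutral point 2
print(round(total, 1), round(break_even_neutral, 2), round(years_np0, 1), round(years_np2, 1))
```

This reproduces the >2 years (neutral point 0) and >3.5 years (neutral point 2) equivalences claimed above.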
I appreciate your candid response. To clarify further: suppose you give a mother a choice between “your child dies now (age 5), but you get group therapy” and “your child dies in 60 years (age 65), but no group therapy”. Which do you think she will choose?
Also, if you don’t mind answering: do you have children? (I have a hypothesis that EA values are distorted by the lack of parents in the community; I don’t know how to test this hypothesis. I hope my question does not come off as rude.)
I don’t think that’s the right question for three reasons.
First, the hypothetical mother will almost certainly consider the well-being of her child (under a deprivationist framework) in making that decision—no one is suggesting that saving a life is less valuable than therapy under such an approach. Whatever the merits of an epicurean view that doesn’t weigh lost years of life, we wouldn’t have made it this long as a species if parents applied that logic to their own young children.
Second, the hypothetical mother would have to live with the guilt of knowing she could have saved her child but chose something for herself.
Finally, GiveWell-type recommendations often would fail the same sort of test. Many beneficiaries would choose receiving $8X (where X = bednet cost) over receiving a bednet, even where GiveWell thinks they would be better off with the latter.
If the mother would rather have her child alive, then under what definition of happiness/utility do you conclude she would be happier with her child dead (but getting therapy)? I understand you’re trying to factor out the utility loss of the child; so am I. But just from the mother’s perspective alone: she prefers scenario X to scenario Y, and you’re saying it doesn’t count for some reason? I don’t get it.
I think you’re double-subtracting the utility of the child: you’re saying, let’s factor it out by not asking the child his preference, and ALSO let’s ADDITIONALLY factor it out by not letting the mother be sad about the child not getting his preference. But the latter is a fact about the mother’s happiness, not the child’s.
Second, the hypothetical mother would have to live with the guilt of knowing she could have saved her child but chose something for herself.
Let’s add memory loss to the scenario, so she doesn’t remember making the decision.
Finally, GiveWell-type recommendations often would fail the same sort of test. Many beneficiaries would choose receiving $8X (where X = bednet cost) over receiving a bednet, even where GiveWell thinks they would be better off with the latter.
Yes, and GiveWell is very clear about this, and most donors bite the bullet (people make irrational decisions with regard to small risks of death, and bednets have positive externalities for the rest of the community). Do you bite the bullet that says “the mother doesn’t know enough about her own happiness; she’d be happier with therapy than with a living child”?
Finally, I do hope you’ll answer regarding whether you have children. Thanks again.
I’m not Joel (nor do I work for HLI, GiveWell, SM, or any similar organization). I do have a child, though. And I do have concerns with overemphasis on whether one is a parent, especially when one’s views are based (in at least significant part) on review of the relevant academic literature. Otherwise, does one need both to be a parent and to have experienced a severe depressive episode (particularly in a low-resource context where there is likely no safety net) in order to judge the tradeoffs between supporting AMF and supporting SM?
Personally—I am skeptical that the positive effect of therapy exceeds the negative effect of losing one’s young child on a parent’s own well-being. I just don’t think the thought experiment you proposed is a good way to cross-check the plausibility of such a view. The consideration of the welfare of one’s child (independent of one’s own welfare) in making decisions is just too deeply rooted for me to think we can effectively excise it in a thought experiment.
In any event—given that SM can deliver many courses of therapy with the resources AMF needs to save one child, the two figures don’t need to be close if one believes the only benefit from AMF is the prevention of parental grief. SM’s effect size would only need to be greater than 1/X of the WELLBYs lost to parental grief from one child death, where X is the number of courses SM can deliver with the resources AMF needs to prevent one child death. That is the bullet that epicurean donors have to bite to choose SM over AMF.
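The break-even condition I’m describing can be written out in a few lines (a sketch; the 7.3 WELLBY grief figure is from this thread, and the 60-courses-per-life figure is the one floated elsewhere in this discussion):

```python
# Epicurean break-even: if X courses of therapy can be funded for the cost of
# averting one death, therapy wins whenever its per-course effect exceeds
# (grief WELLBYs averted) / X.
grief_wellbys = 7.3              # WELLBYs lost to household grief from one child death
courses_per_death_cost = 60      # X: therapy courses fundable per life-saving grant

threshold = grief_wellbys / courses_per_death_cost
print(round(threshold, 3))  # SM only needs ~0.12 WELLBYs per course on this view
```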
Personally—I am skeptical that the positive effect of therapy exceeds the negative effect of losing one’s young child on a parent’s own well-being.
It’s good to hear you say this.
In any event—given that SM can deliver many courses of therapy with the resources AMF needs to save one child, the two figures don’t need to be close
Definitely true. But if a source (like a specific person or survey) gives me absurd numbers, it is a reason to dismiss it entirely. For example, if my thermometer tells me it’s 1000 degrees in my house, I’m going to throw it out. I’m not going to say “even if you merely believe it’s 90 degrees we should turn on the AC”. The exaggerated claim is disqualifying; it decreases the evidentiary value of the thermometer’s reading to zero.
When someone tells me that group therapy is more beneficial to the mother’s happiness than saving her child from death, I don’t need to listen to that person anymore. And if it’s a survey that tells me this, throw out the survey. If it’s some fancy academic methods and RCTs, the interesting question is where they went wrong, and someone should definitely investigate that, but at no point should people take it seriously.
By all means, let’s investigate how the thermometer possibly gave a reading of 1000 degrees. But until we diagnose the issue, it is NOT a good idea to use “1000 degrees in the house” in any decision-making process. Anyone who uses “it’s 1000 degrees in this room” as a placeholder value for making EA decisions is, in my view, someone who should never be trusted with any levers of power, as they cannot spot obvious errors that are staring them in the face.
We both think the ratio of parental grief WELLBYs to therapy WELLBYs is likely off, although that doesn’t tell us which number is wrong. Given that your argument is that an implausible ratio should tip HLI off that there’s a problem, the analysis below takes the view more favorable to HLI—that the parental grief number (for which much less work has been done) is at least the major cause of the ratio being off.
As I see it, the number of WELLBYs preserved by averting an episode of parental grief is very unlikely to be material to any decision under HLI’s cost-effectiveness model. Under philosophical assumptions where it is a major contributor to the cost-effectiveness estimate, that estimate is almost always going to be low enough that life-saving interventions won’t be considered cost-effective on the whole. Under philosophical assumptions where life-saving programs may be cost-effective, the bulk of the effectiveness will come directly from the effect on the saved life itself. Thus, it would not be unreasonable for HLI—which faces significant resource constraints—to have deprioritized attempts to improve the accuracy of its estimate for WELLBYs preserved by averting an episode of parental grief.
Given that, I can see three ways of dealing with parental grief in the cost-effectiveness model for AMF. Ignoring it seems rather problematic. And I would argue that reporting the value one’s relatively shallow research provided (with a disclaimer that one has low certainty in the value) is often more epistemically virtuous than adjusting to some value one intuits is more likely to be correct, bereft of actual evidence to support that number. I guess the other way is to just not publish anything until one can produce more precise models . . . but that norm would make it much more difficult to bring new and innovative ideas to the table.
I don’t think the thermometer analogy really holds here. Assuming HLI got a significantly wrong value for WELLBYs preserved by averting an episode of parental grief, there are a number of plausible explanations, the bulk of which would not justify not “listen[ing] to [them] anymore.” The relevant literature on grief could be poor quality or underdeveloped; HLI could have missed important data or modeled inadequately due to the resources it could afford to spend on the question; it could have made a technical error; its methodology could be ill-suited for studying parental grief; its methodology could be globally unsound; and doubtless other reasons. In other words, I wouldn’t pay attention to the specific thermometer that said it was much hotter than it was . . . but in most cases I would only update weakly against using other thermometers by the same manufacturer (charity evaluator), or distrusting thermometer technology in general (the WELLBY analysis).
Moreover, I suspect there have been, and will continue to be, malfunctioning thermometers at most of the major charity evaluators and major grantmakers. The grief figure is a non-critical value relating to an intervention that HLI isn’t recommending. For the most part, if an evaluator or grantmaker isn’t recommending or funding an organization, it isn’t going to release its cost-effectiveness model for that organization at all. Even where funding is recommended, there often isn’t the level of reasoning transparency that HLI provides. If we are going to derecognize people who have used malfunctioning thermometer values in any cost-effectiveness analysis, there may not be many people left to perform them.
I’ve criticized HLI on several occasions before, and I’m likely to find reasons to criticize it again at some point. But I think we want to encourage its willingness to release less-refined models for public scrutiny (as long as the limitations are appropriately acknowledged) and its commitment to reasoning transparency more generally. I am skeptical of any argument that would significantly incentivize organizations to keep their analyses close to the chest.
The most important thing to note here is that, if you dig through the various long reports, the tradeoff is:
With $7800 you can save the life of a child, or
If you grant HLI’s assumptions regarding costs (and I’m a bit skeptical even there), you can provide multi-week group therapy to 60 people for that same cost (I think 12 sessions of 90 minutes each).
Which is better? Well, right off the bat, if you think mothers would value their children at 60x what they value the therapy sessions, you’ve already lost.
Of course, the child’s life also matters, not just the mother’s happiness. But HLI has a range of “assumptions” regarding how good a life is, and in many of these assumptions the life of the child is indeed fairly value-less compared to benefits in the welfare of the mother (because life is suffering and death is OK, basically).
All this is obfuscated under various levels of analysis. Moreover, in HLI’s median assumption, not only is the therapy more effective, it is 5x more effective. They are saying: the number of group therapies that equal the averted death of a child is not 60, but rather, 12.
To me that’s broken-thermometer level.
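The arithmetic behind that 60-versus-12 claim is simple enough to lay out (using only the figures quoted in this comment):

```python
# The tradeoff: $7,800 saves one child's life, or funds ~60 courses of group therapy.
cost_to_save_life = 7800.0
courses_for_same_cost = 60

cost_per_course = cost_to_save_life / courses_for_same_cost  # dollars per course

# HLI's median assumption is said to imply ~12 courses equal one averted death,
# i.e. therapy comes out ~5x more cost-effective than break-even at 60:1.
courses_equal_to_one_death = 12
implied_multiplier = courses_for_same_cost / courses_equal_to_one_death
print(cost_per_course, implied_multiplier)  # 130.0 5.0
```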
I know the EA community is full of broken thermometers, and it’s actually one of the reasons I do not like the community. One of my main criticisms of EA is, indeed, “you’re taking absurd numbers (generated by authors motivated to push their own charities/goals) at face value”. This also happens with animal welfare: there’s this long report and 10-part forum series evaluating animals’ welfare ranges, and it concludes that 1 human has the welfare range of (checks notes) 14 bees. Then others take that at face value and act as if a couple of beehives or shrimp farms are as important as a human city.
I am skeptical of any argument that would significantly incentivize organizations to keep their analyses close to the chest.
This is not the first time I’ve had this argument made to me when I criticize an EA charity. It seems almost like the default fallback. I think EA has the opposite problem, however: nobody ever dares to say the emperor has no clothes, and everyone goes around pretending 1 human is worth 14 bees and a group therapy session increases welfare by more than the death of your child decreases it.
I think that it is possible to buy that humans’ maximum pains and pleasures are only 14 times as intense as bees’, and still think 14 bees = 1 human is silly. You just have to reject hedonism about well-being. I have strong feelings about saving humans over animals, but I have no intuition whatsoever that if my parents’ dog burns her paw it hurts less than when I burn my hand. The whole idea that animals have less intense sensations than us seems to me less like a commonsense claim, and more like something people committed to both hedonism and antispeciesism made up to reconcile their intuitive repugnance at results like 10 pigs (or whatever) = 1 human. (Bees are kind of a special case, because lots of people are confident they aren’t conscious at all.)
Where’s the evidence that, e.g., everyone “act[s] as if a couple of beehives or shrimp farms are as important as a human city”? So someone wrote a speculative report about bee welfare ranges . . . if “everyone” accepted that “1 human is worth 14 bees”—or even anything close to that—the funding and staffing pictures in EA would look very, very different. How many EAs are working in bee welfare, and how much is being spent in that area?
As I understand the data, EA resources in GH&D are pretty overwhelmingly in life-saving interventions like AMF, suggesting that the bulk of EA does not agree with HLI at present. I’m not as well versed in farmed animal welfare, but I’m pretty sure no one in that field is fundraising for interventions costing anywhere remotely near hundreds of dollars to save a bee and claiming they are effective.
In the end, reasoning transparency by charity evaluators helps the donor better make an informed moral choice. Carefully reading analyses from various sources helps me (and other donors) make choices that are consistent with our own values. EA is well ahead of most charitable movements by explicitly acknowledging that trade-offs exist and at least attempting to reason about them. One can (and should) decline to donate where a charity’s treatment of tradeoffs isn’t convincing. As I’ve stated elsewhere on this post, I’m sticking with GiveWell-style interventions, at least for now.
Oh, I should definitely clarify: I find effective altruism the philosophy, as well as most effective altruists and their actions, to be very good and admirable. My gripe is with what I view as the “EA community”—primarily places like this forum, organizations such as CEA, and participants in EA Global. The more central something is to EA-the-community, the less I like the ideas.
In my view, what happens is that there are a lot of EA-ish people donating to GiveWell charities, and that’s amazing. And then the EA movement comes and goes “but actually, you should really give the money to [something ineffective that’s also sometimes in the personal interest of the person speaking]” and some people get duped. So forums like this one serve to take money that would go to malaria nets, and try as hard as they can to redirect it to less effective charities.
So, to your questions: how many people are working towards bee welfare? Not many. But on this forum, it’s a common topic of discussion (often with things like nematodes instead of bees). I haven’t been to EA Global, but I know where I’d place my bets for what receives attention there. Though honestly, both HLI and the animal welfare stuff are probably small potatoes compared to AI risk and meta-EA, two areas in which these dynamics play an even bigger role (and in which there are even more broken thermometers and conflicts of interest).
Yes. There is a large range of such numbers. I am not sure of the right tradeoff. I would intuitively expect a billion therapy sessions to be an overestimate (i.e. clearly more valuable than the life of a child), but I didn’t do any calculations. A thousand seems like an underestimate, but again who knows (I didn’t do any calculations). HLI is claiming (checks notes) ~12.
To flip the question: Do you think there’s a number you would reject for how many people treated with psychotherapy would be worth the death of one child, even if some seemingly-fancy analysis based on survey data backed it up? Do you ever look at the results of an analysis and go “this must be wrong,” or is that just something the community refuses to do on principle?
To be a little more precise:
I.e., is it your view that 4-8 weeks of group therapy (~12 hours) for 20 people is preferable to averting the death of a child?
To be clear on what the numbers are: we estimate that group psychotherapy has an effect of 10.5 WELLBYs on the recipient’s household, and that the death of a child in a LIC has a −7.3 WELLBY effect on the bereaved household. But the estimate for grief was very shallow. The report this estimate came from was not focused on making a cost-effectiveness estimate of saving a life (with AMF). Again, I know this sounds weasel-y, but we haven’t yet formed a view on the goodness of saving a life so I can’t say how much group therapy HLI thinks is preferable averting the death of a child.
That being said, I’ll explain why this comparison, as it stands, doesn’t immediately strike me as absurd. Grief has an odd counterfactual. We can only extend lives. People who’re saved will still die and the people who love them will still grieve. The question is how much worse the total grief is for a very young child (the typical beneficiary of e.g., AMF) than the grief for the adolescent, or a young adult, or an adult, or elder they’d become [1]-- all multiplied by mortality risk at those ages.
So is psychotherapy better than the counterfactual grief averted? Again, I’m not sure because the grief estimates are quite shallow, but the comparison seems less absurd to me when I hold the counterfactual in mind.
I assume people, who are not very young children, also have larger social networks and that this could also play into the counterfactual (e.g., non-children may be grieved for by more people who forged deeper bonds). But I’m not sure how much to make of this point.
Thanks Joel.
My intuition, which is shared by many, is that the badness of a child’s death is not merely due to the grief of those around them. So presumably the question should not be comparing just the counterfactual grief of losing a very young child VS an [older adult], but also “lost wellbeing” from living a net-positive-wellbeing life in expectation?
I also just saw that Alex claims HLI “estimates that StrongMinds causes a gain of 13 WELLBYs”. Is this for 1 person going through StrongMinds (i.e. ~12 hours of group therapy), or something else? Where does the 13 WELLBYs come from?
I ask because if we are using HLI’s estimates of WELLBYs per death averted, and use your preferred estimate for the neutral point, then 13 / (4.95-2) is >4 years of life. Even if we put the neutral point at zero, this suggests 13 WELLBYs is worth >2.5 years of life.[1]
I think I’m misunderstanding something here, because GiveWell claims “HLI’s estimates imply that receiving IPT-G is roughly 40% as valuable as an additional year of life per year of benefit or 80% of the value of an additional year of life total.”
Can you help me disambiguate this? Apologies for the confusion.
13 / 4.95
I didn’t mean to imply that the badness of a child’s death is just due to grief. As I said in my main comment, I place substantial credence (2/3rds) in the view that death’s badness is the wellbeing lost. Again, this my view not HLIs.
The 13 WELLBY figure is the household effect of a single person being treated by StrongMinds. But that uses the uncorrected household spillover (53% spillover rate). With the correction (38% spillover) it’d be 10.5 WELLBYs (3.7 WELLBYs for recipient + 6.8 for household).
GiveWell arrives at the figure of 80% because they take a year of life as valued at 4.55 WELLBYs = 4.95 − 0.5 according to their preferred neutral point, and StrongMinds benefit ,according to HLI, to the direct recipient is 3.77 WELLBYs --> 3.77 / 4.55 = ~80%. I’m not sure where the 40% figure comes from.
That makes sense, thanks for clarifying!
If I understand correctly, the updated figures should then be:
For 1 person being treated by StrongMinds (excluding all household spillover effects) to be worth the WELLBYs gained for a year of life[1] with HLI’s methodology, the neutral point needs to be at least 4.95-3.77 = 1.18.
If we include spillover effects of StrongMinds (and use the updated / lower figures), then the benefit of 1 person going through StrongMinds is 10.7 WELLBYs.[2] Under HLI’s estimates, this is equivalent to more than two years of wellbeing benefits from the average life, even if we set the neutral point at zero. Using your personal neutral point of 2 would suggest the intervention for 1 person including spillovers is equivalent to >3.5 years of wellbeing benefits. Is this correct or am I missing something here?
1.18 as the neutral point seems pretty reasonable, though the idea that 12 hours of therapy for an individual is worth the wellbeing benefits of 1 year of an average life when only considering impacts to them, and anywhere between 2~3.5 years of life when including spillovers does seem rather unintuitive to me, despite my view that we should probably do more work on subjective wellbeing measures on the margin. I’m not sure if this means:
WELLBYs as a measure can’t capturing what I care about in a year of healthy life, so we should not use solely WELLBYs when measuring wellbeing;
HLI isn’t applying WELLBYs in a way that captures the benefits of a healthy life;
The existing way of estimating 1 year of life via WELLBYs is wrong in some other way (e.g. the 4.95 assumption is wrong, the 0-10 scale is wrong, the ~1.18 neutral point is wrong);
HLI have overestimated the benefits of StrongMinds;
I have a very poorly calibrated view of how good / bad 12 hours of therapy / a year of life is worth, though this seems less likely.
Would be interested in your thoughts on this / let me know if I’ve misinterpreted anything!
More precisely, the average wellbeing benefits from 1 year of life from an adult in 6 African countries
3.77*(1+0.38*4.85)
I appreciate your candid response. To clarify further: suppose you give a mother a choice between “your child dies now (age 5), but you get group therapy” and “your child dies in 60 years (age 65), but no group therapy”. Which do you think she will choose?
Also, if you don’t mind answering: do you have children? (I have a hypothesis that EA values are distorted by the lack of parents in the community; I don’t know how to test this hypothesis. I hope my question does not come off as rude.)
I don’t think that’s the right question for three reasons.
The hypothetical mother will almost certainly consider the well-being of her child (under a deprivationist framework) in making that decision—no one is suggesting that saving a life is less valuable than therapy under such an approach. Whatever the merits of an epicurean view that doesn’t weigh lost years of life, we wouldn’t have made it long as a species if parents applied that logic to their own young children.
Second, the hypothetical mother would have to live with the guilt of knowing she could have saved her child but chose something for herself.
Finally, GiveWell-type recommendations often would fail the same sort of test. Many beneficiaries would choose receiving $8X (where X = bednet cost) over receiving a bednet, even where GiveWell thinks they would be better off with the latter.
Thanks for your response.
If the mother would rather have her child alive, then under what definition of happiness/utility do you conclude she would be happier with her child dead (but getting therapy)? I understand you’re trying to factor out the utility loss of the child; so am I. But just from the mother’s perspective alone: she prefers scenario X to scenario Y, and you’re saying it doesn’t count for some reason? I don’t get it.
I think you’re double-subtracting the utility of the child: you’re saying, let’s factor it out by not asking the child his preference, and ALSO let’s ADDITIONALLY factor it out by not letting the mother be sad about the child not getting his preference. But the latter is a fact about the mother’s happiness, not the child’s.
Let’s add memory loss to the scenario, so she doesn’t remember making the decision.
Yes, and GiveWell is very clear about this and most donors bite the bullet (people make irrational decisions with regards to small risks of death, and also, betnets have positive externalities to the rest of the community). Do you bite the bullet that says “the mother doesn’t know enough about her own happiness; she’d be happier with therapy than with a living child”?
Finally, I do hope you’ll answer regarding whether you have children. Thanks again.
I’m not Joel (nor do I work for HLI, GiveWell, SM, or any similar organization). I do have a child, though. And I do have concerns with overemphasis on whether one is a parent, especially when one’s views are based (in at least significant part) on review of the relevant academic literature. Otherwise, does one need both to be a parent and to have experienced a severe depressive episode (particularly in a low-resource context where there is likely no safety net) in order to judge the tradeoffs between supporting AMF and supporting SM?
Personally—I am skeptical that the positive effect of therapy exceeds the negative effect of losing one’s young child on a parent’s own well-being. I just don’t think the thought experiment you proposed is a good way to cross-check the plausibility of such a view. The consideration of the welfare of one’s child (independent of one’s own welfare) in making decisions is just too deeply rooted for me to think we can effectively excise it in a thought experiment.
In any event, given that SM can deliver many courses of therapy with the resources AMF needs to save one child, the two figures don’t need to be close if one believes the only benefit from AMF is the prevention of parental grief. SM’s effect size would only need to be greater than 1/X of the WELLBYs lost to parental grief from one child death, where X is the number of courses SM can deliver with the resources AMF needs to prevent one child death. That is the bullet that epicurean donors have to bite to choose SM over AMF.
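To make the break-even condition concrete, here is a minimal sketch of that arithmetic. The figures are illustrative, taken from elsewhere in this thread (HLI's ~−7.3 WELLBY grief estimate and ~60 therapy courses per life-saving grant), not authoritative inputs:

```python
# Break-even check for the grief-only (epicurean) comparison sketched above.
# Figures are illustrative, pulled from elsewhere in this thread.
grief_wellbys = 7.3      # WELLBYs lost to the bereaved household per child death
courses_per_life = 60    # X: therapy courses fundable per life-saving AMF grant

# SM only needs an effect per course exceeding grief / X to win on this view.
threshold = grief_wellbys / courses_per_life
print(f"Break-even effect per course: {threshold:.3f} WELLBYs")  # ~0.122

# HLI's estimated household effect per course of group psychotherapy:
therapy_wellbys = 10.5
print("SM preferred on the grief-only view:", therapy_wellbys > threshold)
```

On these numbers the threshold is tiny (about 0.12 WELLBYs per course), which is why the two headline figures don't need to be close for the epicurean comparison to favor SM.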
Sorry for confusing you for Joel!
It’s good to hear you say this.
Definitely true. But if a source (like a specific person or survey) gives me absurd numbers, it is a reason to dismiss it entirely. For example, if my thermometer tells me it’s 1000 degrees in my house, I’m going to throw it out. I’m not going to say “even if you merely believe it’s 90 degrees we should turn on the AC”. The exaggerated claim is disqualifying; it decreases the evidentiary value of the thermometer’s reading to zero.
When someone tells me that group therapy is more beneficial to the mother’s happiness than saving her child from death, I don’t need to listen to that person anymore. And if it’s a survey that tells me this, throw out the survey. If it’s some fancy academic methods and RCTs, the interesting question is where they went wrong, and someone should definitely investigate that, but at no point should people take it seriously.
By all means, let’s investigate how the thermometer possibly gave a reading of 1000 degrees. But until we diagnose the issue, it is NOT a good idea to use “1000 degrees in the house” in any decision-making process. Anyone who uses “it’s 1000 degrees in this room” as a placeholder value for making EA decisions is, in my view, someone who should never be trusted with any levers of power, as they cannot spot obvious errors that are staring them in the face.
We both think the ratio of parental grief WELLBYs to therapy WELLBYs is likely off, although that doesn’t tell us which number is wrong. Given that your argument is that an implausible ratio should tip HLI off that there’s a problem, the analysis below takes the view more favorable to HLI—that the parental grief number (for which much less work has been done) is at least the major cause of the ratio being off.
As I see it, the number of WELLBYs preserved by averting an episode of parental grief is very unlikely to be material to any decision under HLI’s cost-effectiveness model. Under philosophical assumptions where it is a major contributor to the cost-effectiveness estimate, that estimate is almost always going to be low enough that life-saving interventions won’t be considered cost-effective on the whole. Under philosophical assumptions where life-saving programs may be cost-effective, the bulk of the effectiveness will come directly from the effect on the saved life itself. Thus, it would not be unreasonable for HLI—which faces significant resource constraints—to have deprioritized attempts to improve the accuracy of its estimate for WELLBYs preserved by averting an episode of parental grief.
Given that, I can see three ways of dealing with parental grief in the cost-effectiveness model for AMF. Ignoring it seems rather problematic. And I would argue that reporting the value one’s relatively shallow research produced (with a disclaimer that one has low certainty in it) is often more epistemically virtuous than adjusting to some value one thinks is more likely to be correct for intuitive reasons, bereft of actual evidence to support that number. I guess the other way is to just not publish anything until one can turn in more precise models . . . but that norm would make it much more difficult to bring new and innovative ideas to the table.

I don’t think the thermometer analogy really holds here. Assuming HLI got a significantly wrong value for WELLBYs preserved by averting an episode of parental grief, there are a number of plausible explanations, the bulk of which would not justify not “listen[ing] to [them] anymore.” The relevant literature on grief could be poor quality or underdeveloped; HLI could have missed important data or modeled inadequately given the resources it could afford to spend on the question; it could have made a technical error; its methodology could be ill-suited to studying parental grief; its methodology could be globally unsound; and doubtless other reasons. In other words, I wouldn’t pay attention to the specific thermometer that said it was much hotter than it was . . . but in most cases I would only update weakly toward distrusting other thermometers by the same manufacturer (the charity evaluator), or thermometer technology in general (the WELLBY analysis).
Moreover, I suspect there have been, and will continue to be, malfunctioning thermometers at most of the major charity evaluators and major grantmakers. The grief figure is a non-critical value relating to an intervention that HLI isn’t recommending. For the most part, if an evaluator or grantmaker isn’t recommending or funding an organization, it isn’t going to release its cost-effectiveness model for that organization at all. Even where funding is recommended, there often isn’t the level of reasoning transparency that HLI provides. If we are going to derecognize people who have used malfunctioning thermometer values in any cost-effectiveness analysis, there may not be many people left to perform them.
I’ve criticized HLI on several occasions before, and I’m likely to find reasons to criticize it again at some point. But I think we want to encourage its willingness to release less-refined models for public scrutiny (as long as the limitations are appropriately acknowledged) and its commitment to reasoning transparency more generally. I am skeptical of any argument that would significantly incentivize organizations to keep their analyses close to the chest.
I disagree with you on several points.
The most important thing to note here is that, if you dig through the various long reports, the tradeoff is:
With $7800 you can save the life of a child, or
If you grant HLI’s assumptions regarding costs (and I’m a bit skeptical even there), you can provide a multi-week course of group therapy (I believe 12 sessions of 90 minutes) to 60 people for that same cost.
Which is better? Well, right off the bat, if you think mothers would value their children at 60x what they value the therapy sessions, you’ve already lost.
Of course, the child’s life also matters, not just the mother’s happiness. But HLI has a range of “assumptions” regarding how good a life is, and under many of these assumptions the life of the child is indeed worth fairly little compared to improvements in the mother’s welfare (because life is suffering and death is OK, basically).
All this is obfuscated under various levels of analysis. Moreover, in HLI’s median assumption, not only is the therapy more effective, it is 5x more effective. They are saying: the number of group therapies that equal the averted death of a child is not 60, but rather, 12.
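The arithmetic behind this "60 vs. 12" framing can be laid out explicitly. The figures below come from the comments above ($7,800 per life saved, ~60 therapy courses for the same money, and HLI's median view that therapy is ~5x as effective); treat them as illustrative, not authoritative:

```python
# Arithmetic behind the "60 vs. 12" framing (figures from the thread above;
# illustrative only).
cost_per_life = 7800       # dollars AMF needs to save one child
courses_per_life = 60      # therapy courses SM can fund with the same money

cost_per_course = cost_per_life / courses_per_life
print(f"Implied cost per therapy course: ${cost_per_course:.0f}")  # $130

# HLI's median assumption rates therapy ~5x as cost-effective as saving a
# life, which implies far fewer courses are needed to equal one averted death:
relative_effectiveness = 5
breakeven_courses = courses_per_life / relative_effectiveness
print(f"Courses HLI implies equal one averted death: {breakeven_courses:.0f}")  # 12
```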
To me that’s broken-thermometer level.
I know the EA community is full of broken thermometers, and it’s actually one of the reasons I do not like the community. One of my main criticisms of EA is, indeed, “you’re taking absurd numbers (generated by authors motivated to push their own charities/goals) at face value”. This also happens with animal welfare: there’s this long report and 10-part forum series evaluating animals’ welfare ranges, and it concludes that 1 human has the welfare range of (checks notes) 14 bees. Then others take that at face value and act as if a couple of beehives or shrimp farms are as important as a human city.
This is not the first time I’ve had this argument made to me when I criticize an EA charity. It seems almost like the default fallback. I think EA has the opposite problem, however: nobody ever dares to say the emperor has no clothes, and everyone goes around pretending 1 human is worth 14 bees and a group therapy session increases welfare by more than the death of your child decreases it.
I think it is possible to buy that humans’ maximal pains and pleasures are only 14 times as intense as bees’, and still think 14 bees = 1 human is silly. You just have to reject hedonism about well-being. I have strong feelings about saving humans over animals, but I have no intuition whatsoever that if my parents’ dog burns her paw it hurts less than when I burn my hand. The whole idea that animals have less intense sensations than us seems to me less like a commonsense claim, and more like something people committed to both hedonism and antispeciesism made up to reconcile their intuitive repugnance at results like 10 pigs or whatever = 1 human. (Bees are kind of a special case, because lots of people are confident they aren’t conscious at all.)
Where’s the evidence that, e.g., everyone “act[s] as if a couple of beehives or shrimp farms are as important as a human city”? So someone wrote a speculative report about bee welfare ranges . . . if “everyone” accepted that “1 human is worth 14 bees”—or even anything close to that—the funding and staffing pictures in EA would look very, very different. How many EAs are working in bee welfare, and how much is being spent in that area?
As I understand the data, EA resources in GH&D are pretty overwhelmingly in life-saving interventions like AMF, suggesting that the bulk of EA does not agree with HLI at present. I’m not as well versed in farmed animal welfare, but I’m pretty sure no one in that field is fundraising for interventions costing anywhere remotely near hundreds of dollars to save a bee and claiming they are effective.
In the end, reasoning transparency by charity evaluators helps donors make better-informed moral choices. Carefully reading analyses from various sources helps me (and other donors) make choices that are consistent with our own values. EA is well ahead of most charitable movements in explicitly acknowledging that trade-offs exist and at least attempting to reason about them. One can (and should) decline to donate where a charity’s treatment of tradeoffs isn’t convincing. As I’ve stated elsewhere on this post, I’m sticking with GiveWell-style interventions, at least for now.
Oh, I should definitely clarify: I find effective altruism the philosophy, as well as most effective altruists and their actions, to be very good and admirable. My gripe is with what I view as the “EA community”: primarily places like this forum, organizations such as the CEA, and participants in EA Global. The more central something is to EA-the-community, the less I like its ideas.
In my view, what happens is that there are a lot of EA-ish people donating to GiveWell charities, and that’s amazing. And then the EA movement comes and goes “but actually, you should really give the money to [something ineffective that’s also sometimes in the personal interest of the person speaking]” and some people get duped. So forums like this one serve to take money that would go to malaria nets, and try as hard as they can to redirect it to less effective charities.
So, to your questions: how many people are working toward bee welfare? Not many. But on this forum it’s a common topic of discussion (often with things like nematodes instead of bees). I haven’t been to EA Global, but I know where I’d place my bets for what receives attention there. Though honestly, both HLI and the animal welfare stuff are probably small potatoes compared to AI risk and meta-EA, two areas in which these dynamics play an even bigger role (and in which there are even more broken thermometers and conflicts of interest).
Do you think there’s a number you would accept for how many people treated with psychotherapy would be “worth” the death of one child?
Yes. There is a large range of such numbers. I am not sure of the right tradeoff. I would intuitively expect a billion therapy sessions to be an overestimate (i.e. clearly more valuable than the life of a child), but I didn’t do any calculations. A thousand seems like an underestimate, but again who knows (I didn’t do any calculations). HLI is claiming (checks notes) ~12.
To flip the question: Do you think there’s a number you would reject for how many people treated with psychotherapy would be worth the death of one child, even if some seemingly-fancy analysis based on survey data backed it up? Do you ever look at the results of an analysis and go “this must be wrong,” or is that just something the community refuses to do on principle?