Thanks for the thorough engagement, Michael. We appreciate thoughtful criticism of our work and are always happy to see more of it. (And thanks for flagging this to us in advance so we could think about it—we appreciate that too!)
One place where I particularly appreciate the push is on better defining and articulating what we mean by “worldviews” and how we approach worldview diversification. By worldview we definitely do not mean “a set of philosophical assumptions”—as Holden writes in the blog post where he introduced the concept, we define worldviews as:
a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than either of the types of giving discussed above; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty, as laid out in the third bullet point above).
We think it is a mistake to collapse worldviews, in the sense that we use the term, into popular debates in philosophy, and we definitely don’t aim to be exhaustive across worldviews that have many philosophical adherents. We see the proliferation of worldviews as costly for the standard intellectual reason that it inhibits optimization, as well as for carrying substantial practical costs, so we think the bar for putting money behind an additional worldview is significantly higher than you seem to think. But we haven’t done a good job articulating and exploring what we do mean and how that interacts with the case for worldview diversification (which itself remains undertheorized). We appreciate the push on this and are planning to do more thinking and writing on it in the future.
In terms of disagreements, I think maybe the biggest one is a meta one about the value of philosophy per se. We are less worried about internal consistency than we think it is appropriate for philosophers to be, and accordingly less interested in costly exercises that would make us more consistent without carrying obviously large practical benefits. When we encounter critiques, our main questions are: “How would we spend our funding differently if this critique were correct? How costly are the deviations we’re making, according to this critique?” As an example of a case where we spent a lot of time thinking about the philosophy, concluded it didn’t really have high utility stakes, and so deprioritized it for now, see the last footnote on this post (where we find that the utility stakes of a ~3x increase in valuations on lives in some countries would be surprisingly small—not because it would not change what we would fund, but because the costs of mistakes are not that big on the view that has the higher valuations). You mentioned being confused by what’s going on in that sheet, which is totally fair—feel free to email Peter for a more detailed explanation/walkthrough, as the footnote indicates.
In this particular writeup, you haven’t focused as much on the upshot of what we should fund that we don’t (or what we do fund that we shouldn’t), but elsewhere in your writing I take your implication to be that we should do more on mental health. Based on my understanding of your critiques, I think that takeaway is wrong, and in fact taking on board your critiques here would lead us to do more of what most of OP Global Health and Wellbeing already does—save kids’ lives and work to fight the worst abuses of factory farming, potentially with a marginal reduction in our more limited work focused on increasing incomes. Three particular disagreements that I think drive this:
Set point. I think setting a neutral point on a life satisfaction scale of 5⁄10 is somewhere between unreasonable and unconscionable, and OP institutionally is comfortable with the implication that saving human lives is almost always good. Given that we think the correct neutral point is low, taking your other points on board would imply that we should place even more weight on life-saving interventions. We think that is plausible, but for now we’ll note that we’re already really far in this direction compared to other actors. That doesn’t mean we shouldn’t go further, but we do think it should prompt some humility on our part re: even more extreme divergence with consensus, which is one reason we’re going slowly.
Hedonism. We think that most plausible arguments for hedonism end up being arguments for the dominance of farm animal welfare. We seem to put a lot of weight on those arguments relative to you, and farm animal welfare is OP GHW’s biggest area of giving after GiveWell recommendations. If we updated toward more weight on hedonism, we think the correct implication would be even more work on FAW, rather than work on human mental health. A little more abstractly, we don’t think that different measures of subjective wellbeing (hedonic and evaluative) neatly track different theories of welfare. That doesn’t mean they’re useless—we can still learn a lot when noisy measures all point in the same direction—but we don’t think it makes sense to entrench a certain survey-based measure like life satisfaction scores as the ultimate goal.
Population ethics. While we’re ambivalent about how much to bet on the total view, we disagree with your claim that doing so would reduce our willingness to pay for saving lives, given offsetting fertility effects. As I wrote here, Roodman’s report only counts the first generation. If he is right that preventing two under-5 deaths leads to ~one fewer birth, that’s still a net one more kid making it to adulthood and being able to have kids of their own. Given fertility rates in the places where we fund work to save lives, I think that would more than offset the Roodman adjustment in just a few decades, and could cumulatively lead to much higher weight on the value of saving kids’ lives today (though one would also have to be attentive to the potential costs of bigger populations).
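To make the compounding concrete, here is a toy sketch of that argument. The numbers are purely illustrative (they are not Roodman’s or OP’s actual figures): saving two under-5 lives is assumed to be offset by roughly one fewer birth, and each extra survivor is assumed to have two surviving children of their own.

```python
# Toy model of the fertility-offset argument above. Assumed, illustrative
# numbers only: saving two under-5 lives is offset by ~one fewer birth,
# leaving one extra person net; that person and their descendants then
# each have children at an assumed net reproduction rate.

def cumulative_extra_people(net_survivors=1.0, net_reproduction_rate=2.0,
                            generations=4):
    """Cumulative extra people across generations, starting from the net
    survivors left after the first-generation fertility offset."""
    total = 0.0
    cohort = net_survivors
    for _ in range(generations):
        total += cohort
        cohort *= net_reproduction_rate  # surviving children per person
    return total

# Counting only the first generation gives 1 extra person, but over four
# generations the cumulative total (1 + 2 + 4 + 8 = 15) far exceeds the
# two lives originally saved, more than undoing the offset.
print(cumulative_extra_people())  # 15.0
```

On these made-up assumptions, the first-generation offset is swamped within a few generations, which is the shape of the claim being made (a real model would of course use observed fertility and mortality rates).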
Related to the point about placing less weight on the value of philosophy per se, we’re reluctant to get pulled into long written back-and-forths about this kind of thing, so I’m not planning to say more on this thread by default, but I’m happy to continue these discussions in the future. And thanks again for taking the time to engage here.
Thanks very much for these comments! Given that Alex—who I’ll refer to in the third person from here—doesn’t want to engage in a written back-and-forth, I will respond to his main points in writing now and suggest he and I speak at some other time.
Alex’s main point seems to be that Open Philanthropy (OP) won’t engage in idle philosophising: they’re willing to get stuck into the philosophy, but only if it makes a difference. I understand that—I only care about decision-relevant philosophy too. Of course, sometimes the philosophy does really matter: the split of OP into the ‘longtermism’ and ‘global health and wellbeing’ pots is an indication of this.
My main reply is that Alex has been too quick to conclude that moral philosophy won’t matter for OP’s decision-making on global health and wellbeing. Let me (re)state a few points which show, I think, that it does matter and, as a consequence, OP should engage further.
As John Halstead has pointed out in another comment, the location of the neutral point could make a big difference, and it’s not obvious where it is. If this were a settled question, I might agree with Alex’s take, but it’s not settled.
Relatedly, as I say in the post, switching between two different accounts of the badness of death (deprivationism and TRIA) would alter the relative value of life-extending versus life-improving interventions by a factor of perhaps 5 or more.
Alex seems to object to hedonism, but I’m not advocating for hedonism (at least, not here). My main point is about adopting a ‘subjective wellbeing (SWB) worldview’, where you use the survey research on how people actually experience their lives to determine what does the most good. I’m not sure exactly what OP’s worldview is—that’s basically the point of the main post—but it seems to place little weight on people’s feelings (their ‘experienced utility’) and far more on what they do or would choose (their ‘decision utility’). But, as I argue above, these two can substantially come apart: we don’t always choose what makes us happiest. Indeed, we make predictable mistakes (see our report on affective forecasting for more on this).
Mental health is a problem that looks pretty serious on the SWB worldview but appears nowhere in the worldview that OP seems to favour. As noted, HLI finds therapy for depressed people in LICs is about 10x more cost-effective than cash transfers in LICs. That, to me, is sufficient to take the SWB worldview seriously. I don’t see what this necessarily has to do with animals.
Will the SWB lens reveal different priorities in other cases? Very probably—pain and loneliness look more important, economic growth less, and so on—but I can’t say for sure because attempts to apply this lens are so new. I had hoped OP’s response would be “oh, this seems to really matter, let’s investigate further”, but it seems to be “we’re not totally convinced, so we’ll basically ignore this”.
Alex says “we don’t think that different measures of subjective wellbeing (hedonic and evaluative) neatly track different theories of welfare” but he doesn’t explain or defend that claim. (There are a few other places above where he states, but doesn’t argue for, his opinion, which makes it harder to have a constructive disagreement.)
On the total view, saving lives, and fertility, we seem to be disagreeing about one thing but agreeing about another. I said the total view would lead us to reduce the value of saving lives. Alex says it might actually cause us to increase the value of saving lives when we consider longer-run effects. Okay. In which case, it would seem we agree that taking a stand on population ethics might really matter, and so I take it we ought to see where the argument goes (rather than ignore it in case it takes us somewhere we don’t like).
It seems that Alex’s conclusion that moral philosophy barely matters relies heavily on the reasoning in the spreadsheet linked to in footnote 50 of the technical update blog post. The footnote states “Our [OP’s] analysis tends to find that picking the wrong moral weight only means sacrificing 2-5% of the good we could do”. I discussed this above in footnote 3, but I expect it’s worth restating and elaborating on that here. The spreadsheet isn’t explained, and it’s unclear what the justification for it is. I suspect the “2-5%” claim is really a motte-and-bailey. To explain: one might think OP is making a very strong claim, such as “whatever assumptions you make about morality make almost no difference to what you ought to do”. Clearly, that claim is implausible. If OP does believe this, that would be an amazing conclusion about practical ethics and I would encourage them to explain it in full. However, it seems that OP is probably making a much weaker claim, such as “given some restrictions on what our moral views can be, we find it makes little difference which ones we pick”. This claim is plausible, but, of course, the concern is that the choice of moral views has been unduly restricted. What the preceding bullet points demonstrate is that different moral assumptions (and/or ‘worldviews’) could substantially change our conclusions—it’s not just a 2-5% difference.
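To illustrate the deprivationism/TRIA point above with toy numbers (none of these figures are from the post or from OP; the connectedness factor in particular is hypothetical):

```python
# Toy numbers showing how the account of the badness of death can swing
# valuations ~5x. Deprivationism counts all the wellbeing the child
# would have had; TRIA (the time-relative interest account) discounts
# that by the child's psychological connectedness to its future self.

remaining_life_years = 60
wellbeing_per_year = 1.0

deprivationism_value = remaining_life_years * wellbeing_per_year  # 60.0
connectedness = 0.2  # assumed: an infant is weakly connected to its future self
tria_value = deprivationism_value * connectedness  # 12.0

print(deprivationism_value / tria_value)  # 5.0
```

Under these assumptions, saving the same child’s life is worth five times as much on deprivationism as on TRIA, which is the order of difference the post describes.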
I understand, of course, that investigating—and, possibly, implementing—additional worldviews is, well, hassle. But Open Philanthropy is a multi-billion dollar foundation that’s publicly committed to worldview diversification and it looks like it would make a practical difference.
Set point. I think setting a neutral point on a life satisfaction scale of 5⁄10 is somewhere between unreasonable and unconscionable
The author doesn’t argue that the neutral point is 5⁄10; he argues (1) that the decision about where to set the neutral point is crucial for prioritising resources, and (2) that you haven’t defended a particular neutral point in public.
and OP institutionally is comfortable with the implication that saving human lives is almost always good. Given that we think the correct neutral point is low, taking your other points on board would imply that we should place even more weight on life-saving interventions.
I don’t really see how this responds to Michael’s point. You say “assuming that the neutral point is low, we should spend more on life-saving.” But his point is that you haven’t defended a low neutral point and that it might be above zero. If the neutral point is (e.g.) 2.5, that implies that much of your spending on life-saving (like bednets) is net harmful. One recent unpublished study found that UK respondents put the neutral point at 2. This seems like the kind of thing that is practically important enough to be worth GiveWell thinking about.
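The arithmetic behind that worry is simple. A minimal sketch with illustrative numbers (the 2.3 figure is the IDinsight Kenya average cited elsewhere in this thread; the neutral points are hypothetical):

```python
# On a life-satisfaction view, the value of an extra life-year is
# roughly (satisfaction score - neutral point), so the neutral point
# determines the *sign* of life-saving interventions.

def life_year_value(satisfaction, neutral_point):
    return satisfaction - neutral_point

kenya_score = 2.3  # IDinsight survey average for respondents in Kenya

# Neutral point near zero: saving lives at this satisfaction level is good.
print(life_year_value(kenya_score, 0.5) > 0)  # True

# Neutral point of 2.5: the same life-years come out net negative.
print(life_year_value(kenya_score, 2.5) < 0)  # True
```

Nothing in the real debate turns on this exact functional form; the point is only that a two-point shift in the neutral point can flip the sign of the conclusion, which is why leaving it undefended matters.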
Given that you define a worldview as a “set of … beliefs that favor a certain kind of giving,” it matters whether you understand income and health as “intrinsically [or] instrumentally valuable.” In the latter case but not the former, if you learned that income and health do not optimize for your desired end, you would change your giving.
I understand the implied investment recommendations to be programs on education, relationship improvement, cooperation (with achievement outcomes), mental health, chronic pain reduction, happiness vs. life satisfaction research, conflict prevention and mitigation, companionship, employment, crime reduction, and democracy:
and the objective list, where wellbeing consists in various objective goods such as knowledge, love, and achievement
underweight invisible, ongoing misery (such as mental illness or chronic pain)
the best thing for improving happiness may be different from the best thing for increasing life satisfaction. Investigating this requires extra work.
other things affect our wellbeing too (such as war, loneliness, unemployment, crime, living in a democracy, etc.) and their value is not entirely reducible to effects on health or income.
The implied divestment recommendations can be understood as bednets in Kenya, GiveDirectly transfers to some but not all members of largely extremely poor communities, and the Centre for Pesticide Suicide Prevention:
It’s worth pointing out that many of those whose lives are saved by the interventions that OP funds, such as anti-malaria bednets, will have a life satisfaction score below the neutral point, unless we set it at, or near to, 0⁄10. IDinsight’s aforementioned beneficiary preferences survey has an SWB question and found those surveyed in Kenya had an average life satisfaction score of 2.3/10.
But OP (and others) tend to ignore fairness; the aim is just to do the most good.
But, don’t happier people gain a greater benefit from an extra year of life than less happy people? If so, how can it be consistent to conclude we should account for quantity when assessing the value of saving lives, but not quality?
I understand that you are disengaging from replies, but I am interested in OP’s perspective on the 0–10 life satisfaction value at which you would invest in life-satisfaction-improving programs rather than family planning programs.
I am also wondering about your definition of health and your rationale for selecting the DALY metric to represent it.