Thanks very much for these comments! Given that Alex—who I’ll refer to in the third person from here—doesn’t want to engage in a written back-and-forth, I will respond to his main points in writing now and suggest he and I speak at some other time.
Alex’s main point seems to be that Open Philanthropy (OP) won’t engage in idle philosophising: they’re willing to get stuck into the philosophy, but only if it makes a difference. I understand that—I only care about decision-relevant philosophy too. Of course, sometimes the philosophy does really matter: the split of OP into the ‘longtermism’ and ‘global health and wellbeing’ pots is an indication of this.
My main reply is that Alex has been too quick to conclude that moral philosophy won’t matter for OP’s decision-making on global health and wellbeing. Let me (re)state a few points which show, I think, that it does matter and, as a consequence, OP should engage further.
As John Halstead has pointed out in another comment, the location of the neutral point (the level of subjective wellbeing at which continued existence is neither good nor bad for a person) could make a big difference, and it’s not obvious where it is. If this were a settled question, I might agree with Alex’s take, but it’s not settled.
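To see why this matters, here is a minimal sketch with made-up numbers (the life-satisfaction score, the size of the improvement, and the candidate neutral points are all my assumptions, not HLI’s or OP’s figures). The value of an extra life-year is the person’s wellbeing minus the neutral point, so moving the neutral point rescales life-saving interventions while leaving life-improving ones untouched:

```python
# Toy illustration: how the neutral point changes the value of saving
# vs improving lives. All numbers are assumptions, not HLI or OP figures.

def value_of_extra_life_year(life_satisfaction: float, neutral_point: float) -> float:
    """Wellbeing gained from one extra life-year, on a 0-10 scale."""
    return life_satisfaction - neutral_point

life_satisfaction = 4.0  # hypothetical average score in a low-income country
improvement = 1.0        # hypothetical per-year gain from a life-improving intervention

for neutral_point in (0.5, 2.5, 4.5):
    saved = value_of_extra_life_year(life_satisfaction, neutral_point)
    print(f"neutral point {neutral_point}: one life-year saved = {saved:+.1f}, "
          f"one year of improvement = {improvement:+.1f}")

# neutral point 0.5: one life-year saved = +3.5, one year of improvement = +1.0
# neutral point 2.5: one life-year saved = +1.5, one year of improvement = +1.0
# neutral point 4.5: one life-year saved = -0.5, one year of improvement = +1.0
```

On these toy numbers, shifting the neutral point from 0.5 to 4.5 takes saving a life-year from several times as valuable as the improvement to net negative. That is exactly the kind of swing that should be decision-relevant for a funder.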
Relatedly, as I say in the post, switching between two different accounts of the badness of death (deprivationism and the time-relative interest account, TRIA) would alter the value of life-extending relative to life-improving interventions by a factor of perhaps 5 or more.
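Again, a toy sketch rather than anyone’s actual model: deprivationism counts the full stream of wellbeing the person loses by dying, while TRIA discounts that loss by the degree of psychological connectedness between the person at death and their future selves. For a young child that connectedness is plausibly low; the 0.2 below is an assumption chosen purely to show how a roughly 5x gap can arise:

```python
# Toy sketch of two accounts of the badness of death, applied to an
# averted child death. All parameters are illustrative assumptions.

years_lost = 60           # hypothetical remaining life expectancy
wellbeing_per_year = 2.0  # hypothetical wellbeing above the neutral point
connectedness = 0.2       # hypothetical psychological connectedness of a
                          # young child to their future selves (TRIA's discount)

deprivationism = years_lost * wellbeing_per_year        # full loss: 120.0
tria = connectedness * years_lost * wellbeing_per_year  # discounted: 24.0

print(f"deprivationism: {deprivationism}, TRIA: {tria}, "
      f"ratio: {deprivationism / tria:.0f}x")           # ratio: 5x
```

Since life-improving interventions are valued the same on either account, this translates directly into a roughly 5x swing in the relative value of life-extending work.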
Alex seems to object to hedonism, but I’m not advocating for hedonism (at least, not here). My main point is about adopting a ‘subjective wellbeing (SWB) worldview’, where you use the survey research on how people actually experience their lives to determine what does the most good. I’m not sure exactly what OP’s worldview is—that’s basically the point of the main post—but it seems to place little weight on people’s feelings (their ‘experienced utility’) and far more on what they do or would choose (their ‘decision utility’). But, as I argue above, these two can substantially come apart: we don’t always choose what makes us happiest. Indeed, we make predictable mistakes (see our report on affective forecasting for more on this).
Mental health is a problem that looks pretty serious on the SWB worldview but appears nowhere in the worldview that OP seems to favour. As noted, HLI finds that providing therapy to depressed people in low-income countries (LICs) is about 10x more cost-effective than cash transfers. That, to me, is sufficient reason to take the SWB worldview seriously. I don’t see what this necessarily has to do with animals.
Will the SWB lens reveal different priorities in other cases? Very probably: pain and loneliness look more important, economic growth less so, and so on. But I can’t say for sure because attempts to apply this lens are so new. I had hoped OP’s response would be “oh, this seems to really matter, let’s investigate further”, but it seems to be “we’re not totally convinced, so we’ll basically ignore this”.
Alex says “we don’t think that different measures of subjective wellbeing (hedonic and evaluative) neatly track different theories of welfare” but he doesn’t explain or defend that claim. (There are a few other places above where he states, but doesn’t argue for, his opinion, which makes it harder to have a constructive disagreement.)
On the total view, saving lives, and fertility, we seem to be disagreeing about one thing but agreeing about another. I said the total view would lead us to reduce the value of saving lives. Alex says it might actually cause us to increase the value of saving lives when we consider longer-run effects. Okay. If so, we agree that taking a stand on population ethics might really matter, in which case I take it we ought to see where the argument goes (rather than ignore it in case it takes us somewhere we don’t like).
It seems that Alex’s conclusion that moral philosophy barely matters relies heavily on the reasoning in the spreadsheet linked to in footnote 50 of the technical update blog post. The footnote states “Our [OP’s] analysis tends to find that picking the wrong moral weight only means sacrificing 2-5% of the good we could do”. I discussed this above in footnote 3, but it’s worth restating and elaborating on here. The spreadsheet isn’t explained, and it’s unclear what justifies its assumptions. I suspect the “2-5%” claim is really a motte-and-bailey. To explain: one might think OP is making a very strong claim, such as “whatever assumptions you make about morality makes almost no difference to what you ought to do”. Clearly, that claim is implausible. If OP does believe it, that would be an amazing conclusion about practical ethics, and I would encourage them to explain it in full. However, it seems that OP is probably making a much weaker claim, such as “given some restrictions on what our moral views can be, we find it makes little difference which ones we pick”. This claim is plausible, but, of course, the concern is that the choice of moral views has been unduly restricted. What the preceding points demonstrate is that different moral assumptions (and/or ‘worldviews’) could substantially change our conclusions—it’s not just a 2-5% difference.
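To make the worry concrete, here is a stylised example (the worldview labels and all the cost-effectiveness numbers are mine, invented for illustration; they are not from OP’s spreadsheet). When the candidate moral views disagree about which intervention ranks first, the cost of picking the wrong view is far larger than 2-5%:

```python
# Stylised example of why the "2-5%" figure depends on which moral views
# are allowed into the comparison. All numbers are invented for illustration.

# Cost-effectiveness of two intervention types under two worldviews,
# in arbitrary "good done per dollar" units.
effectiveness = {
    "life-saving":    {"view A (deprivationist)": 10.0, "view B (SWB-based)": 2.0},
    "life-improving": {"view A (deprivationist)": 4.0,  "view B (SWB-based)": 8.0},
}

for true_view in ("view A (deprivationist)", "view B (SWB-based)"):
    best = max(effectiveness, key=lambda i: effectiveness[i][true_view])
    worst = min(effectiveness, key=lambda i: effectiveness[i][true_view])
    loss = 1 - effectiveness[worst][true_view] / effectiveness[best][true_view]
    print(f"if {true_view} is correct, backing the wrong intervention "
          f"sacrifices {loss:.0%} of the good")

# if view A (deprivationist) is correct ... sacrifices 60% of the good
# if view B (SWB-based) is correct ... sacrifices 75% of the good
```

Restrict the menu to views that nearly agree on the ranking and the loss from choosing wrongly does collapse to a few percent; admit views that reverse the ranking and it doesn’t. The 2-5% figure would then tell us about the restriction, not about moral philosophy.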
I understand, of course, that investigating—and, possibly, implementing—additional worldviews is, well, a hassle. But Open Philanthropy is a multi-billion-dollar foundation that’s publicly committed to worldview diversification, and engaging with these questions looks like it would make a practical difference.