Thank you Ben and Sarah for your post. Your commitment to saving and improving the lives of mothers in extreme poverty is very admirable.
> we acknowledge that certain philosophical frameworks that prioritise utility maximisation may disagree with the impactfulness of family planning work.
These uncertainties may matter more than most people realize.
Many EAs believe that creating happy lives is good. Will MacAskill writes that “if your children have lives that are sufficiently good, then your decision to have them is good for them.”[1] Toby Ord writes that “Any plausible account of population ethics will involve…making sacrifices on behalf of merely possible people.”[2] Both Will and Toby place moral weight on the non-person-affecting view, where preventing the creation of a happy person is as bad as killing them!
Using moral uncertainty, let’s say there’s only a 1% chance that the non-person-affecting view is true. In sub-Saharan Africa, 37% of unintended pregnancies end in abortion, leading to 8.0 million abortions per year.[3] This implies there are 21.6 million unintended pregnancies per year, leading to 200k maternal deaths.[4] To prevent one maternal death, one would have to prevent 108 unintended pregnancies on average. Even with only a 1% chance that the non-person-affecting view is true, this intervention is still net negative, because it causes 1.08 deaths (in expectation) to avert one maternal death.
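The arithmetic above can be sketched in a few lines. This is a minimal sketch using the cited figures; the variable names are mine, not from the sources:

```python
# Figures from the cited Guttmacher / Our World in Data estimates
# for sub-Saharan Africa (approximate).
abortions_per_year = 8.0e6   # unintended pregnancies ending in abortion
abortion_fraction = 0.37     # share of unintended pregnancies aborted

# Implied total unintended pregnancies per year (~21.6 million)
unintended_pregnancies = abortions_per_year / abortion_fraction

maternal_deaths = 200_000    # maternal deaths from those pregnancies

# Unintended pregnancies one must prevent to avert one maternal death (~108)
pregnancies_per_death = unintended_pregnancies / maternal_deaths

# Expected deaths per maternal death averted, at 1% credence in the
# non-person-affecting view (~1.08)
credence_npa = 0.01
expected_deaths = credence_npa * pregnancies_per_death

print(round(pregnancies_per_death), round(expected_deaths, 2))
```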
The non-person-affecting view is mainstream among longtermists, and even non-longtermists may be willing to grant a small probability that we should care about future people.
Possible Objections
Some prevented unintended pregnancies will be replaced by others, so preventing one unintended pregnancy doesn’t necessarily mean preventing one future person
This is a legitimate point, and it isn’t factored into the above analysis, but it only changes the maximum credence one can hold in the non-person-affecting view before the intervention becomes net negative: 50% replacement raises the break-even credence to just under 2%; 75% replacement raises it to just under 4%.
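The replacement adjustment can be sketched as follows (a sketch under the assumptions above; with replacement rate r, each prevented pregnancy only removes 1 − r future people in expectation, so the break-even credence is 1 / (108 × (1 − r))):

```python
# ~108 unintended pregnancies prevented per maternal death averted,
# from the estimate above.
PREGNANCIES_PER_DEATH = 108

def breakeven_credence(replacement_rate: float) -> float:
    """Credence in the non-person-affecting view above which the
    intervention becomes net negative, given a replacement rate."""
    return 1 / (PREGNANCIES_PER_DEATH * (1 - replacement_rate))

for r in (0.0, 0.5, 0.75):
    print(f"replacement {r:.0%}: break-even credence {breakeven_credence(r):.2%}")
```

At 0% replacement this gives roughly 0.93%, matching the ~1% threshold used above.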
People in extreme poverty may live net negative lives
This is debatable. In any case, anyone who believes this is also arguing that many EA charities in global health and development are net harmful, because they save the lives of people in extreme poverty. It would likewise make MHI’s purpose of saving the lives of mothers in extreme poverty a bad thing.
Saving human lives is net harmful, because of the meat-eater problem
This is also debatable: humans seem to reduce wild invertebrate populations, and there are many ethical arguments suggesting these invertebrates live net negative lives. There’s some reason to believe this effect dominates our (horrific) treatment of farmed animals. As before, if you believe this, you should object to most EA charities in global health and development, and disagree with MHI’s purpose of saving the lives of mothers in extreme poverty.
How does MHI incorporate moral uncertainty into its analyses of the net impact of its interventions?
MacAskill, W. (2022). What We Owe the Future (p. 250). Basic Books.
Ord, T. (2021). The Precipice (p. 263). Hachette Books.
https://www.guttmacher.org/fact-sheet/abortion-subsaharan-africa
https://ourworldindata.org/saving-maternal-lives
> Both Will and Toby place moral weight on the non-person-affecting view, where preventing the creation of a happy person is as bad as killing them!

I’m not sure supporters of non-person-affecting views would endorse this exact claim, if only because a lot of people would likely be very upset if you killed their friend/family member.
From the perspective of longtermism, it seems plausible to me that countries with very rapidly growing populations that don’t allow women to control whether and when to reproduce may be less politically stable themselves, and may also contribute to increased political instability globally (I have no evidence to support this; happy to be corrected). My intuition is that increasing global political stability and improving quality of life should be a key priority for longtermists over the next hundred years (after reducing x-risk); once this is achieved, more emphasis can be put on increasing population, if humans/posthumans/AGI in the future decide this is a good idea.
>I’m not sure supporters of non-person-affecting views would endorse this exact claim
I’d put it more strongly—I think the original comment puts words in people’s mouths that I don’t think they mean at all.
Hi Julia. Thank you for your charity in our previous interactions.
Please let me know how you feel my comment puts words in people’s mouths. I’ll happily fix or retract any part of that comment which is misleadingly put.
It implies that Will and Toby believe that preventing the creation of a happy person is as bad as killing them. I think that’s pretty unlikely, because most people who value future lives think murdering an existing person is a lot worse than not creating a life.
Thanks for the clarification!
I don’t think my statement that Will and Toby “place moral weight” on the non-person-affecting view implies that they accept all of its conclusions. The statement I made is corroborated by Will and Toby’s own words.
Toby, in collaboration with Hilary Greaves, argues that moral uncertainty “systematically pushes one towards choosing the option preferred by the Total and Critical Level views” as a population’s size increases.[1] If Toby accepts his own argument, this means Toby places moral weight on total utilitarianism, which implies the non-person-affecting view.
Will spends most of Chapter 8 of What We Owe the Future arguing that “all proposed defences of the intuition of neutrality [i.e. person-affecting view] suffer from devastating objections”.[2] Will states that “the view that I incline towards” is to “accept the Repugnant Conclusion”.[3] The most parsimonious view which accepts the Repugnant Conclusion is total utilitarianism, so it’s unsurprising that Will endorses Hilary and Toby’s approach of placing moral weight on total utilitarianism to “end up with a low but positive critical level”.[4]
I don’t think Will and Toby believe that preventing the creation of a happy person is as bad as killing them. (Although I do personally think that’s the logical conclusion of their arguments.) The statement I actually made, that Will and Toby “place moral weight” on that view, seems consistent with their writings and worldviews.
Greaves, H., & Ord, T. ‘Moral uncertainty about population ethics’, Journal of Ethics and Social Philosophy. https://philpapers.org/rec/GREMUA-2
MacAskill, W. (2022). What We Owe the Future. Basic Books. p. 234
Ibid. p. 245
Ibid. p. 250
> I’m not sure supporters of non-person-affecting views would endorse this exact claim, if only because a lot of people would likely be very upset if you killed their friend/family member.

I think this somewhat conflates people’s philosophical views and their gut instincts. (For what it’s worth, I support the non-person-affecting view, and I would endorse that moral claim.) The quote is similar to:
I’m not sure moral universalists would endorse the claim that “killing a stranger causes the same moral harm as killing my friend/family member”, because losing a friend would make them grieve for weeks, but strangers are murdered all the time, and they never cry about it.
I’m not sure utilitarians who care about animals would endorse the claim that “torturing and killing a billion chickens is objectively worse than killing my friend/family member”, because the latter would make them grieve for weeks, but they hardly shed a tear over the former, even though it happens on a weekly basis.
> countries with very rapidly growing populations...contribute to increased political instability globally

I also have a weak intuition that a rapidly growing population contributes to political instability. However, population growth should increase our resilience to disasters, including nuclear war and bio-risk. Population growth also increases economic growth. This EA analysis of the long-term effects of population growth finds it to be net positive, mainly due to its economic effects. Overall, I think the evidence points to population growth being net positive.