> Both Will and Toby place moral weight on the non-person-affecting view, where preventing the creation of a happy person is as bad as killing them!
I’m not sure supporters of non-person-affecting views would endorse this exact claim, if only because a lot of people would likely be very upset if you killed their friend/family member.
From the perspective of longtermism, it seems plausible to me that countries with very rapidly growing populations, and that don’t allow women to control whether and when they reproduce, may be less politically stable themselves and may also contribute to increased political instability globally (I have no evidence to support this; happy to be corrected). My intuition is that increasing global political stability and improving quality of life should be a key priority for longtermists over the next hundred years (after reducing x-risk), and once this is achieved, more emphasis can be put on increasing the population, if humans/posthumans/AGI in the future decide this is a good idea.
> I’m not sure supporters of non-person-affecting views would endorse this exact claim

I’d put it more strongly: I think the original comment puts words in people’s mouths that I don’t think they mean at all.
Hi Julia. Thank you for your charity in our previous interactions.
Please let me know where you feel my comment puts words in people’s mouths. I’ll happily fix or retract any part of that comment that is misleading.
It implies that Will and Toby believe that preventing the creation of a happy person is as bad as killing them. I think that’s pretty unlikely, because most people who value future lives think murdering an existing person is a lot worse than not creating a life.
Thanks for the clarification!
I don’t think my statement that Will and Toby “place moral weight” on the non-person-affecting view implies that they accept all of its conclusions. The statement I made is corroborated by Will and Toby’s own words.
Toby, in collaboration with Hilary Greaves, argues that moral uncertainty “systematically pushes one towards choosing the option preferred by the Total and Critical Level views” as a population’s size increases.[1] If Toby accepts his own argument, then he places moral weight on total utilitarianism, which implies the non-person-affecting view.
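To illustrate the structure of that argument (this is my own sketch, not Greaves and Ord’s exact model): suppose you place credence $p$ on the Total view and $1-p$ on a person-affecting view, and assume the two theories’ value scales are comparable. The expected choiceworthiness of creating $N$ new lives at positive average welfare $\bar{w}$ is then roughly

$$\mathrm{EV}(N) = p \cdot N\bar{w} + (1-p) \cdot 0 = pN\bar{w},$$

since the person-affecting view assigns no value to the act either way. This grows without bound in $N$, so for any $p > 0$, a sufficiently large population makes the Total view dominate the decision.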
Will spends most of Chapter 8 of What We Owe the Future arguing that “all proposed defences of the intuition of neutrality [i.e. the person-affecting view] suffer from devastating objections”.[2] Will states that “the view that I incline towards” is to “accept the Repugnant Conclusion”.[3] The most parsimonious view which accepts the Repugnant Conclusion is total utilitarianism, so it’s unsurprising that Will endorses Hilary and Toby’s approach of placing moral weight on total utilitarianism to “end up with a low but positive critical level”.[4]
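(For readers unfamiliar with the term, a standard formulation of critical-level utilitarianism, my gloss rather than a quotation from the book, values a population as

$$V = \sum_i (w_i - c),$$

where $w_i$ is person $i$’s lifetime welfare and $c$ is the critical level. Setting $c = 0$ gives total utilitarianism, while a low but positive $c$ counts lives barely worth living against the total, blunting the Repugnant Conclusion.)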
I don’t think Will and Toby believe that preventing the creation of a happy person is as bad as killing them. (Although I do personally think that’s the logical conclusion of their arguments.) The statement I actually made, that Will and Toby “place moral weight” on that view, seems consistent with their writings and worldviews.
[1] Greaves, Hilary, and Toby Ord. “Moral uncertainty about population axiology”, Journal of Ethics and Social Philosophy. https://philpapers.org/rec/GREMUA-2
[2] MacAskill, William (2022). What We Owe the Future. Basic Books. p. 234.
[3] Ibid., p. 245.
[4] Ibid., p. 250.
> I’m not sure supporters of non-person-affecting views would endorse this exact claim, if only because a lot of people would likely be very upset if you killed their friend/family member.
I think this somewhat conflates people’s philosophical views with their gut instincts. (For what it’s worth, I support the non-person-affecting view, and I would endorse that moral claim.) The quoted objection is similar to:
> I’m not sure moral universalists would endorse the claim that “killing a stranger causes the same moral harm as killing my friend/family member”, because losing a friend would make them grieve for weeks, but strangers are murdered all the time, and they never cry about it.

> I’m not sure utilitarians who care about animals would endorse the claim that “torturing and killing a billion chickens is objectively worse than killing my friend/family member”, because the latter would make them grieve for weeks, but they hardly shed a tear over the former, even though it happens on a weekly basis.
> countries with very rapidly growing populations...contribute to increased political instability globally
I also have a weak intuition that a rapidly growing population contributes to political instability. However, population growth should increase our resilience to disasters, including nuclear war and bio-risk, and it also drives economic growth. This EA analysis of the long-term effects of population growth finds it to be net positive, mainly due to those economic effects. Overall, I think the evidence points to population growth being net positive.