The total view is not the only view on which future good lives starting has moral value. You can also hold that future lives starting matters morally if you believe in (amongst other things):
- Maximizing average utility across all people who ever live, in which case future people coming into existence is good if their level of well-being is above the mean level of well-being of the people before them (a quick check of this arithmetic follows the list).
- A view on which adding happy lives gets less and less valuable the more happy people have lived, but never reaches zero value (one possible formalisation appears in the sketch below). This may help with avoiding the repugnant conclusion.
- A view like the previous one on which both the total amount of utility and how fairly it is distributed matter. More utility is always in itself better, so adding happy people is always intrinsically good, but a population with less total utility and a fairer distribution can sometimes be better than a population with more utility distributed less fairly.
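Since the average view's condition is just arithmetic, here is a minimal worked check. The symbols n, W, w, and r are my own illustrative notation, not anything from the views as usually stated:

```latex
% Average view: adding a person raises the average exactly when their
% welfare exceeds the prior mean. With n existing people, total welfare W,
% and mean A = W/n, adding one person with welfare w gives:
\[
  \frac{W + w}{n + 1} > \frac{W}{n}
  \;\Longleftrightarrow\;
  nW + nw > nW + W
  \;\Longleftrightarrow\;
  w > \frac{W}{n} = A.
\]
% Diminishing-value view (one possible formalisation): weight the k-th
% happy life by r^{k-1} for some 0 < r < 1. Every added life contributes
% positive value (never zero), yet total value stays bounded:
\[
  \sum_{k=1}^{\infty} r^{\,k-1} = \frac{1}{1 - r},
\]
% so arbitrarily many barely-happy lives cannot outweigh a sufficiently
% good smaller population, which is one way such a view can resist the
% repugnant conclusion.
```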
This isn't just nitpicking: the total view is extreme in various ways that the mere idea that happy people coming into existence is good is not.
Also, even if you reject the view that creating happy people is intrinsically valuable, you might want to ensure there are happy people in the future just to satisfy the preferences of current people, most of whom probably have at least some desire for happy descendants of at least one of their family, their culture, or humanity as a whole. It is true, though, that this won't get you the view that preventing extinction is astronomically valuable.
Thanks for the considered response. You're right that the Total View is not the only view on which future good lives have moral value (though it does seem to be the main one bandied about). Perhaps I should have written "I don't subscribe to the idea that adding happy people is intrinsically good in itself", as I think that better reflects my position: I subscribe to the Person-Affecting View (PAV).
The reason I prefer the PAV is not the repugnant conclusion (which I don't actually find "repugnant") but rather the problem of existence comparativism: I don't think that, for a given person, existing can be better or worse than not existing.
Given my PAV, I agree with your last point that there is some moral value in ensuring there are happy people in the future, if that would satisfy the preferences of current people. But in my experience, most people have very weak preferences for the continued existence of "humanity" as a whole. Most people seem very concerned about the immediate impacts on those within their moral circle (i.e. themselves and their children, maybe grandchildren), but not much beyond that. So on that basis, I don't think reducing extinction risk will beat out increasing the value of futures where we survive.
To be clear, I don't object to the extinction risk work EA endorses that is robustly good on a variety of worldviews (e.g. preventing all-out nuclear war is great on the PAV, too). But I don't have a problem with humans or digital minds going extinct per se. For example, if humans went extinct because of declining fertility rates (which I don't think is likely), I wouldn't see that as a big moral catastrophe requiring intervention.
"I don't think that, for a given person, existing can be better or worse than not existing."
Presumably, even given this, you wouldn't create a person who would spend their entire life in terrible agony, begging for death. If that can be a bad thing to do even though existing can't be worse than not existing, then why can't it be a good thing to create happy people, even though existing can't be better than not existing?
No, I wouldn't create a person who would spend their entire life in agony. But I think the reason many people, including myself, hold the PAV despite the procreation asymmetry is that we recognise that, in real life, two things are separate: (1) creating a person; (2) making that person happy. I disagree that (1) alone is good. At best, it is neutral. I only think that (2) is good.
If I were to create a child and abandon it, I do not think that is better than not creating the child in the first place. That is true even if the child ends up happy for whatever reason (e.g. it is adopted by a great parent).
In contrast, it is indeed possible to create a child who would spend their entire life in agony. In fact, if I created a child and did nothing more, that child's life would likely be miserable and short. So I see an asymmetric preference to avoid creating unhappy lives, without wanting to create happy lives, as entirely reasonable.
Moreover, I do not think moral realism is correct, and I see different views in population ethics as subjective. They depend on each person's intrinsic values. And no intrinsic values are logical: logic can help you find ways to achieve your intrinsic values, but it cannot tell you what your intrinsic values should be. Logic is a powerful tool, but it has limits. I think it is important to recognise where logic can help, and where it can't.