I think humans will go extinct at some point, so reducing extinction risk just kicks the can down the road.
On a selfish level, I don’t want humans to go extinct anytime soon. But on an impartial level, I don’t really care whether humans go extinct, say, 500 years from now vs 600. I don’t subscribe to the Total View of population ethics, so I don’t place moral value on the “possible lives that could have existed” in those extra 100 years.
The total view is not the only view on which future good lives starting has moral value. You can also think that if you believe in (amongst other things):
-Maximizing average utility across all people who ever live, in which case future people coming into existence is good if their level of well-being is above the mean level of well-being of the people before them.
-A view on which adding happy lives gets less and less valuable the more happy people have lived, but never reaches zero. (Possibly helpful with avoiding the repugnant conclusion.)
-A view, like the previous one, on which adding happy people is always intrinsically good, but where both the total amount of utility and how fairly it is distributed matter: more utility is always in itself better, yet a population with less total utility and a fairer distribution can sometimes be better than a population with more utility, less fairly distributed.
This isn’t just nitpicking: the total view is extreme in various ways that the mere idea that happy people coming into existence is good is not.
Also, even if you reject the view that creating happy people is intrinsically valuable, you might want to ensure there are happy people in the future just to satisfy the preferences of current people, most of whom probably have at least some desire for happy descendants of their family, their culture, or humanity as a whole. It is true, though, that this won’t get you the view that preventing extinction is astronomically valuable.
Thanks for the considered response. You’re right that the Total View is not the only view on which future good lives have moral value (though it does seem to be the main one bandied about). Perhaps I should have written “I don’t subscribe to the idea that adding happy people is intrinsically good in itself” as I think that better reflects my position — I subscribe to the Person-Affecting View (PAV).
The reason I prefer the PAV is not because of the repugnant conclusion (which I don’t actually find “repugnant”) but more the problem of existence comparativism — I don’t think that, for a given person, existing can be better or worse than not existing.
Given my PAV, I agree with your last point that there is some moral value to ensuring happy people in the future, if that would satisfy the preferences of current people. But in my experience, most people seem to have very weak preferences for the continued existence of “humanity” as a whole. Most people seem very concerned about the immediate impacts on those within their moral circle (i.e. themselves and their children, maybe grandchildren), but not that much beyond that. So on that basis, I don’t think reducing extinction risk will beat out increasing the value of futures where we survive.
To be clear, I don’t have an objection to the extinction risk work EA endorses that is robustly good on a variety of worldviews (e.g. preventing all-out nuclear war is great on the PAV, too). But I don’t have a problem with humans or digital minds going extinct per se. For example, if humans went extinct because of declining fertility rates (which I don’t think is likely), I wouldn’t see that as a big moral catastrophe that requires intervention.
“I don’t think that, for a given person, existing can be better or worse than not existing.”
Presumably, even given this, you wouldn’t create a person who would spend their entire life in terrible agony, begging for death. If that can be a bad thing to do even though existing can’t be worse than not existing, then why can’t it be a good thing to create happy people, even though existing can’t be better than not existing?
No, I wouldn’t create a person who would spend their entire life in agony. But I think the reason many people, including myself, hold the PAV despite the procreation asymmetry is that we recognise that, in real life, two things are separate: (1) creating a person; (2) making that person happy. I disagree that (1) alone is good. At best, it is neutral. I only think that (2) is good.
If I were to create a child and abandon it, I do not think that is better than not creating the child in the first place. That is true even if the child ends up being happy for whatever reason (e.g. it ends up being adopted by a great parent).
In contrast, it is indeed possible to create a child who would spend their entire life in agony. In fact, if I created a child and did nothing more, that child’s life would likely be miserable and short. So I see an asymmetric preference to avoid creating unhappy lives, without wanting to create happy lives, as entirely reasonable.
Moreover, I do not think moral realism is correct, and I see the different views of population ethics as subjective: they depend on each person’s intrinsic values, and intrinsic values are not themselves dictated by logic. Logic can help you find ways to achieve your intrinsic values, but it cannot tell you what those values should be. Logic is a powerful tool, but it has limits, and I think it is important to recognise where it can help—and where it can’t.