It’s great to have a short description of the difficulties for person-affecting intuitions!
Any reasonable theory of population ethics must surely accept that C is better than B. C and B contain all of the same people, but one of these people is significantly better off in C (with all the others equally well off in both cases). Invoking a person-affecting view implies that B and C are equally good, but this is clearly wrong.
That's a good argument. Still, I find person-affecting views underrated, because I suspect that many people have not given much thought to whether it even makes sense to treat population ethics in the same way as other ethical domains.
Why do we think we have to be able to rate all possible world states according to how impartially good or bad they are? Population ethics seems underspecified on exactly the dimension from which many moral philosophers derive “objective” principles: others’ interests. It’s the one ethical discipline where others’ interests are not fixed. The principles that underlie preference utilitarianism aren’t sufficiently far-reaching to specify what to do with newly created people. And preference utilitarianism is itself incomplete, because of the further question: What are my preferences? (If everyone’s preference were to be a preference utilitarian, we’d all be standing around waiting until someone has a problem or forms a preference that’s different from selflessly adhering to preference utilitarianism.)
Preference utilitarianism seems like a good answer to some important questions that fall under the “morality” heading. But it can’t cover everything. Population ethics is separate from the rest of ethics.
And there’s an interesting relation between how we choose to conceptualize population ethics and how we then come to think about “What are my life goals?”
If we think population ethics has a uniquely correct solution that ranks all world states without violations of transitivity or other, similar problems, we have to think that, in some way, there’s a One Compelling Axiology telling us the goal criteria for every sentient mind. That axiology would specify how to answer “What are my life goals?”
By contrast, if axiology is underdetermined, then different people can rationally adopt different types of life goals.
I self-identify as a moral anti-realist because I’m convinced there’s no One Compelling Axiology. Insofar as there’s something fundamental and objective to ethics, it’s this notion of “respecting others’ interests.” People’s life goals (their “interests”) won’t converge.
Some people take personal hedonism as their life goal, some just want to Kill Bill, some want to have a meaningful family life and die of natural causes here on Earth, some don’t think about the future at all and live the party life, some discount any aspirations of personal happiness in favor of working toward positively affecting transformative AI, some want to live forever but also do things to help others realize their dreams along the way, some just want to become famous, and so on.
If you think of humans as the biological algorithm we express, rather than as the things we come to believe and identify with at some particular point in our biography (based on what we’ve lived through), then you might be tempted to seek a One Compelling Axiology by asking “What’s the human policy?” (“Policy” in analogy to machine learning.) For instance, you could plan to devote the future’s large-scale simulation resources to figuring out the structure of what different humans come to value in different simulated environments, with different experienced histories. You could do science about this and identify general patterns. But suppose you’ve figured out the general patterns and tell the result to the Bride in Kill Bill. You tell her, “The objective human policy is X.” She might reply, “Hold off on your philosophizing, I’m going to have to kill Bill first. Maybe I’ll come back to you and consider doing X afterwards.” Similarly, if you tell a European woman with a husband and children about the arguments for moving to San Francisco to work on reducing AI risks, because that’s what she ended up caring about in many runs of simulations of her in environments where she had access to all the philosophical arguments, she might say, “Maybe I’d be receptive to that in another life, but I love my husband in this world here, and I don’t want to uproot my children, so I’m going to stay here and devote less of my caring capacity to longtermism. Maybe I’ll consider wanting to donate 10% of my income, though.” So, regardless of questions about their “human policy,” in terms of what actual people care about at given points in time, life goals may differ tremendously between people, and even between copies of the same person in different simulated environments. That’s because life goals also track things that relate to the identities we have adopted and the social connections we have made that are meaningful to us.
If you say that population ethics is all-encompassing, you’re implicitly saying that all the complexities in the above paragraphs count for nothing (or not much), and that people should just adopt the same types of life goals, no matter their level of novelty-seeking, achievement striving, prosociality, embeddedness in meaningful social connections, views on death, etc. You’re implicitly saying that the way the future should ideally go has almost nothing to do with the goals of presently existing people. To me, that stance is more incomprehensible than some problem with transitivity.
Alternatively, you can say that maybe all of this can’t be put under a single impartial utility function. If so, it seems that you’re correct that you have to accept something similar to the violation of transitivity you describe. But is it really so bad if we look at it with my framing?
It’s not “Even though there’s a One Compelling Axiology, I’ll go ahead and decide to do the grossly inelegant thing with it.” Instead, it’s “Ethics is about life goals and how to relate to other people with different life goals, as well as asking what types of life goals are good for people. Probably, different life goals are good for different people. Therefore, as long as we don’t know which people exist, not everything can be determined. There also seems to be a further issue about how to treat cases where we create new people: that’s population ethics, and it’s a bit underdetermined, which gives more freedom for us to choose what to do with our future lightcone.”
So, I propose considering a more limited role for population ethics than the one it is typically given. We could maybe think of it as a set of appeals or principles by which beings can hold accountable the decision-makers who created them. This places some constraints on the already existing population, but it leaves room for personal life projects (as opposed to a “dictatorship of the future,” where all our choices about the future light cone are predetermined by the One Compelling Axiology, and so have no relation to which exact people are actually alive and what they care about).
To give a few examples of population-ethical principles:
All else equal, it seems objectionable on other-regarding grounds to create minds that lament their existence.
It also seems objectionable, all else equal, to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have provided them with better circumstances.
Likewise, it seems objectionable, all else equal, to create minds destined for constant misery, yet with a strict preference for existence over non-existence.
(Note that the first principle is about objecting to the fact of being created, while the latter two principles are about objecting to how one was created.)
We can also ask: Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?
(From a preference-utilitarian perspective, it seems left open whether the creation of some types of minds can be intrinsically important. Satisfied preferences are good because satisfying preferences is just what it means to consider the interests of others. Also counting the interests of not-yet-existent beings is a possible extension of that, but a somewhat peculiar one. The choice looks underdetermined, again.)
Ironically, the perspective I have described becomes very similar to how non-philosophers commonly think about the ethics of having children:
Parents are obligated to provide a very high standard of care for their children (universal principle)
People are free to decide against becoming parents (personal principle)
Parents are free to want to have as many children as possible (personal principle), as long as the children are happy in expectation (universal principle)
People are free to try to influence other people’s stances and parenting choices (personal principle), as long as they remain within the boundaries of what is acceptable in a civil society (universal principle)
Universal principles fall out of considerations about respecting others’ interests. Personal principles fall out of considerations about “What are my life goals?”
Personal principles can be inspired by considerations of morality, i.e., they can be about choosing to give stronger weight to universal principles and filling out underdetermined stuff with one’s most deeply held moral intuitions. Many people find existence meaningless without dedication to something greater than oneself.
Because there are different types of considerations at play in all of this, there’s probably no super-elegant way to pack everything into a single, impartial utility function. There will have to be some messy choices about how to make tradeoffs, but there isn’t really a satisfying alternative. Just as people have to choose some arbitrary-seeming percentage of how much caring capacity they dedicate toward self-oriented life goals versus other-regarding ones (insofar as that separation is even clean, which it often isn’t), we also have to somehow choose how much weight to give to different moral domains, including the considerations commonly discussed under the heading of population ethics, and how they relate to our own life goals and those of other existing people.
Thanks. There’s a lot to digest there. It’s an interesting idea that population ethics is simply separate from the rest of ethics. That’s something I want to think about a bit more.