I tend to think that the arguments against any theory of the good that encodes the intuition of neutrality are extremely strong. Here’s one that I think I owe to Teru Thomas (who may have got it from Tomi Francis?).
Imagine the following outcomes, A—D, where the columns are possible people, the numbers represent the welfare of each person when they exist, and # indicates non-existence.
A 5 −2 # #
B 5 # 2 2
C 5 2 2 #
D 5 −2 7 #
I claim that if you think it’s neutral to make happy people, there’s a strong case that you should think that B isn’t better than A. In other words, it’s not better to prevent someone from coming to exist and enduring a life that’s not worth living if you simultaneously create two people with lives worth living. And that’s absurd. This verdict is also especially hard to believe if you accept the other side of the asymmetry: that it’s bad to create people whose lives are overwhelmed by suffering.
Why is there pressure on you to accept that B isn’t better than A? Well, first off, it seems plausible that B and C are equally good, since they have the same number of people at the same welfare levels. So let’s assume this is so.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C, since all the same people exist in these outcomes, and there’s more total/average welfare. (I’m pretty sure you can support this kind of verdict on weaker assumptions if necessary.)
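Concretely, summing the welfare numbers in the table:
\[
W(C) = 5 + 2 + 2 = 9, \qquad W(D) = 5 - 2 + 7 = 10,
\]
so total (and hence average) welfare is higher in D.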
So let’s suppose, on this basis, that D is better than C. B and C are equally good. I assume it follows that D is better than B.
Suppose that B were better than A. Since D is better than B, it would follow that D is better than A as well. But we know this can’t be so, if it’s neutral to make happy people, because D and A differ only in the existence of an extra person who has a life worth living. So the neutrality principle, together with the premises above, entails that B isn’t better than A. But it’s absurd to think that B isn’t better than A.
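Schematically (with “≻” for “is better than” and “∼” for “is equally good as”), the steps are:
\[
\begin{aligned}
&\text{1. } B \sim C &&\text{(same number of people at the same welfare levels)}\\
&\text{2. } D \succ C &&\text{(fixed-population utilitarianism: more total/average welfare)}\\
&\text{3. } D \succ B &&\text{(from 1 and 2)}\\
&\text{4. Suppose } B \succ A.\ \text{Then } D \succ A &&\text{(from 3, by transitivity)}\\
&\text{5. } D \not\succ A &&\text{(neutrality: D differs from A only by adding a person with a life worth living)}\\
&\text{6. So } B \not\succ A &&\text{(4 contradicts 5, so the supposition in 4 fails)}
\end{aligned}
\]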
Arguments like this make me feel pretty confident that the intuition of neutrality is mistaken.
I go back and forth on whether “the good” exists, understood as my subjective ordering over each set of outcomes (or each set of outcome distributions). This example seems pretty compelling against it.
However, I’m primarily concerned with “good/bad/better/worse to someone” or “good/bad/better/worse from a particular perspective”. Ethics, then, is about doing better by these perspectives and managing tradeoffs between them, including as they change (e.g. as additional perspectives come into existence through additional moral patients). This is what my sequence is about. Whether “the good” exists doesn’t seem very important.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C
If we imagine that world C already exists, then yeah, we should try to change C into D. (Similarly, if world D already exists, we’d want to prevent changes from D to C.)
So, if either of the two worlds already exists, D>C.
Where your setup of the argument becomes controversial, though, is in suggesting that “D>C” holds in some absolute sense, as opposed to holding only (in virtue of how it better fulfills the preferences of existing people) under the stipulation that we start out in one of the worlds that already contains all the relevant people.
Let’s think about the case where no one exists so far and we’re the population planners for a new planet that we can shape into either C or D. (In that scenario, there’s no relevant difference between B and C, btw.) I’d argue that both options are now equally defensible, because the interests of possible people are underdefined* and there are defensible personal stances on population ethics for justifying either.**
*The interests of possible people are underdefined not just because it’s open how many people we might create. In addition, it’s also open who we might create: some human psychological profiles are such that, when someone’s born into a happy/privileged life, they adopt a Buddhist stance towards existence and think of themselves as not having benefited from being born. Other psychological profiles are such that people do think of themselves as grateful and lucky for having been born. (In fact, yet others even claim that they’d consider themselves lucky/grateful even if their lives consisted of nothing but torture.) These varying intuitions towards existence can inspire people’s population-ethical leanings. But there’s no fact of the matter about “which intuitions are more true.” These are just different interpretations of the same sets of facts. There’s no uniquely correct way to approach population ethics.
**Namely, C is better on anti-natalist harm-reduction grounds (at least depending on how we interpret the negative numbers on the scale), whereas D is better on totalist grounds.
All of that was assuming that C and D are the only options. If we add a third alternative, say, “create no one,” the ranking between C and D (which were previously equally defensible) can change.
At this point, the moral realist proponents of an objective “theory of the good” might shriek in agony and think I have gone mad. But hear me out. It’s not crazy at all to think that choices depend on the alternatives we have available. If we also get the option “create no one,” then I’d say C becomes worse than the other two options, because there’s no approach to population ethics according to which C is optimal among the three. My person-affecting stance on population ethics says that we’re free to do a bunch of things, but the one thing we cannot do is act with negligent disregard for the interests of potential people/beings.
Why? Essentially for similar reasons to why common-sense morality says that struggling lower-class families are permitted to have children whom they raise under hardship with little means (assuming their lives are still worth living in expectation), but if a millionaire were to do the same to their child, they’d be an asshole. The fact that the millionaire has the option “give my child enough resources to have a high chance at happiness” makes it worse if they then proceed to give their child hardly any resources at all. Bringing people into existence makes you responsible for them. If you have the option to make your children really well off, but you decide not to do that, you’re not taking the interests of your child into consideration, which is bad. (Of course, if the millionaire donates all their money to effective causes and then raises a child in relative poverty, that’s acceptable again.)
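To make the menu-dependence explicit: writing P(menu) for the set of defensible options given a menu (purely illustrative notation), the picture I’m sketching is:
\[
P(\{C, D\}) = \{C, D\}, \qquad P(\{C, D, \text{create no one}\}) = \{D, \text{create no one}\}.
\]
Adding the third option removes C from the defensible set, even though nothing about C and D considered on their own has changed.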
I think where the proponents of an objective theory of the good go wrong is the idea that you keep score on the same objective scoreboard no matter whether it concerns existing people or potential people. But those are not commensurate perspectives. The whole idea of an “objective axiology/theory of the good” is dubious to me, and trying to squeeze these perspectives under one umbrella has pretty counterintuitive implications. As I wrote elsewhere:
There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”
Many effective altruists hesitate to say, “One of you must be wrong!” when one person cares greatly about living forever while the other doesn’t. By contrast, when two people disagree on population ethics, “One of you must be wrong!” seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree that people are confident that life goals are up to the individual to decide/pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way – leaning in on the belief “people are free to choose their life goals” and giving up on “there’s an axiology that applies to everyone” – makes my framework more intuitive and gives a better sense of what the framework is for and what it’s trying to accomplish.
Here’s a framework for doing population ethics without an objective axiology. In this framework, person-affecting views seem quite intuitive because we can motivate them as follows:
“Preference utilitarianism for existing (and sure-to-exist) people/beings, but subject to also giving some consideration—in ways that aren’t highly demanding—to the (underdefined) interests of possible people/beings.”
That’s a common approach in situations where there are two possible things to value and care about, but someone primarily chooses one of them, as opposed to crafting a theory that unifies both of these things and stipulates tradeoffs for all situations. For instance, on the topic of “do you care about yourself or everyone?”, a self-oriented individual will choose to mostly benefit themselves, but they might feel like they should take low-demanding effective altruist actions.
So, when figuring out how to do “systematized altruism,” someone can decide that, on their interpretation of the concept (note that there is no uniquely correct answer here!), “systematized altruism” should incorporate the slogan Michael mentioned earlier, “make people happy, not make happy people.” So the person focuses on doing what existing people want. However, when it comes to possible people/beings, instead of going “anything goes,” they still want to follow low-effort ways of doing good according to the interests of possible people/beings, at least in the sense of “don’t take actions that violate what possible people/beings could agree on as an interest group.” (And it’s not the case that they’d all want to be born, because, like I said, there are psychological profiles of possible people/beings that prefer no chance of being born if there’s a chance of suffering. But maybe you could argue that if you have the chance to create happy people at no cost to yourself and no risk of suffering, it would be bad not to take that chance, since at least some possible people/beings would put their weight behind that.)
I’ll get back to you on this, since I think this will take me longer to answer and can get pretty technical.