The short answer:

Thinking in terms of “something has intrinsic value” privileges particular answers. For instance, in this comment today, MichaelPlant asked Magnus the following:
[...] why do we have reason to prevent what is bad but no reason to bring about what is good?”
The comment presupposes that there’s “something that is bad” and “something that is good” (in a sense independent of particular people’s judgments – this is what I meant by “objective”). If we grant this framing, any arguments for why “create what’s good” is less important than “don’t create what’s bad” will seem ad hoc!
Instead, for people interested in exploring person-affecting intuitions (and possibly defending them), I recommend taking a step back to investigate what we mean when we say things like “what’s good” or “something has intrinsic value.” I think things are good when they’re connected to the interests/goals of people/beings, but not in some absolute sense that goes beyond it. In other words, I only understand the notion of (something like) “conditional value,” but I don’t understand “intrinsic value.”
The longer answer:
Here’s a related intuition:
There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”
Many effective altruists hesitate to say, “One of you must be wrong!” when one person cares greatly about living forever while the other doesn’t. By contrast, when two people disagree on population ethics, “One of you must be wrong!” seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree that people are confident that life goals are up to the individual to decide/pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way – leaning in on the belief “people are free to choose their life goals” and giving up on “there’s an axiology that applies to everyone” – makes my framework more intuitive and gives a better sense of what the framework is for and what it’s trying to accomplish.
In my post, “Population Ethics Without [an Objective] Axiology,” I defended a specific framework for thinking about population ethics. From the post:

If there were an objective axiology, I might be making a mistake in how I plan to live a fulfilled, self-oriented life. Namely, if the way I choose to live my life doesn’t give sufficient weight to things that are intrinsically good according to the objective axiology, then I’m making some kind of mistake. I think it’s occasionally possible for people to make “mistakes” about their goals/values if they’re insufficiently aware of alternatives and would change their minds if they knew more, etc. However, I don’t think it’s possible for truly well-informed reasoners to be wrong about what they deeply care about, and I don’t think “becoming well-informed” leads to convergence of life goals among people/reasoners.
I’d say that the main force behind arguments against person-affecting views in population ethics is usually something like the following:
“We want to figure out what’s best for morally relevant others. Well-being differences in morally relevant others should always matter – if they don’t matter on someone’s account, then this particular account couldn’t be concerned with what’s best for morally relevant others.”
As you know, person-affecting views tend to say things like “it’s neutral to create a perfect life and (equally) neutral to create a merely quite good life.” (Or they may say that whether to create a specific life depends on what other options we have available, thereby violating the axiom of independence of irrelevant alternatives.)
These features of person-affecting views show that well-being differences don’t always matter on those views. Some people will interpret this as “person-affecting views are incompatible with the goal of ethics – figuring out what’s best for morally relevant others.”
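To make the critic’s observation concrete, here’s a deliberately crude numerical sketch (my own toy illustration, not anyone’s worked-out theory; the welfare numbers, option names, and the rule of simply ignoring merely possible people are all assumptions made for this example):

```python
# Two options that differ only in the welfare level of one newly created person.
# All numbers are on an arbitrary, made-up scale.
OPTIONS = {
    "create a perfect life": {"existing_deltas": [0, 0], "new_people": [100]},
    "create a merely quite good life": {"existing_deltas": [0, 0], "new_people": [60]},
}

def totalist_score(option):
    # Totalist-style scoring: the welfare of newly created people counts fully.
    return sum(option["existing_deltas"]) + sum(option["new_people"])

def crude_person_affecting_score(option):
    # Crude person-affecting scoring: only effects on existing
    # (or sure-to-exist) people count; merely possible people are ignored.
    return sum(option["existing_deltas"])

for name, option in OPTIONS.items():
    print(f"{name}: totalist={totalist_score(option)}, "
          f"person-affecting={crude_person_affecting_score(option)}")

# The totalist scores differ (100 vs. 60), but the crude person-affecting scores
# are tied (0 vs. 0): the well-being difference between the two possible lives
# never registers, which is exactly the feature the critic points to.
```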
However, all of this is begging the question. Who says that the same ethical rules should govern existing (and sure-to-exist) people/beings as well as possible people/beings? If there’s an objective axiology, it’s implicit that the same rules would apply (why wouldn’t they?). But without an objective axiology, all we’re left with is the following:
Ethics is about interests/goals.
Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone’s interests/goals.
The rule “focus on interests/goals” has comparatively clear implications in fixed population contexts. The minimal morality of “don’t be a jerk” means we shouldn’t violate others’ interests/goals (and perhaps should even help them where it’s easy and where we have a comparative advantage). The ambitious morality of “do the most moral/altruistic thing” has a lot of overlap with something like preference utilitarianism. (Though there are instances where people’s life goals are under-defined, in which case people with different takes on “do the most moral/altruistic thing” may wish to fill in the gaps according to subjectivist “axiologies” that they endorse.)
On creating new people/beings, “focus on interests/goals” no longer gives unambiguous results:
(1) The number of interests/goals isn’t fixed
(2) The types of interests/goals aren’t fixed
This leaves population ethics under-defined with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls.
So, without an objective axiology, there are these two separate perspectives. We can view person-affecting views as making the following statement:
“‘Doing the most moral/altruistic thing’ isn’t about creating new people with new interests/goals. Instead, it’s about benefitting existing (or sure-to-exist) people/beings according to their interests/goals.”
In other words, person-affecting views concentrate their caring budget on one of two possible perspectives (instead of trying to design an axiology that incorporates both). That seems like a perfectly defensible approach to me!
Still, we’re left with the question, “If your view focuses on existing (and sure-to-exist) people, why is it bad to create a miserable person?”
Someone with person-affecting views could reply as follows:
“While I concentrate my caring budget on one perspective (existing and sure-to-exist people/beings), that doesn’t mean my concern for the interests of possible people/beings is zero. My approach to dealing with merely possible people is essentially ‘don’t be a jerk.’ That’s exactly why I’m sometimes indifferent between creating a medium-happy possible person and a very happy possible person. I understand that the latter is better for possible people/beings, but since I concentrate my caring budget on existing (and sure-to-exist) people/beings, bringing the happier person into existence usually isn’t a priority to me. Lastly, you’re probably going to ask why my notion of ‘don’t be a jerk’ is asymmetric – i.e., why not ‘don’t be a jerk’ by creating people who would be grateful to be alive (at least in instances where it’s easy/cheap to do so)? To this, my reply is that creating a specific person singles out that person (from the sea of possible people/beings) in a way that not creating them does not. There’s no answer to ‘What do possible people/beings want?’ that applies to all conceivable beings, so I cannot do right by all of them, anyway. By not giving an existence slot to someone who would be grateful to exist, I admit that I’m arguably failing to benefit a particular subset of possible people/beings (the ones who would be grateful to get the slot). Still, other possible people/beings don’t mind not getting the spot, so there’s at least a sense in which I didn’t disrespect possible people/beings as a whole interest group. By contrast, if I create someone who hates being alive, saying ‘Other people would be grateful in your spot’ doesn’t seem like a defensible excuse. ‘Not creating happy people’ only means I’m not giving maximum concern to possible people/beings, whereas ‘creating a miserable person’ means I’m flat-out disrespecting someone specific, whom I chose to ‘highlight’ from the sea of all possible people/beings (in the most real sense) – there doesn’t seem to be a defensible excuse for that.”
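To show the shape of the asymmetry in this hypothetical reply, here is another toy sketch of my own (the zero threshold stands in for “a life the person themselves would reject,” and the welfare numbers are made up):

```python
def violates_minimal_morality(created_people_welfare):
    """Toy version of the asymmetric 'don't be a jerk' constraint toward merely
    possible people: creating a specific person whose life falls below zero counts
    as disrespecting that person, whereas leaving happy possible people uncreated
    never triggers the constraint."""
    return any(welfare < 0 for welfare in created_people_welfare)

print(violates_minimal_morality([]))     # create nobody             -> False
print(violates_minimal_morality([80]))   # create a grateful person  -> False
print(violates_minimal_morality([-40]))  # create a miserable person -> True
```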
The long answer: My post, Population Ethics Without [an Objective] Axiology: A Framework.

I’m not sure I really follow (though I admit I’ve only read the comment, not the post you’ve linked to). Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn’t automatically do that, so there’s no general reason to add happy people if it doesn’t satisfy a preference of someone who is here already? Couldn’t you show that adding suffering people isn’t automatically bad by the same reasoning, since it doesn’t necessarily violate an existing preference? (Also, on the word “objective”: you can definitely have a view of morality on which satisfying existing preferences or doing what people value is all that matters, but on which it is mind-independently true that this is the correct morality. That makes it a realist view as academic philosophers classify things, and hence a view on which morality is objective in one sense of “objective”. Hence why I think “objective” should be tabooed.)
Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn’t automatically do that, so there’s no general reason to add happy people if it doesn’t satisfy a preference of someone who is here already?
Pretty much, but my point is only that this is a perfectly defensible way to think about population ethics, not that I expect everyone to find it compelling over alternatives.
As I say in the longer post:
Just like the concept “athletic fitness” has several defensible interpretations (e.g., the difference between a 100m sprinter and a marathon runner), so (I argue) does “doing the most moral/altruistic thing.”
I agree with what you write about “objective” – I’m guilty of violating your advice.
(That said, I think there’s a sense in which preference utilitarianism would be unsatisfying as a “moral realist” answer to all of ethics because it doesn’t say anything about what preferences to adopt. Or, if it did say what preferences to adopt, then it would again be subject to my criticism – what if objective preference utilitarianism says I should think of my preferences in one particular way but that doesn’t resonate with me?)
Couldn’t you show that adding suffering people isn’t automatically bad by the same reasoning, since it doesn’t necessarily violate an existing preference?
I tried to address this in the last paragraph of my previous comment. It gets a bit complicated because I’m relying on a distinction between “ambitious morality” and “minimal morality” (= “don’t be a jerk”), which also only makes sense if there’s no objective axiology.
I don’t expect the following to be easily intelligible to people used to thinking within the moral realist framework, but for more context, I recommend the section “minimal morality vs. ambitious morality” here. That section explains why I think it makes sense to distinguish between minimal morality and ambitious morality instead of treating all of morality as the same thing. (“Care morality” vs. “cooperation morality” is a similar framing, which probably tells you more about what I mean here.) And the last paragraph of my previous comment already explained why I think minimal morality contains a population-ethical asymmetry.