What exactly do you mean by "have an objective axiology" and why do you think it makes it (distinctively) hard to defend asymmetry? (I have an eccentric philosophical view that the word "objective" nearly always causes more trouble than it's worth and should be tabooed.)
The short answer:
Thinking in terms of "something has intrinsic value" privileges particular answers. For instance, in this comment today, MichaelPlant asked Magnus the following:
[...] why do we have reason to prevent what is bad but no reason to bring about what is good?
The comment presupposes that there's "something that is bad" and "something that is good" (in a sense independent of particular people's judgments; this is what I meant by "objective"). If we grant this framing, any arguments for why "create what's good" is less important than "don't create what's bad" will seem ad hoc!
Instead, for people interested in exploring person-affecting intuitions (and possibly defending them), I recommend taking a step back to investigate what we mean when we say things like "what's good" or "something has intrinsic value." I think things are good when they're connected to the interests/goals of people/beings, but not in some absolute sense that goes beyond those interests/goals. In other words, I only understand the notion of (something like) "conditional value," but I don't understand "intrinsic value."
The longer answer:
Here's a related intuition:
There's a tension between the beliefs "there's an objective axiology" and "people are free to choose their life goals."
Many effective altruists hesitate to say "One of you must be wrong!" when one person cares greatly about living forever and another doesn't. By contrast, when two people disagree on population ethics, "One of you must be wrong!" seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree that people are confident that life goals are up to the individual to decide and pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way (leaning in on the belief "people are free to choose their life goals" and giving up on "there's an axiology that applies to everyone") makes my framework more intuitive and gives a better sense of what the framework is for and what it's trying to accomplish.
In my post, "Population Ethics Without [an Objective] Axiology," I defended a specific framework for thinking about population ethics. From the post:
If there were an objective axiology, I might be making a mistake in how I plan to live a fulfilled, self-oriented life. Namely, if the way I chose to live my life doesn't give sufficient weight to things that are intrinsically good according to the objective axiology, then I'm making some kind of mistake. I think it's occasionally possible for people to make "mistakes" about their goals/values if they're insufficiently aware of alternatives and would change their minds if they knew more, etc. However, I don't think it's possible for truly well-informed reasoners to be wrong about what they think they deeply care about, and I don't think "becoming well-informed" leads to convergence of life goals among people/reasoners.
I'd say that the main force behind arguments against person-affecting views in population ethics is usually something like the following:
"We want to figure out what's best for morally relevant others. Well-being differences in morally relevant others should always matter; if they don't matter on someone's account, then this particular account couldn't be concerned with what's best for morally relevant others."
As you know, person-affecting views tend to say things like "it's neutral to create the perfect life and (equally) neutral to create a merely quite good life." (Or they may say that whether to create a specific life depends on the other options we have available, thereby violating the axiom of independence of irrelevant alternatives.)
These features of person-affecting views show that well-being differences don't always matter on those views. Some people will interpret this as "person-affecting views are incompatible with the goal of ethics: figuring out what's best for morally relevant others."
However, all of this is begging the question. Who says that the same ethical rules should govern existing (and sure-to-exist) people/beings as well as possible people/beings? If there's an objective axiology, it's implicit that the same rules would apply (why wouldn't they?). By contrast, without an objective axiology, all we're left with is the following:
Ethics is about interests/goals.
Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone's interests/goals.
The rule "focus on interests/goals" has comparatively clear implications in fixed-population contexts. The minimal morality of "don't be a jerk" means we shouldn't violate others' interests/goals (and should perhaps even help them where it's easy and where we have a comparative advantage). The ambitious morality of "do the most moral/altruistic thing" has a lot of overlap with something like preference utilitarianism. (Though there are instances where people's life goals are under-defined, in which case people with different takes on "do the most moral/altruistic thing" may wish to fill in the gaps according to subjectivist "axiologies" that they endorse.)
On creating new people/beings, "focus on interests/goals" no longer gives unambiguous results:
(1) The number of interests/goals isn't fixed.
(2) The types of interests/goals aren't fixed.
This leaves population ethics under-defined, with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls.
So, without an objective axiology, there are these two separate perspectives. We can view person-affecting views as making the following statement:
"'Doing the most moral/altruistic thing' isn't about creating new people with new interests/goals. Instead, it's about benefitting existing (or sure-to-exist) people/beings according to their interests/goals."
In other words, person-affecting views concentrate their caring budget on one of two possible perspectives (instead of trying to design an axiology that incorporates both). That seems like a perfectly defensible approach to me!
Still, we're left with the question, "If your view focuses on existing (and sure-to-exist) people, why is it bad to create a miserable person?"
Someone with person-affecting views could reply as follows:
"While I concentrate my caring budget on one perspective (existing and sure-to-exist people/beings), that doesn't mean my concern for the interests of possible people/beings is zero. My approach to dealing with merely possible people is essentially 'don't be a jerk.' That's exactly why I'm sometimes indifferent between creating a medium-happy possible person and a very happy possible person. I understand that the latter is better for possible people/beings, but since I concentrate my caring budget on existing (and sure-to-exist) people/beings, bringing the happier person into existence usually isn't a priority to me.
"Lastly, you're probably going to ask 'Why is your notion of "don't be a jerk" asymmetric?' That is, why not 'don't be a jerk' by creating people who would be grateful to be alive (at least in instances where it's easy/cheap to do so)? To this, my reply is that creating a specific person singles out that person (from the sea of possible people/beings) in a way that not creating them does not. There's no answer to 'What do possible people/beings want?' that applies to all conceivable beings, so I cannot do right by all of them anyway. By not giving an existence slot to someone who would be grateful to exist, I admit that I'm arguably failing to benefit a particular subset of possible people/beings (the ones who would be grateful to get the slot). Still, other possible people/beings don't mind not getting the spot, so there's at least a sense in which I didn't disrespect possible people/beings as a whole interest group.
"By contrast, if I create someone who hates being alive, saying 'Other people would be grateful in your spot' doesn't seem like a defensible excuse. 'Not creating happy people' only means I'm not giving maximum concern to possible people/beings, whereas 'creating a miserable person' means I'm flat-out disrespecting someone specific, whom I chose to 'highlight' from the sea of all possible people/beings (in the most real sense). There doesn't seem to be a defensible excuse for that."
The long answer: My post Population Ethics Without [an Objective] Axiology: A Framework.
I'm not sure I really follow (though I admit I've only read the comment, not the post you've linked to). Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn't automatically do that, so there's no general reason to add happy people if it doesn't satisfy a preference of someone who is here already? Couldn't you show that adding suffering people isn't automatically bad by the same reasoning, since it doesn't necessarily violate an existing preference? (Also, on the word "objective": you can definitely have a view of morality on which satisfying existing preferences or doing what people value is all that matters, but on which it is mind-independently true that this is the correct morality. That makes it a realist view as academic philosophers classify things, and hence a view on which morality is objective in one sense of "objective." Hence why I think "objective" should be tabooed.)
Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn't automatically do that, so there's no general reason to add happy people if it doesn't satisfy a preference of someone who is here already?
Pretty much, but my point is only that this is a perfectly defensible way to think about population ethics, not that I expect everyone to find it compelling over alternatives.
As I say in the longer post:
Just like the concept "athletic fitness" has several defensible interpretations (e.g., the difference between a 100m sprinter and a marathon runner), so (I argue) does "doing the most moral/altruistic thing."
I agree with what you write about "objective"; I'm guilty of violating your advice.
(That said, I think there's a sense in which preference utilitarianism would be unsatisfying as a "moral realist" answer to all of ethics because it doesn't say anything about what preferences to adopt. Or, if it did say what preferences to adopt, then it would again be subject to my criticism: what if objective preference utilitarianism says I should think of my preferences in one particular way, but that doesn't resonate with me?)
Couldn't you show that adding suffering people isn't automatically bad by the same reasoning, since it doesn't necessarily violate an existing preference?
I tried to address this in the last paragraph of my previous comment. It gets a bit complicated because I'm relying on a distinction between "ambitious morality" and "minimal morality" (= "don't be a jerk"), which also only makes sense if there's no objective axiology.
I don't expect the following to be easily intelligible to people used to thinking within the moral realist framework, but for more context, I recommend the section "minimal morality vs. ambitious morality" here. This link explains why I think it makes sense to have a distinction between minimal morality and ambitious morality, instead of treating all of morality as the same thing. ("Care morality" vs. "cooperation morality" is a similar framing, which probably tells you more about what I mean here.) And my earlier comment (in particular, its last paragraph) already explained why I think minimal morality contains a population-ethical asymmetry.