if people generally treat it as bad when Bob takes action A with mildly good first-order consequences when Bob could instead have taken action B with much better first-order consequences,
On my favored view, this isn’t the case. I think of creating new people/beings as a special category.
I'm also mostly on board with consequentialism applied to limited domains of ethics, but I'm against subsuming all of ethics under consequentialism, especially if people do the latter in a moral realist way, looking for a consequentialist theory that defines everyone's standards of ideally moral conduct.
I am working on a post titled “Population Ethics Without an Objective Axiology.” Here’s a summary from that post:
The search for an objective axiology assumes that there’s a well-defined “impartial perspective” that determines what’s intrinsically good/valuable. Within my framework, there’s no such perspective.
Another way of putting this: my framework conceptualizes ethics as being about goals/interests. [There are, I think, good reasons for this – see my post Dismantling Hedonism-inspired Moral Realism for why I object to ethics being about experiences, and my post Against Irreducible Normativity for why I don't think ethics is about things we can't express in non-normative terminology.] Goals can differ between people, and there's no single correct goal for everyone to adopt.
In fixed-population contexts, a focus on goals/interests can tell us exactly what to do: we best benefit others by doing what these others (people/beings) would want us to do.
In population ethics, this approach no longer works so well – it introduces ambiguities. Creating new people/beings changes the number of interests/goals to look out for. Relatedly, creating people/beings of type A instead of type B changes the types of interests/goals to look out for. In light of these options, a “focus on interests/goals” leaves many things under-defined.
To gain back some clarity, we can note that population ethics has two separate perspectives: that of existing people/beings and that of newly created people/beings. (Without an objective axiology, these perspectives cannot be unified.)
Population ethics from the perspective of existing people is analogous to settlers standing in front of a giant garden: There’s all this unused land and there’s a long potential future ahead of us – what do we want to do with it? How do we address various tradeoffs?
In practice, newly created beings are at the whims of their creators. However, “might makes right” is not an ideal that altruistically-inclined/morally-motivated creators would endorse. Population ethics from the perspective of newly created people/beings is like a court hearing: newly created people/beings speak up for their interests/goals. (Newly created people/beings have the opportunity to appeal to their creators’ moral motivations and altruism, or at least hold them accountable to some minimal standards of pro-social conduct.)
The degree to which someone’s life goals are self-oriented vs. inspired by altruism/morality produces a distinction between minimalist morality and maximally ambitious morality. Minimalist morality is where someone respects both population-ethical perspectives sufficiently to avoid harm on both, while otherwise following self-oriented interests/goals. By contrast, effective altruists want to spend (at least a portion of) their effort and resources to “do what’s most moral/altruistic.” They’re interested in maximally ambitious morality.
Without an objective axiology, the placeholder “do what’s most moral/altruistic” is under-defined. In particular, there’s a tradeoff where cashing out “doing what’s most moral/altruistic” primarily according to the perspective of existing people leaves less room for altruism on the second perspective (that of newly created people), and vice versa.
Besides, what counts as “doing what’s most moral/altruistic” according to the second perspective is itself under-defined. Without an objective axiology, the interests of newly created people/beings depend on who we create. (E.g., some newly created people would rather not be created than face even a small risk of intense suffering; others would gladly take significant risks and care immensely about the chance of a happy existence. It is impossible to do right by both types.)
--
Some more thoughts to help make the above intelligible:
I think there’s an incongruence in the standard way people think about population ethics. (The standard way being something like: look for an objective axiology, something that has “intrinsic value,” then figure out how we are to relate to that value/axiology and whether to add extra principles around it.)
The following two beliefs seem incongruent:
There’s an objective axiology
People’s life goals are theirs to choose: they aren’t making a mistake of rationality if they don’t all share the same life goal
There’s a tension between these beliefs – if there were an objective axiology, wouldn’t the people who don’t orient their goals around that axiology be making a mistake?
I expect that many effective altruists would hesitate to say “One of you must be wrong!” when two people discuss their self-oriented life goals and one cares greatly about living forever and the other doesn’t.
The claim “people are free to choose their life goals” may not be completely uncontroversial. Still, I expect many effective altruists to already agree with it. To the degree that they do, I suggest they lean in on this particular belief and explore what it implies for comparing the “axiology first” framework to my framework “population ethics without an objective axiology.” I expect that leaning in on the belief “people are free to choose their life goals” makes my framework more intuitive and gives a better sense of what the framework is for, what it’s trying to accomplish.
To help understand what I mean by minimalist morality vs. maximally ambitious morality, I’ll now give some examples of how to think about procreation. These will closely track common-sense morality. I’ll indicate for each example whether it arises from minimalist morality or from some person’s take on maximally ambitious morality. Some examples also arise from there not being an objective axiology:
Parents are obligated to provide a very high standard of care for their children (principle from minimalist morality).
People are free to decide against becoming parents (“there’s no objective axiology”).
Parents are free to want to have as many children as possible (“there’s no objective axiology”), as long as the children are happy in expectation (principle from minimalist morality).
People are free to try to influence other people’s moral stances and parenting choices (“there’s no objective axiology”) – for instance, Joanne could promote anti-natalism and Marianne could promote totalism (their respective interpretations of “doing what’s most moral/altruistic”) – as long as they remain within the boundaries of what is acceptable in a civil society (principle from minimalist morality).
So, what’s the role for (something like) person-affecting principles in population ethics? Basically, if you only want minimalist morality and otherwise want to pursue self-oriented goals, person-affecting principles seem like a pretty good answer to “what should the ethical constraints on your option space for creating new people/beings be?” In addition, I think person-affecting principles have some appeal even for specific flavors of “doing what’s most moral/altruistic,” but only for people who lean toward interpretations that highlight benefitting people who already exist or will exist regardless of your actions. As I said in the bullet-point summary, there’s a tradeoff where cashing out “doing what’s most moral/altruistic” primarily according to the perspective of existing people leaves less room for altruism on the second perspective (that of newly created people), and vice versa. (For the “vice versa,” note that, e.g., a totalist classical utilitarian would be leaving less room for benefitting already existing people. They would privilege an arguably defensible but certainly not ‘objectively correct’ interpretation of what it means to benefit newly created people, and they would lean into that perspective more than into the perspective “What are existing people’s life goals and how do I benefit them?”)
I can’t tell what you mean by an objective axiology. It seems to me like you’re equivocating between a bunch of definitions:
An axiology is objective if it is universally true / independent of the decision-maker / not reliant on goals / implied by math. (I’m pointing to a cluster of intuitions rather than giving a precise definition.)
An axiology is objective if it provides a decision for every possible situation you could be in. (I would prefer to call this a “complete” axiology, perhaps.)
An axiology is objective if its decisions can be computed by taking each world, summing some welfare function over all the people in that world, and choosing the decision that leads to the world with a higher number. (I would prefer to call this an “aggregative” axiology, perhaps.)
Examples of definition 1:
The search for an objective axiology assumes that there’s a well-defined “impartial perspective” that determines what’s intrinsically good/valuable. [...]
if there was an objective axiology, wouldn’t the people who don’t orient their goals around that axiology be making a mistake?
Examples of definition 2:
Without an objective axiology, the placeholder “do what’s most moral/altruistic” is under-defined. [...]
I think there’s an incongruence behind how people think of population ethics in the standard way. (The standard way being something like: look for an objective axiology, something that has “intrinsic value,” then figure out how we are to relate to that value/axiology and whether to add extra principles around it.)
Examples of definition 3:
we can note that population ethics has two separate perspectives: that of existing people/beings and that of newly created people/beings. (Without an objective axiology, these perspectives cannot be unified.)
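To make the third definition above concrete, here is a minimal sketch (purely illustrative, not from either comment; the worlds, welfare numbers, and the particular person-affecting rule are assumptions for the example) of how an “aggregative” axiology’s verdict can come apart from one person-affecting way of resolving the ambiguity about whom to create:

```python
def aggregative_choice(worlds):
    # Pick the world with the highest total welfare, summed over everyone in it.
    return max(worlds, key=lambda people: sum(w for _, w in people))

def person_affecting_choice(worlds):
    # One (assumed) person-affecting resolution: rank worlds only by the welfare
    # of people who exist in every candidate world, ignoring newly created people.
    shared = set.intersection(*(set(name for name, _ in people) for people in worlds))
    return max(worlds, key=lambda people: sum(w for name, w in people if name in shared))

# Hypothetical worlds as lists of (name, welfare) pairs: Alice exists either way;
# Bob only exists if we create him, at a small cost to Alice.
world_a = [("alice", 5)]
world_b = [("alice", 4), ("bob", 3)]

print(aggregative_choice([world_a, world_b]))       # world_b: higher total welfare (7 > 5)
print(person_affecting_choice([world_a, world_b]))  # world_a: better for Alice (5 > 4)
```

Both toy rules are “complete” in the sense of definition 2 once spelled out; only the first has the summed-over-worlds form of definition 3.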
I don’t think I’m relying on an objective-axiology-by-definition-1. Any time I say “good” you can think of it as “good according to the decision-maker” rather than “objectively good”. I think this doesn’t affect any of my arguments.
It is true that I am imagining an objective-axiology-by-definition-2 (which I would perhaps call a “complete axiology”). I don’t really see from your comment why this is a problem.
I agree this is “maximally ambitious morality” rather than “minimal morality”. Personally, if I were designing “minimal morality”, I’d figure out what “maximally ambitious morality” would recommend we design as principles that everyone could agree on and follow, and then implement those. I’m skeptical that if I ran through such a procedure I’d end up choosing person-affecting intuitions (in the sense of “Making People Happy, Not Making Happy People”); I think I’d plausibly choose something along the lines of “if you create new people, make sure they have lives well beyond barely worth living” instead. Other people might differ from me, since they have different goals, but I suspect not.
I agree that if your starting point is “I want to ensure that people’s preferences are satisfied” you do not yet have a complete axiology, and in particular there’s an ambiguity about how to make decisions about which people to create. If this is your starting point then I think my post is saying “if you resolve this ambiguity in this particular way, you get Dutch booked”. I agree that you could avoid the Dutch book by resolving the ambiguity as “I will only create individuals whose preferences I have satisfied as best as I can”.
Personally if I were designing “minimal morality” I’d figure out what “maximally ambitious morality” would recommend we design as principles that everyone could agree on and follow, and then implement those.
I think this is a crux between us (or at least an instance where I didn’t describe very well how I think of “minimal morality”). (A lot of the other points I’ve been making, I see mostly as “here’s a defensible alternative to Rohin’s view” rather than “here’s why Rohin is wrong to not find (something like) person-affecting principles appealing.”)
In my framework, it wouldn’t be fair to derive minimal morality from a specific take on maximally ambitious morality. People who want to follow some maximally ambitious morality (this includes myself) won’t all pick the same interpretation of what that means. Not just for practical reasons, but fundamentally: for maximally ambitious morality, different interpretations are equally philosophically defensible.
Some people may have the objection “Wait, if maximally ambitious morality is under-defined, why adopt confident and specific views for how you want things to be? Why not keep your views on it under-defined, too?” (See Richard Ngo’s post on Moral indefinability.) I have answered this objection in this section of my post The Moral Uncertainty Rabbit Hole, Fully Excavated. In short, I give an analogy between “doing what’s maximally moral” and “becoming ideally athletically fit.” In the analogy, someone grows up with the childhood dream of becoming “ideally athletically fit” in a not-further-specified way. They then have the insight that “becoming ideally athletically fit” has different defensible interpretations – e.g., the difference between a marathon runner and a 100m sprinter (or someone who is maximally fit at reducing heart attack risks – which are actually elevated for professional athletes!). Now, it is an open question for them whether to care about a specific interpretation of the target concept or whether to embrace under-definedness. My advice to them for resolving this question is “think about which aspects of fitness you feel most drawn to, if any.”
Minimal morality is the closest we can come to something “objective” in the sense that it’s possible for philosophically sophisticated reasoners to all agree on it (your first interpretation). (This is precisely because minimal morality is unambitious – it only tells us to not be jerks; it doesn’t give clear guidance for what else to do.)
Minimal morality will feel unsatisfying to anyone who finds effective altruism appealing, so we want to go beyond it in places. However, within my framework, we can only go beyond it by forming morality/altruism-inspired life goals that, while we try to make them impartial/objective, inevitably have to lock in subjective judgment calls. (E.g., “Given that you can’t be both at once, do you want to be maximally impartially altruistic towards existing people or towards newly created people?” or “Assuming the latter, given that different types of newly created people will have different views on what’s good or bad for them, how will you define for yourself what it means to maximally benefit (which?) newly created people?”)
It is true that I am imagining an objective-axiology-by-definition-2 (which I would perhaps call a “complete axiology”). I don’t really see from your comment why this is a problem.
I agree it’s not a problem as long as you’re choosing that sort of success criterion (that you want a complete axiology) freely, rather than thinking it’s a forced move. (My sense is that you already don’t think of it as a forced move, so I should have been more clear that I wasn’t necessarily arguing against your views.)
I agree that if your starting point is “I want to ensure that people’s preferences are satisfied” you do not yet have a complete axiology, and in particular there’s an ambiguity about how to make decisions about which people to create. If this is your starting point then I think my post is saying “if you resolve this ambiguity in this particular way, you get Dutch booked”. I agree that you could avoid the Dutch book by resolving the ambiguity as “I will only create individuals whose preferences I have satisfied as best as I can”.
Yes, that describes it very well!
That said, I’m mostly arguing for a framework* for how to think about population ethics rather than a specific, object-level normative theory. So, I’m not saying the solution to population ethics is “existing people’s life goals get comparatively a lot of weight.” I’m only pointing out how that seems like a defensible position, given that the alternative would be to somewhat arbitrarily give them comparatively very little weight.
*By “framework,” I mean a set of assumptions for thinking about a domain, answering questions like “What am I trying to figure out?”, “What makes for a good solution?” and “What are the concepts I want to use to reason successfully about this domain?”
I can’t tell what you mean by an objective axiology. It seems to me like you’re equivocating between a bunch of definitions:
I like the three interpretations of “objective” that you distilled!
I use the word “objective” in the first sense, but you noted correctly that I’m arguing as though rejecting “there’s an objective axiology” in that sense implies other things, too. (I should make these hidden inferences explicit in future versions of the summary!)
I’d say I’ve been arguing hard against the first interpretation of “objective axiology” and softly against your second and third descriptions of desirable features of an axiology/”answer to population ethics.”
By “arguing hard” I mean “anyone who thinks this is wrong.” By “arguing softly” I mean “that may be defensible, but there are other defensible alternatives.”
So, on the question of success criteria for answers to population ethics (whether we’re looking for a complete axiology, per your 2nd description, and whether the axiology should “fall out” naturally from world states, rather than be specific to histories or to “who’s the person with the choice?”, per your 3rd description)… On those questions, I think it’s perfectly defensible to end up with answers that satisfy each respective criterion, but I think it’s important to keep the option space open while we’re discussing population ethics within a community, so we aren’t prematurely locking in that “solutions to population ethics” need to be of a specific form. (It shouldn’t become uncool within EA to conceptualize things differently, if the alternatives are well formed / well argued.)
I think there’s a practical effect where people who think “ethics is objective” (in the first sense) might prematurely restrict their option space. (This won’t apply to you.) I think they’re looking for the sort of (object-level) normative theories that can fulfill the steep demands of objectivity – theories that all philosophically sophisticated others could agree on despite the widespread differences in people’s moral intuitions. With this constraint, one is likely to view it as a positive feature that a theory is elegantly simple, even if it demands a lot of “bullet biting.” (Moral reasoners couldn’t agree on the same answer if they all relied too much on their moral intuitions, which are different from person to person.)
In other words, if we thought that morality was a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on (and we also have priors that the answer is about “altruism” and “impartiality”), then we’d come up with different solutions than if we started without the “coordination game” assumption.
In any case, theories that fit your second and third description tend to be simpler, so they’re more appealing to people who endorse “ethics is objective” (in the first sense). That’s the link I see between the three descriptions. It’s no coincidence that the examples I gave in my previous comment (moral issues around procreation) track common sense ethics. The less we think “morality is objective” (in the first sense), the more alternatives we have to biting specific bullets.