In my post Population Ethics Without [An Objective] Axiology, I argued that person-affecting views are IMO underappreciated among effective altruists. Here’s my best attempt at a short version of that argument:
The standard critiques of person-affecting views are right to point out that such views don’t give satisfying answers to “what’s best for possible people/beings.”
However, they are wrong in thinking that this is a problem.
It’s only within the axiology-focused approach (common in EA and utilitarian-tradition academic philosophy) that a theory of population ethics must tell us what’s best for both possible people/beings and for existing (or sure-to-exist) people/beings simultaneously.
Instead, I think it’s okay for EAs who find Narveson’s slogan compelling to reason as follows:
(1) I care primarily about what’s best for existing (and sure-to-exist) people/beings.
(2) When it comes to creating or not creating people/beings whose existence depends on my actions, all I care about is following some minimal notion of “don’t be a jerk.” That is, I wouldn’t want to do anything that disregards the interests of such possible people/beings according to all plausible axiological accounts, but I’m okay with otherwise just not focusing on possible people/beings all that much.
We can think of this stance as analogous to:
The utilitarian parent: “I care primarily about doing what’s best for humanity at large, but I wouldn’t want to neglect my children to such a strong degree that all defensible notions of how to be a decent parent state that I fucked up.”
Just as the utilitarian parent has to choose between two separate values (their own children vs humanity at large), the person with person-affecting life goals has to choose between two values as well (existing-and-sure-to-exist people/beings vs possible people/beings).
The person with person-affecting life goals: “I care primarily about doing what’s best for existing and sure-to-exist people/beings, but I wouldn’t want to neglect the interests of possible people/beings to such a strong degree that all defensible notions of how to be a decent person towards them state that I fucked up.”
Note that it’s not like only advocates of person-affecting morality have to make such a choice. Analogously:
The person with totalist/strong longtermist life goals: “I care primarily about doing what’s best according to my totalist axiology (i.e., mostly future generations whose existence is optional), but I wouldn’t want to neglect the interests of existing people to such a strong degree that all defensible notions of how to be a decent person towards them state that I fucked up.”
Anyway, for the person with person-affecting life goals, when it comes to cases like whether it’s permissible for them to create individual new people, or bundles of people (one at welfare level 100, the other at 1), or similar cases spread out over time, etc., it seems okay that there isn’t a single theory that fulfills both of the following conditions:
(1) The theory has the ‘person-affecting’ properties (e.g., it is the sort of theory that people who find Narveson’s slogan compelling would want).
(2) The theory gives us precise, coherent, non-contradictory guidelines on what’s best for newly created people/beings.
Instead, I’d say what we want is to drop (2) and come up with an alternative theory that fulfills only (1) and (3):
(1) The theory has the ‘person-affecting’ properties (e.g., it is the sort of theory that people who find Narveson’s slogan compelling would want).
(3) The theory contains some minimal guidelines of the form “don’t be a jerk” that tell us what NOT to do when it comes to creating new people/beings. The things it allows us to do are acceptable, even though it’s true that someone who cares maximally about possible people/beings (on a specific axiological notion of caring [but remember that there’s no universally compelling solution here!]) could have done “better”. (I put “better” in quotation marks because it’s not better in an objectivist moral realist way, just “better” in a sense where we introduce a premise that our actions’ effects on possible people/beings are super important.)
What I’m envisioning under (3) is quite similar to how common-sense morality thinks about the ethics of having children. IMO, common-sense morality would say that:
People are free to decide against becoming parents.
People who become parents are responsible towards their children.
It’s not okay to have a child and then completely abandon them, or to decide to have an unhappy child if you could’ve chosen a happier child at basically no cost.
If the parents can handle it, it’s okay for parents to have 8+ children, even if this lowers the resources available per child.
The responsibility towards one’s children isn’t absolute (e.g., if the children are okay, parents aren’t prohibited from donating to charity even though the money could further support their children).
The point being: the ethics of having children is more about “here’s how not to do it” than about “here’s the only acceptable, best way to do it.”
--
The longer version of the argument is in my post. My view there relies on a few important premises:
Moral anti-realism
Adopting a different ethical ontology from “something has intrinsic value”
I can say a bit more about these here.
As I write in the post:
> I see the axiology-focused approach, the view that “something has intrinsic value,” as an assumption in people’s ethical ontology.
> The way I’m using it here, someone’s “ontology” consists of the concepts they use for thinking about a domain – how they conceptualize their option space. By proposing a framework for population ethics, I’m (implicitly) offering answers to questions like “What are we trying to figure out?”, “What makes for a good solution?” and “What are the concepts we want to use to reason successfully about this domain?”
> Discussions about changing one’s reasoning framework can be challenging because people are accustomed to hearing object-level arguments and interpreting them within their preferred ontology.
> For instance, when first encountering utilitarianism, someone who thinks about ethics primarily in terms of “there are fundamental rights; ethics is about the particular content of those rights” would be turned off. Utilitarianism doesn’t respect “fundamental rights,” so it’ll seem crazy to them. However, asking, “How does utilitarianism address the all-important issue of [concept that doesn’t exist within the utilitarian ontology]?” begs the question. To give utilitarianism a fair hearing, someone with a rights-based ontology would have to ponder a more nuanced set of questions.
> So, let it be noted that I’m arguing for a change to our reasoning frameworks. To get the most out of this post, I encourage readers with the “axiology-focused” ontology to try to fully inhabit[8] my alternative framework, even if that initially means reasoning in a way that could seem strange.
To get a better sense of what I mean by the framework that I’m arguing against, see here:
> Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”
> [...]
> The axiology-focused approach goes as follows. First, there’s the search for an axiology, a theory of (intrinsic) value. (E.g., the axiology may state that good experiences are what’s valuable.) Then, there’s further discussion on whether ethics contains other independent parts or whether everything derives from that axiology. For instance, a consequentialist may frame their disagreement with deontology as follows. “Consequentialism is the view that making the world a better place is all that matters, while deontologists think that other things (e.g., rights, duties) matter more.” Similarly, someone could frame population-ethical disagreements as follows. “Some philosophers think that all that matters is more value in the world and less disvalue (“totalism”). Others hold that further considerations also matter – for instance, it seems odd to compare someone’s existence to never having been born, so we can discuss what it means to benefit a person in such contexts.”
> In both examples, the discussion takes for granted that there’s something that’s valuable in itself. The still-open questions come afterward, after “here’s what’s valuable.”
> In my view, the axiology-focused approach prematurely directs moral discourse toward particular answers. I want to outline what it could look like to “do population ethics” without an objective axiology or the assumption that “something has intrinsic value.”
> To be clear, there’s a loose, subjective meaning of “axiology” where anyone who takes systematizing stances[1] on moral issues implicitly “has an axiology.” This subjective sense isn’t what I’m arguing against. Instead, I’m arguing against the stronger claim that there exists a “true theory of value” based on which some things are “objectively good” (good regardless of circumstance, independently of people’s interests/goals).[2]
> (This doesn’t leave me with “anything goes.” In my sequence on moral anti-realism, I argued that rejecting moral realism doesn’t deserve any of the connotations people typically associate with “nihilism.” See also the endnote that follows this sentence.[3])
> Note also that when I criticize the concept of “intrinsic value,” this isn’t about whether good things can outweigh bad things. Within my framework, one can still express beliefs like “specific states of the world are worthy of taking serious effort (and even risks, if necessary) to bring about.” Instead, I’m arguing against the idea that good things are good because of “intrinsic value.”
So, the above quote described the framework I want to push back against.
The alternative ethical ontology I’m proposing is ‘anti-realist’ in the sense of: There’s no such thing as “intrinsic value.”
Instead, I view ethics as being largely about interests/goals.
“There’s no objective axiology” implies (among other things) that there’s no goal that’s correct for everyone who’s self-oriented to adopt. Accordingly, goals can differ between people (see my post, The Life-Goals Framework: How I Reason About Morality as an Anti-Realist). There are, I think, good reasons for conceptualizing ethics as being about goals/interests. (Dismantling Hedonism-inspired Moral Realism explains why I don’t see ethics as being about experiences. Against Irreducible Normativity explains why I don’t conceptualize ethics as being about things we can’t express in non-normative terminology.)
From that “ethics is about interests/goals” perspective, population ethics seems clearly under-defined. First off, it’s under-defined how many new people/beings there will be (with interests and goals). And secondly, it’s under-defined which interests/goals new people/beings will have. (This depends on who you choose to create!)
With these building blocks, I can now sketch the summary of my overall population-ethical reasoning framework (this summary is copied from my post but lightly adapted):
Ethics is about interests/goals.
Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone’s interests/goals.
The rule “focus on interests/goals” has comparatively clear implications in fixed population contexts. The minimal morality of “don’t be a jerk” means we shouldn’t violate others’ interests/goals (and perhaps should even help them where doing so is easy and plays to our comparative advantage). The ambitious morality of “do the most moral/altruistic thing” coincides with something like preference utilitarianism.
On creating new people/beings, “focus on interests/goals” no longer gives unambiguous results:[4]
The number of interests/goals isn’t fixed
The types of interests/goals aren’t fixed
This leaves population ethics under-defined with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls. (In other words: it likely won’t be possible to unify these perspectives in a way that everyone finds satisfying.)
People with the motivation to dedicate (some of) their life to “doing the most moral/altruistic thing” will want clear guidance on what to do/pursue. To get this, they must adopt personal (but defensible), population-ethically-complete specifications of the target concept of “doing the most moral/altruistic thing.” (Or they could incorporate a compromise, as in a moral parliament between different plausible specifications.)
Just like the concept “athletic fitness” has several defensible interpretations (e.g., the difference between a 100m sprinter and a marathon runner), so (I argue) does “doing the most moral/altruistic thing.”
In particular, there’s a tradeoff: cashing out this target concept primarily according to the perspective of other existing people leaves less room for altruism toward the second perspective (that of newly created people/beings), and vice versa.
Accordingly, people can think of “population ethics” in several different (equally defensible)[5] ways:
Subjectivist person-affecting views: I pay attention to creating new people/beings only to the minimal degree of “don’t be a jerk” while focusing my caring budget on helping existing (and sure-to-exist) people/beings.
Subjectivist totalism: I count appeals from possible people/beings just as much as existing (or sure-to-exist) people/beings. On the question “Which appeals do I prioritize?” my view is, “Ones that see themselves as benefiting from being given a happy existence.”
Subjectivist anti-natalism: I count appeals from possible people/beings just as much as existing (or sure-to-exist) people/beings. On the question “Which appeals do I prioritize?” my view is, “Ones that don’t mind non-existence but care to avoid a negative existence.”
The above descriptions (non-exhaustively) represent “morality-inspired” views about what to do with the future. The minimal morality of “don’t be a jerk” still applies to each perspective and recommends cooperating with those who endorse different specifications of ambitious morality.
One arguably interesting feature of my framework is that it makes standard objections against person-affecting views no longer seem (as) problematic. A common opinion among effective altruists is that person-affecting views are difficult to make work.[6] In particular, the objection is that they give unacceptable answers to “What’s best for new people/beings?”[7] My framework highlights that maybe person-affecting views aren’t meant to answer that question. Instead, I’d argue that someone with a person-affecting view has answered a relevant earlier question in a way that means “What’s best for new people/beings?” no longer holds priority. Specifically, to the question “What’s the most moral/altruistic thing?,” they answered “Benefitting existing (or sure-to-exist) people/beings.” In that light, under-definedness around creating new people/beings is to be expected – it’s what happens when there’s a tradeoff between two possible values (here: the perspective of existing/sure-to-exist people and that of possible people) and someone decides that one of them matters more than the other.
You should read the post! Section 4.1.1 makes the move that you suggest (rescuing PAVs by de-emphasising axiology). Section 5 then presents arguments against PAVs that don’t appeal to axiology.
Sorry, I hate it when people comment on something that has already been addressed.
FWIW, though, I had read the paper the day it was posted on the GPI fb page. At that time, I didn’t feel like my point about “there is no objective axiology” fit into your discussion.
I feel like even though you discuss views that are “purely deontic” instead of “axiological,” there are still some assumptions from the axiology-based framework that underlie your conclusion about how to reason about such views. Specifically, when explaining why a view says it would be wrong to create only Amy but not Bobby, you didn’t say anything that reflects the idea that “there is no objective axiology about creating new people/beings.”
That said, re-reading the sections you point to, I think it’s correct that I’d need to give some kind of answer to your dilemmas, and what I’m advocating for seems most relevant to this paragraph:
> 5.2.3. Intermediate wide views
> Given the defects of permissive and restrictive views, we might seek an intermediate wide view: a wide view that is sometimes permissive and sometimes restrictive. Perhaps (for example) wide views should say that there’s something wrong with creating Amy and then later declining to create Bobby in Two-Shot Non-Identity if and only if you foresee at the time of creating Amy that you will later have the opportunity to create Bobby. Or perhaps our wide view should say that there’s something wrong with creating Amy and then later declining to create Bobby if and only if you intend at the time of creating Amy to later decline to create Bobby.
At the very least, I owe you an explanation of what I would say here.
I would indeed advocate for what you call the “intermediate wide view,” but I’d motivate this view a bit differently.
All else equal, IMO, the problem with creating Amy and then not creating Bobby (or the other way around), assuming it would have been low-effort to choose differently, is that these specific choices, in combination, indicate that you didn’t consider the interests of possible people/beings even to a minimal degree. Considering them to a minimal degree would mean being willing to at least take low-effort actions to ensure your choices aren’t objectionable from their perspective (the perspective of possible people/beings). Adding someone at +1 when you could’ve easily added someone else at +100 just seems careless.

If Amy and Bobby sat behind a veil of ignorance, not knowing which of them would be created at +1 or +100 (if anyone gets created at all), the one view they would never advocate for is “only create the +1 person.” If they favor anti-natalist views, they’d advocate for creating no one. If they favor totalist views, they’d advocate for creating both. If one favors anti-natalism and the other favors totalism, they might compromise on creating only the +100 person. So, most options here really are defensible, but you don’t want to do the one thing that shows you weren’t trying at all.
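To make the structure of that veil-of-ignorance point concrete, here’s a minimal sketch in Python. To be clear, this is my own illustration rather than anything from the post or the paper: the welfare levels are just the +1/+100 from the Amy/Bobby example, and the “totalist,” “anti-natalist,” and 50/50-compromise scoring rules are crude stand-ins for the views mentioned above.

```python
# Toy illustration of the veil-of-ignorance point. The welfare numbers are the
# +1/+100 from the Amy/Bobby example; the scoring rules below are stylised
# assumptions, not anyone's considered theory.

options = {
    "create no one": [],
    "create only Amy (+1)": [1],
    "create only Bobby (+100)": [100],
    "create both": [1, 100],
}

def totalist(welfares):
    # Totalism: more total welfare is better.
    return sum(welfares)

def antinatalist(welfares):
    # Stylised anti-natalism: avoid negative lives at all costs; with only
    # positive welfare on the table, weakly prefer creating fewer people.
    return -1000 * sum(1 for w in welfares if w < 0) - 0.01 * len(welfares)

def normalise(scores):
    # Rescale a view's scores to [0, 1] so different views can be averaged.
    lo, hi = min(scores.values()), max(scores.values())
    return {o: (s - lo) / (hi - lo) if hi > lo else 0.0 for o, s in scores.items()}

total_scores = {o: totalist(w) for o, w in options.items()}
anti_scores = {o: antinatalist(w) for o, w in options.items()}
norm_total, norm_anti = normalise(total_scores), normalise(anti_scores)
compromise_scores = {o: 0.5 * norm_total[o] + 0.5 * norm_anti[o] for o in options}

for name, scores in [("totalist", total_scores),
                     ("anti-natalist", anti_scores),
                     ("50/50 compromise", compromise_scores)]:
    print(f"{name} favours: {max(scores, key=scores.get)}")

# Prints: the totalist favours "create both", the anti-natalist favours
# "create no one", and the compromise favours "create only Bobby (+100)".
# No perspective favours "create only Amy (+1)".
```

The particular numbers don’t matter; the point is just that “create only the +1 person” comes out on top under none of the perspectives (or compromises between them) considered here, which is what makes it the choice that signals you weren’t trying at all.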
So, it would be bad to only create the +1 person, but it’s not “99 units bad” in some objective sense. That means it’s not always the dominant concern: it seems less problematic if we dial up the degree of effort needed to choose differently, or when there are externalities like “by creating Amy at +1 instead of Bobby at +100, you create a lot of value for existing people.” I don’t remember whether it was Parfit or Singer who first gave the example of delaying pregnancy for a few days (or maybe it was three months?) to avoid your future child suffering from a serious illness. There, it seems objectionable not to wait mainly because of how easy it would be to wait. (Quite a few people, when trying to have children, try for years, so a few months is not that significant.)
So, if you’re 20 and contemplating having a child at happiness level 1, knowing that 15 years later embryo-selection therapy will be invented that makes new babies happier and guarantees happiness level 100, having the child at 20 is a little selfish, but it’s not like “wait 15 years,” when you really want a child, is a low-effort accommodation. (Also, I personally think having children is, under pretty much all circumstances, “a little selfish,” at least in the sense of “you could spend your resources on EA instead.” But that’s okay. Lots of things people choose are a bit selfish.) I think it would be commendable to wait, but not mandatory. (And as Michael St. Jules points out, not waiting is the issue here; once that’s happened, it’s done, and when you contemplate having a second child 15 years later, it’s a new decision and it no longer matters what you did earlier.)
> And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if they were committed to that controversial claim.
Intentions are relevant here in the sense that you should always act with the intention of at least taking low-effort steps to consider the interests of possible people/beings. It’s morally frivolous if someone has children on a whim, especially if that leads to them making worse choices for those children than they could easily have made otherwise. But it’s okay if the well-being of their future children was at least an important factor in their decision, even if it wasn’t the decisive factor. Basically, “if you bring a child into existence and it’s not the happiest child you could have had, you’d better have a good reason for doing things that way, but it’s conceivable for there to be good reasons, and then it’s okay.”
> The utilitarian parent: “I care primarily about doing what’s best for humanity at large, but I wouldn’t want to neglect my children to such a strong degree that all defensible notions of how to be a decent parent state that I fucked up.”
I wonder if we don’t mind people privileging their own children because:
People love their kids too damn much and it just doesn’t seem realistic for people to neglect their children to help others.
A world in which it is normalised to neglect your children to “focus on humanity” is probably a bad world by utilitarian lights. A world full of child neglect just doesn’t seem like it would produce productive individuals who can make the world great. So even on an impartial view we wouldn’t want to promote child neglect.
Neither of these points is relevant in the case of privileging existing-and-sure-to-exist people/beings over possible people/beings:
We don’t have some intense biologically-driven urge to help present people. For example, most people don’t seem to care all that much that a lot of present people are dying from malaria. So focusing on helping possible people/beings seems at least feasible.
We can’t use the argument that it is better from an impartial view to focus on existing-and-sure-to-exist people/beings because of the classic ‘future could be super-long’ argument.
And when you say that a person with totalist/strong longtermist life goals also chooses between two separate values (what their totalist axiology says versus existing people), I’m not entirely sure that’s true. Again, massive neglect of existing people just doesn’t seem like it would work out well for the long term—existing people are the ones that can make the future great! So even pure strong longtermists will want some decent investment into present people.
> We can’t use the argument that it is better from an impartial view to focus on existing-and-sure-to-exist people/beings because of the classic ‘future could be super-long’ argument.
I’d say the two are tied contenders for “what’s best from an impartial view.”
I believe the impartial view is under-defined for cases of population ethics, and both of these views are defensible options in the sense that some morally-motivated people would continue to endorse them even after reflection in an idealized reflection procedure.
For fixed population contexts, the “impartial stance” is arguably better defined, and we want equal consideration of [existing] interests, which gives us some form of preference utilitarianism. However, once we go beyond the fixed population context, I think it’s just not clear how to extend those principles, and Narveson’s slogan isn’t necessarily a worse justification than “the future could be super-long/big.”
Maybe worth writing this as a separate post (a summary post) you can link to, given its length?