I upvoted this comment, Michael, as I also prefer rigorously engaging with these topics as opposed to “poetically tugging on readers’ intuitions” (although, Joe, I did enjoy reading this post and upvoted it, so please don’t take this as a snarky comment!).
Michael—do you have any thoughts on Hilary Greaves’ work with John Cusbert on defending existence comparativism? She discusses it on the 80,000 Hours podcast and you can see some slides here. I don’t know if a full draft of the paper is available yet.
Also, do you have any thoughts on the argument that neutrality leads to breaking transitivity (which I present here) implying that we should probably only have neutrality for a single zero-level of wellbeing—which might lead to adopting total utilitarianism?
Yes, I’ve heard Hilary’s 80k podcast where she mentions her paper. It’s not available on her website. If it’s the same theme as in the slides you linked, then I don’t think it responds to the claims above. Bader supposes ‘better for’ is a dyadic (two-place) relation between the two lives. Hilary is responding to arguments that suppose ‘better for’ is a triadic (three-place) relation: between two worlds and the person. I don’t think I understand why one would want to formulate it the latter way. I’ll take a look at Hilary’s paper when it’s available.
Re your last point: I’m not 100% sure what you’re claiming in the other post because I found the diagrams hard to follow. You’re stating a standard version of the non-identity problem, right? I don’t think person-affecting views do face intransitivity, but that’s a promissory note that, if I’m honest, I don’t expect to get around to writing up until maybe 2022 at the earliest.
If it’s the same theme as in the slides you linked, then I don’t think it responds to the claims above. Bader supposes ‘better for’ is a dyadic (two-place) relation between the two lives. Hilary is responding to arguments that suppose ‘better for’ is a triadic (three-place) relation: between two worlds and the person. I don’t think I understand why one would want to formulate it the latter way. I’ll take a look at Hilary’s paper when it’s available.
OK fair enough!
Re your last point: I’m not 100% sure what you’re claiming in the other post because I found the diagrams hard to follow. You’re stating a standard version of the non-identity problem, right? I don’t think person-affecting views do face intransitivity, but that’s a promissory note that, if I’m honest, I don’t expect to get around to writing up until maybe 2022 at the earliest.
No it’s not the non-identity problem. Disappointed my diagrams didn’t work haha. Let me copy what Greaves says about this in section 5.2 of this paper:
5.2 The ‘Principle of equal existence’
If adding an extra person makes a state of affairs neither better nor worse, perhaps it results in a state of affairs that is equally as good as the original state of affairs. That is, one might try to capture the intuition of neutrality via the following principle:
The Principle of Equal Existence: Let A be any state of affairs. Let B be a state of affairs that is just like A, except that an additional person exists who does not exist in A. (In particular, all the people who exist in A also exist in B, and have the same well-being level in B as in A.) Then A and B are equally good.
As Broome (1994, 2004, pp.146-9) points out, however, this principle is all but self-contradictory. This is because there is more than one way of adding an extra person to A — one might add an extra person with well-being level 5, say (leading to state of affairs B1), or (instead) add the same extra person with well-being level 100 (leading to state of affairs B2) — and these ways are not all equally as good as one another. In our example, B2 is clearly better than B1; but the Principle of Equal Existence would require that B1 and A are equally good, and that A and B2 are equally good, in which case (by transitivity of ‘equally as good as’) B1 and B2 would have to be equally as good as one another. The Principle of Equal Existence therefore cannot be correct.
Right. Yeah, I don’t share Hilary’s intuitions and I wouldn’t analyse the situation in this way. It’s a somewhat subtle move, but I think about comparing pairs of outcomes by comparing how much better/worse they are for each person who exists in both, then adding up the individual differences (i.e. focusing on ‘personal value’; to count ‘impersonal value’ you just aggregate the welfare in each outcome and then compare those totals). I’m inclined to say A, B1, and B2 are equally as good—they are equally good for the necessary people (those who exist in all outcomes under consideration).
FWIW, I think discussants should agree that the personal value of A, B1, and B2 are the same (there are some extra complexities related to harm-minimisation views I won’t get into here). And I think discussants should also agree that the impersonal value of the outcomes is B2 > B1 > A. There is however reasonable scope for disagreement about the final value (aka ‘ultimate value’, ‘value simpliciter’, etc) of B2 vs B1 vs A, but that disagreement rests on whether one accepts the significance of impersonal and/or personal value. Neither I, nor anyone else in this post (I think) has advanced any arguments about the significance of personal vs impersonal value. That’s a separate debate. We’ve been talking about comparativism vs non-comparativism.
I think about comparing pairs of outcomes by comparing how much better/worse they are for each person who exists in both, then adding up the individual differences
If you do this (I think) the problem remains? B1 and B2 have the same people but one of the people is better off in B2.
Therefore focusing on personal value, and adopting neutrality, we have:
A is as good as B1
A is as good as B2
By transitivity, B1 is as good as B2 (but this is clearly wrong from both a personal and impersonal point of view)
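The structure of this argument can be sketched in code. The welfare numbers and the `personal_value_diff` helper below are my own toy illustration (assumptions, not anything from Greaves’ paper); the point is just that neutrality over necessary people, plus transitivity, forces B1 and B2 to come out equally good:

```python
# Toy model: p is the necessary person, q is the possible extra person.
A = {"p": 10}               # q never exists
B1 = {"p": 10, "q": 5}      # q exists with welfare 5
B2 = {"p": 10, "q": 100}    # the same q exists with welfare 100

def necessary_people(*outcomes):
    """People who exist in every outcome under consideration."""
    return set.intersection(*(set(o) for o in outcomes))

def personal_value_diff(x, y, context):
    """Sum welfare differences over the necessary people only:
    the 'neutrality' move that ignores merely possible people."""
    return sum(y[i] - x[i] for i in necessary_people(*context))

context = (A, B1, B2)
assert personal_value_diff(A, B1, context) == 0   # A as good as B1
assert personal_value_diff(A, B2, context) == 0   # A as good as B2
# Transitivity then forces B1 as good as B2 ...
assert personal_value_diff(B1, B2, context) == 0
# ... even though B2 is strictly better for q, who exists in both:
assert B2["q"] > B1["q"]
```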
We’ve been talking about comparativism vs non-comparativism.
I think the non-comparativist has to adopt some sort of principle of neutrality (right?), and Greaves’ (well, originally Broome’s) example shows why neutrality violates some important constraint. Therefore this example should undermine non-comparativism. Joe actually mentions this argument briefly in his post (search for “Broome”).
Ah, okay. I missed that the people in B1 and B2 were supposed to be the same—it’s a so-called ‘three-choice’ case; ‘two-choice’ cases are where the only two options are that the person doesn’t exist or exists with a certain welfare level. I’m inclined to think three-choice cases, even though they are relied on a lot in the literature, are also metaphysically problematic for reasons that I’ve not seen pointed out in the literature so far. I’ve sketched my answer below, but this is also a promissory note, sorry, even though it’s ended up being rather long.
Roughly, the gist of my concern is this. A standard person-affecting route is to say the only persons who matter are those who exist necessarily (i.e. under all circumstances under consideration). This is based on the ideas discussed above that we just can’t compare existence to non-existence for someone. To generate the three-choice case, what’s needed is some action that (1) occurs to a future, necessarily-existing person, (2) benefits that person whilst retaining that they are a future, necessary person, and (3) leaves us with three outcomes to choose between. I don’t see how (1)-(3) are jointly possible. Let’s walk through that.
Why do we need (1)? Well, if they aren’t a future person, but they are, instead, a necessarily existing present person, then we’re in a choice between B1 and B2, not a choice between A, B1, and B2. Recall A is the outcome where the person doesn’t exist. So we’re down to two choices, not three.
Why do we need (2)? The type of three-choice case that most often comes up in the literature—when people flesh out the details, rather than just stipulating that the case is possible—is where we are talking about providing medical treatment to cure an as-yet-unborn child of some genetic condition. The usual claim is “look, obviously you should provide the treatment and that will benefit that child without changing its identity.” A usual observation made in these debates is that your genetics are a necessary condition for your identity: if you had had different genetics, you wouldn’t have existed—consider non-identical twins being different people. Let’s consider the two options: the intervention causes a different person to exist, or it doesn’t.
Suppose the former is true: the genetic intervention leads person C, rather than person B, to be created. Okay, so now the choice-set is really <A, B1, C1>, not <A, B1, B2>. This is the familiar non-identity problem case.
Suppose the latter is true: the genetic intervention doesn’t change identity. Recall the person must, crucially, be a future, necessary person. But how can you change anyone’s genetics prior to their existence whilst maintaining that the original person will necessarily exist(!)? This, I’m afraid, is metaphysically problematic or, to put it in ordinary British English, bonkers.
The three-choice case enthusiast might try again by suggesting something like the following: they are considering whether to invest money for their future nephew, to give to him when he turns 21. Now, we can imagine a case where your doing this sort of thing is identity-changing: you tell your sibling and their spouse, who haven’t yet had the child, that you’re going to do this. It causes them to conceive later and create a different child. Fine, but here we’re back to <A, B1, C1>, as we’re talking about stopping one child existing, creating a different one instead, and benefitting that second one.
But suppose, for some reason, it’s not identity-changing. Maybe the child is already in utero, or it’s a fertilised egg in a sperm bank and your sibling and their spouse are 100% going to have it, whatever you do, or something. Recall, the future person needs to exist necessarily for the three-option case to arise. Well, if there is no possibility of the child not existing, there is no outcome A anymore—at least, not as far as you are concerned; you now face a choice-set of <B1, B2> and can say normal things about why you should choose B2: it’s better for that particular child, whose identity remains unchanged.
All told, I doubt the choice-set <A, B1, B2> is (metaphysically?) possible. This is important because its existence is taken as a strong objection to person-affecting views. I don’t think choice-sets like <A, B1, C1>—which give the ordinary non-identity problem—are nearly so problematic.
All told, I doubt the choice-set <A, B1, B2> is (metaphysically?) possible. This is important because its existence is taken as a strong objection to person-affecting views. I don’t think choice-sets like <A, B1, C1>—which give the ordinary non-identity problem—are nearly so problematic.
I think I agree with MichaelStJules here. I don’t think that the “practical” possibility of a choice set like <A, B1, B2> is in fact important. The important thing I think is that we can conceive of such a choice set—it’s not difficult to consider a scenario where I don’t exist, a scenario where I exist with happiness +5, and a scenario where I exist with happiness +10. Broome’s example is essentially a thought experiment, and thought experiments can be weird and unrealistic whilst still being very powerful (that’s what makes them so fun!).
A standard person-affecting route is to say the only persons who matter are those who exist necessarily (i.e. under all circumstances under consideration)
I find this bizarre. So if you have a choice between (A) not having a baby or (B) having a baby and then immediately torturing it to death we can ignore the “torture to death” aspect when making this decision because the child isn’t “existing necessarily”? Maybe I’m misunderstanding, but I find any such person-affecting view very easy to dismiss.
As I said to MichaelStJules, I’m inclined to say the possibility of three-choice cases rests on a confusion, for the reasons I already gave. The A, B1, B2 case should at least be structured differently, into a sequence of two choices: (1) create/don’t create, (2) benefit/don’t benefit. (1) is incomparable in value for someone; (2) is not. Should you create a child? Well, on necessitarianism, that depends solely on the effects this has on other, necessary people (and thus not the child). Okay, once you’ve had/are going to have a child, should you torture it? Um, what do you think? If this is puzzling, recall we are trying to think in terms of personal value (‘good for’). I don’t think we can say anything is good/bad for an entity that doesn’t exist necessarily (i.e. in all contexts at hand).
FWIW, I find people do tend to very easily dismiss the view, but usually without really understanding how it works! It’s a bit like when people say “oh, utilitarianism allows murder? Clearly false. What’s the next topic?”
FWIW, I find people do tend to very easily dismiss the view, but usually without really understanding how it works!
I would just point out that Greaves and Broome probably understand how person-affecting views work and seem to find this <A, B1, B2> argument highly problematic. I used to hold a person-affecting view (genuinely I did) and when I came across this argument I found my faith in it severely tested. I haven’t really been convinced by your defence (partly because I still find necessitarianism a bit bonkers—more on this below), but I may need to think about it more.
Should you create a child? Well, on necessitarianism, that depends solely on the effects this has on other, necessary people (and thus not the child). Okay, once you’ve had/are going to have a child, should you torture it? Um, what do you think?
Perhaps I’m misunderstanding necessitarianism but it doesn’t seem hard to find bizarre implications of it (my first one was sloppy). What about the choice between:
A) Not having a child
B) Having a child you know/have very strong reasons to expect will live a dreadful life, for reasons other than what you will do to the child when it is born (a fairly realistic scenario in my opinion, e.g. a terrible genetic defect)
Necessitarianism would seem to imply the two are equally permissible and I’m pretty comfortable in saying that they are not.
Ah, well maybe we should just defer to Broome and Greaves and not engage in the object-level discussions at all! That would certainly save time… FWIW, it’s pretty common in philosophy to say “Person X conceptualises problem P in such and such a way. What they miss out is such and such.”
All views in pop ethics have bonkers results, something that is widely agreed by population ethicists. Your latest example is about the procreative asymmetry (creating happy lives is neutral, creating unhappy lives is bad). Quite a lot of people with person-affecting intuitions think there is a procreative asymmetry, so would agree with you, but it’s proved quite hard to defend. Ralf Bader, here, has a rather interesting and novel defence of it: https://homeweb.unifr.ch/BaderR/Pub/Asymmetry (R. Bader).pdf. Another strategy is to say you have no reason not to create the miserable child, but you have reason to end its life once it starts existing; this doesn’t help with scenarios where you can’t end the life.
You may just write me off as a monster, but I quite like symmetries and I’m minded to accept a symmetrical person-affecting view (at least, I put quite a bit of credence in it). The line of thought is that existence and non-existence are not comparable. The challenge in defending an asymmetric person-affecting view is arguing why it’s not good for someone to be created with a happy life, but why it is bad for them to have an unhappy life.
The line of thought is that existence and non-existence are not comparable. The challenge in defending an asymmetric person-affecting view is arguing why it’s not good for someone to be created with a happy life, but why it is bad for them to have an unhappy life.
Maybe the first is good in a sense, but the goodness and badness should be thought of as moral reasons directed from outcomes in which they exist to (the same or other) outcomes, or something like world-dependent rankings. Existence and non-existence are comparable for an individual, but only in outcomes in which the individual actually exists (or comes to exist). You might imagine this like a process of deliberation, starting from one outcome/choice, and then following the moral reasons to others whenever compelled to do so. You would check what happens starting from each choice/outcome. To illustrate the procreation asymmetry, which is pretty simple:
1) There’s no arrow starting from Nonexistence, and the person who doesn’t exist wouldn’t rank any outcomes (or have outcomes ranked for them) precisely because they don’t/won’t exist. So Nonexistence is permissible despite the presence of Positive existence as an option, since from Nonexistence, nothing is strictly better; there’s no reason from this outcome to choose otherwise.
2) From Negative existence, Nonexistence and Positive existence look better, since the individual would rank Nonexistence better for themself, or this is done for them.
3) From Positive existence, Positive existence is ranked higher than Nonexistence and Negative existence and not worse than any option, so it is permissible. It is not obligatory because of 1).
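The deliberation procedure described above can be sketched as a toy graph check. The `better_from` encoding below is my own rendering of the described arrows, not anything from the original comment:

```python
# better_from[x] lists the outcomes ranked strictly better when
# deliberating *from* outcome x (the "arrows" in the description).
better_from = {
    "Nonexistence": [],                      # no one exists to do the ranking
    "Negative existence": ["Nonexistence", "Positive existence"],
    "Positive existence": [],                # ranked above both alternatives
}

def permissible(option):
    # Permissible iff, starting from this outcome, no arrow compels
    # a move to something strictly better.
    return not better_from[option]

assert permissible("Nonexistence")            # point 1: no outgoing arrows
assert not permissible("Negative existence")  # point 2
assert permissible("Positive existence")      # point 3: permissible, not obligatory
```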
Ah, well maybe we should just defer to Broome and Greaves and not engage in the object-level discussions at all!
Hah perhaps I deserved this. I was just trying to indicate that there are people who both ‘understand the theory’ and hold that the <A, B1, B2> argument is important which was a response to your “I find people do tend to very easily dismiss the view, but usually without really understanding how it works!” comment. I concede though that you weren’t saying that of everyone.
All views in pop ethics have bonkers results, something that is widely agreed by population ethicists.
Yes, I understand that it’s a matter of accepting the least bonkers result. Personally, I find the idea that it might be neutral to bring miserable lives into this world to be up there with some of the more bonkers results.
You may just write me off as a monster, but I quite like symmetries and I’m minded to accept a symmetrical person-affecting view
I don’t write you off as a monster! We all have different intuitions about what is repugnant. It is useful to have (I think) reached a better understanding of both of our views.
My view goes something like:
I am not willing to concede that it might be neutral to bring terrible lives into this world, which means I reject necessitarianism and therefore feel the force of the <A, B1, B2> argument (as I also hold transitivity to be an important axiom). I’m not sure if I’m convinced by your argument that necessitarianism gets you out of the quandary (maybe it does; I would have to think about it more), but ultimately it doesn’t matter to me as I reject necessitarianism anyway.
I note that MichaelStJules says that you can hold onto transitivity at the expense of IIA, but I don’t think this does a whole lot for me. I am also concerned by the non-identity problem. Ultimately I’m not really convinced by arguably the least objectionable person-affecting view out there (you can see my top-level comment on this post), and this all leads me to having more credence in total utilitarianism than person-affecting views (which certainly wasn’t always the case).
The ‘bonkers result’ with total utilitarianism is the repugnant conclusion which I don’t find to be repugnant as I think “lives barely worth living” are actually pretty decent—they are worth living after all! But then there’s the “very repugnant conclusion” which still somewhat bothers me. (EDIT: I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views, although I haven’t actually read through the paper yet to understand it completely).
So overall I’m still somewhat morally uncertain about population axiology, but probably have highest credence in total utilitarianism. In any case it is interesting to note that it has been argued that even minimal credence in total utilitarianism can justify acting as a total utilitarian, if one resolves moral uncertainty by maximising expected moral value.
So all in all I’m content to act as a total utilitarian, at least for now.
I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views, although I haven’t actually read through the paper yet to understand it completely
I’d just check the definition of the Extended very repugnant conclusion (XVRC) on p. 19. Roughly, tiny changes in welfare (e.g. pin pricks, dust specks) to an appropriate base population can make up for the addition of any number of arbitrarily bad lives and the foregoing of any number of arbitrarily good lives. The base population depends on the magnitude of the change in welfare, and the bad and good lives.
The claim of the paper is that basically all theories so far have led to the XVRC.
It’s possible to come up with theories that don’t. Take Meacham’s approach, and instead of using the sum of harms, use the maximum individual harm (and the counterpart relations should be defined to minimize the max harm in the world).
Or do something like this for pairwise comparisons only, and then extend using some kind of voting method, like beatpath, as discussed in Thomas’s paper on the asymmetry.
This is similar to the view the animal rights ethicist Tom Regan described here:
Given that these conditions are fulfilled, the choice concerning who should be saved must be decided by what I term the harm principle. Space prevents me from explaining that principle fully here (see The Case, chapters 3 and 8, for my considered views). Suffice it to say that no one has a right to have his lesser harm count for more than the greater harm of another. Thus, if death would be a lesser harm for the dog than it would be for any of the human survivors—(and this is an assumption Singer does not dispute)—then the dog’s right not to be harmed would not be violated if he were cast overboard. In these perilous circumstances, assuming that no one’s right to be treated with respect has been part of their creation, the dog’s individual right not to be harmed must be weighed equitably against the same right of each of the individual human survivors.
To weigh these rights in this fashion is not to violate anyone’s right to be treated with respect; just the opposite is true, which is why numbers make no difference in such a case. Given, that is, that what we must do is weigh the harm faced by any one individual against the harm faced by each other individual, on an individual, not a group or collective basis, it then makes no difference how many individuals will each suffer a lesser, or who will each suffer a greater, harm. It would not be wrong to cast a million dogs overboard to save the four human survivors, assuming the lifeboat case were otherwise the same. But neither would it be wrong to cast a million humans overboard to save a canine survivor, if the harm death would be for the humans was, in each case, less than the harm death would be for the dog.
These approaches all sacrifice the independence of irrelevant alternatives or transitivity.
Another way to “avoid” it is to recognize gaps in welfare, so that the smallest change in welfare (in one direction from a given level) allowed is intuitively large. For example, maybe there’s a lexical threshold for sufficiently intense suffering, and a gap in welfare just before it. Suffering may be bearable to different degrees, but some kinds may just be completely unbearable, and the threshold could be where it becomes completely unbearable; see some discussion of thresholds here. Then pushing people past the threshold is extremely bad, no matter where they start, whether that’s right next to the threshold, or from non-existence.
Or, maybe there’s no gap, but just barely pushing people past that threshold is extremely bad anyway, and roughly as bad as bringing people into existence already past that threshold. I think a gap in welfare is functionally the same, but explains this better.
I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views
Not negative utilitarian axiology. The proof relies on the assumption that the utility variable u can be positive.
What if “utility” is meant to refer to the objective aspects of the beings’ experience etc. that axiologies would judge as good or bad—rather than to moral goodness or badness themselves? Then I think there are two problems:
1) Supposing it’s a fair move to aggregate all these aspects into one scalar, the theorem assumes the function f must be strictly increasing. Under this interpretation the NU function would be f(u) = min(u, 0).
2) I deny that such aggregation even is a reasonable move. Restricting to hedonic welfare for simplicity, it would be more appropriate for f to be a function of two variables, happiness and suffering. Collapsing this into a scalar input, I think, obscures some massive moral differences between different formulations of the Repugnant Conclusion, for example. Interestingly, though, if we formulate the VRC as in that paper by treating all positive values of u as “only happiness, no suffering” and all negative values as “only suffering, no happiness” (thereby making my objection on this point irrelevant) the theorem still goes through for all those axiologies. But not for NU.
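The point in (1) can be made concrete with a two-line sketch (my own illustration, using the `f(u) = min(u, 0)` function named above): the NU value function is weakly but not strictly increasing, so it falls outside the theorem’s assumptions.

```python
# The NU value function under the interpretation in (1): positive
# welfare contributes nothing; negative welfare counts in full.
def f(u):
    return min(u, 0)

assert f(-2) < f(-1) < f(0)   # strictly increasing on the negative range
assert f(1) == f(100) == 0    # but flat on the positive range: every
                              # positive welfare level collapses to 0
```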
Edit: The paper seems to acknowledge point #2, though not the implications for NU:
One way to see that a ε increase could be very repugnant is to recall Portmore’s (1999) suggestion that ε lives in the restricted RC could be “roller coaster” lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such a ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. … Moreover, if ε-changes are of the “roller coaster” form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering.
Plenty of theories avoid the RC and VRC, but this paper extends the VRC on p. 19. Basically, you can make up for the addition of an arbitrary number of arbitrarily bad lives instead of an arbitrary number of arbitrarily good lives with arbitrarily small changes to welfare to a base population, which depends on the previous factors.
For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)
Also, related to your edit, epsilon changes could flip a huge number of good or neutral lives in a base population to marginally bad lives.
For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)
This may be counterintuitive to an extent, but to me it doesn’t reach “very repugnant” territory. Misery is still reduced here; an epsilon change of the “reducing extreme suffering” sort, even if barely so, doesn’t seem morally frivolous like the creation of an epsilon-happy life or, worse, the creation of an epsilon roller coaster life. But I’ll have to think about this more. It’s a good point, thanks for bringing it to my attention.
For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell.
What would it mean to repeat this step (up to an infinite number of times)?
Intuitively, it sounds to me like the suffering gets divided more equally between those who already exist and those who do not, which ultimately leads to an infinite population where everyone has a subjectively perfect experience.
In the finite case, it leads to an extremely large population of almost perfectly untroubled lives.
If extrapolated in this way, it seems quite plausible that the population we eventually get by repeating this step is much better than the initial population.
FWIW, there’s a sense in which total utilitarianism is my 2nd favourite view: I like its symmetry and I think it has the right approach to aggregation. In so far as I am a totalist, I don’t find the repugnant conclusion repugnant. I just have issues with comparativism and impersonal value.
It’s not obvious to me totalism does ‘swamp’ if one appeals to moral uncertainty, but that’s another promissory note.
Ralf Bader, here, has a rather interesting and novel defence of it: https://homeweb.unifr.ch/BaderR/Pub/Asymmetry (R. Bader).pdf. Another strategy is to say you have no reason not to create the miserable child, but you have reason to end its life once it starts existing; this doesn’t help with scenarios where you can’t end the life.
Ya, this is interesting. Bader’s approach is basically premised on the idea that you’d want to end the life of a miserable child, and you’d want to do it as soon as possible, and ensuring this as soon as possible (in theory, not in practice) basically looks like not bringing them into existence in the first place. You could do this with the amount of badness in general, too, e.g. intensity of experiences, as I described in point 2 here until the end of the comment, for suffering specifically.
The second approach you mention seems like it would lead to dynamic inconsistency or a kind of money pump, which seems similar to Bader’s point (from this comment):
if people decide to have a child they know will be forever miserable because they don’t count the harm ahead of time, once the child is born (or the decision to have the child is made), the parent(s) may decide to euthanize (abort, etc.) them for the child’s sake. And then, they could do this [have a child expected to be miserable and then euthanize/abort them] again and again and again, knowing they’ll change their minds at each point, because at each point, although they might recognize the harm, they don’t count it until after the decision is made.
The reason they might do this is because they recognize some benefit to having the child at all, and do not anticipate the need to euthanize/abort them until after the child “counts”. Euthanizing/aborting the child could be costly and outweigh the initial benefits of having the child in the first place, so it seems best to not have the child in the first place. You might respond that not having the child is therefore in the parents’ interests, given expectations about how they will act in the future and this has nothing to do with the child’s interests, so can be handled with a symmetric person-affecting view. However, this is only true because they’re predicting they will take the child’s interests into account. So, they already are taking the child’s interests into account when deciding whether or not to have them at all, just indirectly.
And I can see some person-affecting views approaching mere/benign addition and the repugnant conclusion similarly. You bring the extra people with marginally good lives into existence to get A+, since it’s no worse than A (or better, by benign addition instead of mere addition), but then you’re compelled to redistribute welfare after the fact, and this puts you in an outcome you’d find significantly worse than had you not brought the extra people into existence in the first place. You should predict that you will want to redistribute welfare after the fact when deciding whether or not to bring the extra people into existence at all.
Yup. I suspect Bader’s approach is ultimately ad hoc (I saw him present it at a conference and haven’t been through the paper closely) but I do like it.
On the second bit, I think that’s right with the A, A+ bit: the person-affector can see that letting the new people arrive and then redistributing to everyone is worse for the original people. So if you think that’s what will happen, you should avoid it. Much the same thing to say about the child.
I think there are conceivable situations where you can’t easily just ask whether or not you should create the child first without looking at each option with the child, because how exactly you create them might matter for their welfare or the welfare of others, e.g. you can imagine choosing between a risky procedure and a safe procedure (for the child’s welfare, not whether or not they will be born) for implantation for an already fertilized egg or in vitro fertilization with an already chosen sperm and egg pair. Maybe the risky one is cheaper, which would be one kind of benefit for the parents.
To run through an example, how would you handle the benign addition argument for the repugnant conclusion (assuming world A’s population is in both world A+ and world Z, and the populations in world A+ and world Z are identical)? You could imagine the above example of in vitro fertilization being structurally similar, just an extra population of size 1 instead of 99, and smaller differences in welfare.
You can pick a pairwise comparison to rule out any option on a person-affecting view, since it looks like A < A+, A+ < Z, and Z < A. Maybe all three options should be permissible?
Or maybe something like Dasgupta’s approach? It has two steps:

1. Select the best available option for each possible population. This doesn’t require any stance on extra people or identity.
   - For one kind of wide person-affecting view, do this instead for each possible population size, rather than each population.
2. Choose between the best options from step 1. There are multiple ways to do this, and there may be multiple permissible options due to incomparability:
   2.1. Give only weight to necessary people, who are common to all options, or all of the best options from step 1. This seems closest to necessitarianism and what you’re suggesting.
   2.2. Give more weight to necessary people than extra people. I think this is Dasgupta’s original approach.
   2.3. Give only weight to necessary people and badly off people (equal or unequal weight). This captures the procreation asymmetry.
   - For a more natural kind of wide person-affecting view, only the identities of the necessary people should matter, whereas the identities of the extra people do not.
Applying this to the benign addition argument, if worlds A+ and Z have the same populations, then A+ would be ruled out in step 1, A would be chosen, and we’d avoid the repugnant conclusion. If the extra people (compared to A) are completely different between A+ and Z, and identity matters (not using any wide modifications), then no option is ruled out at step 1, and the necessitarian approach (2.1.) would lead to A+.
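To make the two steps concrete, here’s a rough Python sketch. The worlds, names, and welfare numbers are just my illustration (not from Dasgupta’s paper), and I rank same-population worlds by total welfare for simplicity, since same-people comparisons are uncontroversial:

```python
# Hypothetical benign-addition setup: worlds map person -> welfare.
A = {"p1": 100}
Aplus = {"p1": 101, "p2": 1}  # benign addition: p1 better off, p2 barely positive
Z = {"p1": 52, "p2": 52}      # redistributed, higher total

worlds = {"A": A, "A+": Aplus, "Z": Z}

def total(w):
    return sum(w.values())

# Step 1: for each possible population, keep only the best world with that
# exact population (within a fixed population, comparisons are uncontroversial).
best_per_population = {}
for name, w in worlds.items():
    pop = frozenset(w)
    if pop not in best_per_population or total(w) > total(best_per_population[pop][1]):
        best_per_population[pop] = (name, w)
survivors = dict(best_per_population.values())  # A+ is eliminated by Z here

# Step 2 (necessitarian variant 2.1): compare survivors by the welfare of
# necessary people, i.e. those who exist in every option.
necessary = set.intersection(*(set(w) for w in worlds.values()))

def necessary_welfare(w):
    return sum(w[p] for p in necessary if p in w)

choice = max(survivors, key=lambda name: necessary_welfare(survivors[name]))
print(choice)  # -> A
```

With these numbers, Z knocks out A+ in step 1 (same population, higher total), and then A beats Z on the necessary person p1’s welfare, avoiding the repugnant conclusion.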
There are also presumably different ways to handle uncertainty. While many of your decisions may affect who will exist in the future, the probabilities that a given individual who hasn’t been conceived will exist in each outcome might still be positive in each option, and you can still compare the welfare of these probabilistic people, e.g.:
1. A exists with probability 2%.
2. A exists with probability 1%, but is expected to be better off than in 1, conditional on existing in each. (Or expected to be worse off.)
3. We might also add that 1 would actually resolve with A existing if and only if 2 would actually resolve with A not existing.
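As a toy illustration (the numbers are mine), you could compare the two options for the probabilistic person like this:

```python
# A "probabilistic person" under two options: existence probability plus
# welfare conditional on existing (illustrative numbers only).
option1 = {"p_exists": 0.02, "welfare_if_exists": 10}
option2 = {"p_exists": 0.01, "welfare_if_exists": 30}  # rarer, but better off

def expected_welfare(o):
    # Unconditional expectation, treating non-existence as contributing nothing.
    return o["p_exists"] * o["welfare_if_exists"]

# A lower existence probability can be outweighed by higher conditional welfare.
print(expected_welfare(option1) < expected_welfare(option2))
```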
What about cases involving abortion or death generally (before or after an individual becomes conscious), or genetic selection (two options with the same selected individual, but better or worse off for reasons unrelated to identity, like saving for education, or the mother’s diet)?
Also, this seems to be a response like “this isn’t a problem in practice”, but whether or not it’s a problem in practice, its being a problem in theory is still a reason against the view (if we actually acknowledge it as a problem). Still, you can sacrifice the independence of irrelevant alternatives instead of transitivity and make your decisions choice-set-dependent.
Re your cases, those about abortion and death you might want to treat differently from those about creating lives. But then you might not. The cases like saving for education I’ve already discussed.
I might be inclined to say something stronger, such as that the 3-choice sets are not metaphysically possible, potentially with a caveat like ‘at least from the perspective of the choosing agent’. I think the same thing about accusations that person-affecting views are intransitive.
I upvoted this comment Michael as I also prefer rigorously engaging with these topics as opposed to “poetically tugging on readers’ intuitions” (although Joe I did enjoy reading this post and upvoted it so please don’t take this as a snarky comment!).
Michael—do you have any thoughts on Hilary Greaves’ work with John Cusbert on defending existence comparativism? She discusses it on the 80,000 Hours podcast and you can see some slides here. I don’t know if a full draft of the paper is available yet.
Also, do you have any thoughts on the argument that neutrality leads to breaking transitivity (which I present here) implying that we should probably only have neutrality for a single zero-level of wellbeing—which might lead to adopting total utilitarianism?
Hello Jack,
Yes, I’ve heard Hilary’s 80k podcast where she mentions her paper. It’s not available on her website. If it’s the same theme as in the slides you linked, then I don’t think it responds to the claims above. Bader supposes ‘better for’ is a dyadic (two-place) relation between the two lives. Hilary is responding to arguments that suppose ‘better for’ is a triadic (three-place) relation: between two worlds and the person. I don’t think I understand why one would want to formulate it the latter way. I’ll take a look at Hilary’s paper when it’s available.
Re your last point: I’m not 100% sure what you’re claiming in the other post because I found the diagrams hard to follow. You’re stating a standard version of the non-identity problem, right? I don’t think person-affecting views do face intransitivity, but that’s a promissory note that, if I’m honest, I don’t expect to get around to writing up until maybe 2022 at the earliest.
OK fair enough!
No it’s not the non-identity problem. Disappointed my diagrams didn’t work haha. Let me copy what Greaves says about this in section 5.2 of this paper:
5.2 The ‘Principle of equal existence’
If adding an extra person makes a state of affairs neither better nor worse, perhaps it results in a state of affairs that is equally as good as the original state of affairs. That is, one might try to capture the intuition of neutrality via the following principle:
The Principle of Equal Existence: Let A be any state of affairs. Let B be a state of affairs that is just like A, except that an additional person exists who does not exist in A. (In particular, all the people who exist in A also exist in B, and have the same well-being level in B as in A.) Then A and B are equally good.
As Broome (1994, 2004, pp.146-9) points out, however, this principle is all but self-contradictory. This is because there is more than one way of adding an extra person to A — one might add an extra person with well-being level 5, say (leading to state of affairs B1), or (instead) add the same extra person with well-being level 100 (leading to state of affairs B2) — and these ways are not all equally as good as one another. In our example, B2 is clearly better than B1; but the Principle of Equal Existence would require that B1 and A are equally good, and that A and B2 are equally good, in which case (by transitivity of ‘equally as good as’) B1 and B2 would have to be equally as good as one another. The Principle of Equal Existence therefore cannot be correct.
Right. Yeah, I don’t share Hilary’s intuitions and I wouldn’t analyse the situation in this way. It’s a somewhat subtle move, but I think about comparing pairs of outcomes by comparing how much better/worse they are for each person who exists in both, then adding up the individual differences (i.e. focusing on ‘personal value’; to count ‘impersonal value’ you just aggregate the welfare in each outcome and then compare those totals). I’m inclined to say A, B1, and B2 are equally as good—they are equally good for the necessary people (those who exist in all outcomes under consideration).
FWIW, I think discussants should agree that the personal value of A, B1, and B2 are the same (there are some extra complexities related to harm-minimisation views I won’t get into here). And I think discussants should also agree that the impersonal value of the outcomes is B2 > B1 > A. There is however reasonable scope for disagreement about the final value (aka ‘ultimate value’, ‘value simpliciter’, etc) of B2 vs B1 vs A, but that disagreement rests on whether one accepts the significance of impersonal and/or personal value. Neither I, nor anyone else in this post (I think) has advanced any arguments about the significance of personal vs impersonal value. That’s a separate debate. We’ve been talking about comparativism vs non-comparativism.
If you do this (I think) the problem remains? B1 and B2 have the same people but one of the people is better off in B2.
Therefore focusing on personal value, and adopting neutrality, we have:
1. A is as good as B1
2. A is as good as B2
3. By transitivity, B1 is as good as B2 (but this is clearly wrong from both a personal and impersonal point of view)
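For concreteness, here’s a tiny Python encoding of the problem (my own, not Greaves’ or Broome’s formalisation): compare worlds only via the people who exist in both, so that extra people count for nothing.

```python
# Worlds map person -> welfare; A is the world where x never exists.
A = {}
B1 = {"x": 5}    # same person x, welfare 5
B2 = {"x": 100}  # same person x, welfare 100

def equally_good(w1, w2):
    # "Personal value" comparison: only people in both worlds count.
    common = set(w1) & set(w2)
    return all(w1[p] == w2[p] for p in common)  # vacuously true if no common people

print(equally_good(A, B1))       # neutrality: A ~ B1
print(equally_good(A, B2))       # neutrality: A ~ B2
print(equally_good(B1, B2))      # False: so 'equally good' is intransitive here
```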
I think the non-comparativist has to adopt some sort of principle of neutrality (right?), and Greaves’ (well, originally Broome’s) example shows why neutrality violates some important constraint. Therefore this example should undermine non-comparativism. Joe actually mentions this argument briefly in his post (search for “Broome”).
Ah, okay. I missed that the people in B1 and B2 were supposed to be the same—it’s a so-called ‘three-choice’ case; ‘two-choice’ cases are where the only two options are that the person doesn’t exist or exists with a certain welfare level. I’m inclined to think three-choice cases, even though they are relied on a lot in the literature, are also metaphysically problematic for reasons that I’ve not seen pointed out in the literature so far. I’ve sketched my answer below, but this is also a promissory note, sorry, even though it’s ended up being rather long.
Roughly, the gist of my concern is this. A standard person-affecting route is to say the only persons who matter are those who exist necessarily (i.e. under all circumstances under consideration). This is based on the ideas discussed above that we just can’t compare existence to non-existence for someone. To generate the three-choice case, what’s needed is some action that (1) occurs to a future, necessarily-existing person and (2) benefits that person, whilst retaining that they are a future, necessary person, (3) leaves us with three outcomes to choose between. I don’t see how (1) - (3) are jointly possible. Let’s walk through that.
Why do we need (1)? Well, if they aren’t a future person, but they are, instead, a necessarily existing present person, then we’re in a choice between B1 and B2, not a choice between A, B1, and B2. Recall A is the outcome where the person doesn’t exist. So we’re down to two choices, not three.
Why do we need (2)? The type of 3-choice case that most often comes up in the literature—when people flesh out the details, rather than just stipulating that the case is possible—is where we are talking about providing medical treatment to cure an as-yet-unborn child of some genetic condition. The usual claim is: “look, obviously you should provide the treatment and that will benefit that child without changing its identity.” A usual observation made in these debates is that your genetics are a necessary condition for your identity: if you had had different genetics, you wouldn’t have existed—consider non-identical twins being different people. Let’s consider the two options: the intervention causes a different person to exist or it doesn’t.
Suppose the former is true: the genetic intervention leads person C, rather than person B, to be created. Okay, so now the choice-set is really <A, B1, C1>, not <A, B1, B2>. This is the familiar non-identity problem case.
Suppose the latter is true: the genetic intervention doesn’t change identity. Recall the person must, crucially, be a future, necessary person. But how can you change anyone’s genetics prior to their existence whilst maintaining that the original person will necessarily exist(!)? This, I’m afraid, is metaphysically problematic or, to put it in ordinary British English, bonkers.
The three-choice case enthusiast might try again by suggesting something like the following: they are considering whether to invest money for their future nephew to give to him when he turns 21. Now, we can imagine a case where your doing this sort of thing is identity changing: you tell your sibling and their spouse, who haven’t yet had the child, that you’re going to do this. It causes them to conceive later and create a different child. Fine, but here we’re back to <A, B1, C1>, as we’re talking about stopping one child existing, creating a different one instead, and benefitting that second one.
But suppose, for some reason, it’s not identity changing. Maybe the child is already in utero, or it’s a fertilised egg in a sperm bank and your sibling and their spouse are 100% going to have it, whatever you do, or something. Recall, the future person needs to exist necessarily for the three-option case to arise. Well, if there is no possibility of the child not existing, there is no outcome A anymore—at least, not as far as you are concerned; you now face a choice-set of <B1, B2> and can say normal things about why you should choose B2: it’s better for that particular child, whose identity remains unchanged.
All told, I doubt the choice-set <A, B1, B2> is (metaphysically?) possible. This is important because its existence is taken as a strong objection to person-affecting views. I don’t think the existence of choice sets like <A, B1, C1> - which is the ordinary non-identity problem - is nearly so problematic.
I think I agree with MichaelStJules here. I don’t think that the “practical” possibility of a choice set like <A, B1, B2> is in fact important. The important thing I think is that we can conceive of such a choice set—it’s not difficult to consider a scenario where I don’t exist, a scenario where I exist with happiness +5, and a scenario where I exist with happiness +10. Broome’s example is essentially a thought experiment, and thought experiments can be weird and unrealistic whilst still being very powerful (that’s what makes them so fun!).
I find this bizarre. So if you have a choice between (A) not having a baby or (B) having a baby and then immediately torturing it to death we can ignore the “torture to death” aspect when making this decision because the child isn’t “existing necessarily”? Maybe I’m misunderstanding, but I find any such person-affecting view very easy to dismiss.
As I said to MichaelStJules, I’m inclined to say the possibility of three-choice cases rests on a confusion, for the reasons I already gave. The A, B1, B2 case should at least be structured differently, into a sequence of two choices: (1) create/don’t create, (2) benefit/don’t benefit. (1) is incomparable in value for someone; (2) is not. Should you create a child? Well, on necessitarianism, that depends solely on the effects this has on other, necessary people (and thus not the child). Okay, once you’ve had/are going to have a child, should you torture it? Um, what do you think? If this is puzzling, recall we are trying to think in terms of personal value (‘good for’). I don’t think we can say anything is good/bad for an entity that doesn’t exist necessarily (i.e. in all contexts at hand).
FWIW, I find people do tend to very easily dismiss the view, but usually without really understanding how it works! It’s a bit like when people say “oh, utilitarianism allows murder? Clearly false. What’s the next topic?”
I would just point out that Greaves and Broome probably understand how person-affecting views work and seem to find this <A, B1, B2> argument highly problematic. I used to hold a person-affecting view (genuinely I did) and when I came across this argument I found my faith in it severely tested. I haven’t really been convinced by your defence (partly because I still find necessitarianism a bit bonkers—more on this below), but I may need to think about it more.
Perhaps I’m misunderstanding necessitarianism but it doesn’t seem hard to find bizarre implications of it (my first one was sloppy). What about the choice between:
A) Not having a child
B) Having a child you know/have very strong reasons to expect will live a dreadful life for reasons other than what you will do to the child when it is born (a fairly realistic scenario in my opinion e.g. terrible genetic defect)
Necessitarianism would seem to imply the two are equally permissible and I’m pretty comfortable in saying that they are not.
Ah, well maybe we should just defer to Broome and Greaves and not engage in the object-level discussions at all! That would certainly save time… FWIW, it’s pretty common in philosophy to say “Person X conceptualises problem P in such and such a way. What they miss out is such and such.”
All views in pop ethics have bonkers results, something that is widely agreed by population ethicists. Your latest example is about the procreative asymmetry (creating happy lives is neutral, creating unhappy lives is bad). Quite a lot of people with person-affecting intuitions think there is a procreative asymmetry, so would agree with you, but it’s proved quite hard to defend. Ralf Bader, here, has a rather interesting and novel defence of it: https://homeweb.unifr.ch/BaderR/Pub/Asymmetry (R. Bader).pdf. Another strategy is to say you have no reason not to create the miserable child, but you have reason to end its life once it starts existing; this doesn’t help with scenarios where you can’t end the life.
You may just write me off as a monster, but I quite like symmetries and I’m minded to accept a symmetrical person-affecting view (at least, I put quite a bunch of credence in it). The line of thought is that existence and non-existence are not comparable. The challenge in defending an asymmetric person-affecting view is arguing why it’s not good for someone to be created with a happy life, but why it is bad for them to have an unhappy life.
Maybe the first is good in a sense, but the goodness and badness should be thought of as moral reasons directed from outcomes in which they exist to (the same or other) outcomes, or something like world-dependent rankings. Existence and non-existence are comparable for an individual, but only in outcomes in which the individual actually exists (or comes to exist). You might imagine this like a process of deliberation, starting from one outcome/choice, and then following the moral reasons to others whenever compelled to do so. You would check what happens starting from each choice/outcome. To illustrate the procreation asymmetry, which is pretty simple:
1. There’s no arrow starting from Nonexistence, and the person who doesn’t exist wouldn’t rank any outcomes (or have outcomes ranked for them) precisely because they don’t/won’t exist. So Nonexistence is permissible despite the presence of Positive existence as an option, since from Nonexistence, nothing is strictly better; there’s no reason from this outcome to choose otherwise.
2. From Negative existence, Nonexistence and Positive existence look better, since the individual would rank Nonexistence better for themself, or this is done for them.
3. From Positive existence, Positive existence is ranked higher than Nonexistence and Negative existence and not worse than any option, so it is permissible. It is not obligatory because of 1.
1 and 2 together are the procreation asymmetry.
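Here’s one rough way to formalise the arrow picture in Python (my own encoding, so take it as a sketch rather than the official statement of the view): an outcome is permissible iff no arrow of moral reasons leads away from it.

```python
# World-dependent rankings: from each outcome, list what is strictly better
# as judged from within that outcome.
NONEXISTENCE, POSITIVE, NEGATIVE = "nonexistence", "positive", "negative"

def better_from(outcome):
    """Outcomes strictly better than `outcome`, judged from its own perspective."""
    if outcome == NONEXISTENCE:
        return set()               # no one exists to rank anything
    if outcome == NEGATIVE:
        return {NONEXISTENCE, POSITIVE}
    return set()                   # from positive existence, nothing beats it

# Permissible = no outgoing arrow, i.e. nothing strictly better from here.
permissible = [o for o in (NONEXISTENCE, POSITIVE, NEGATIVE) if not better_from(o)]
print(permissible)
```

So not creating is permissible, creating a happy life is permissible but not obligatory, and creating a miserable life is impermissible: the procreation asymmetry.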
I discuss this more here.
Hah perhaps I deserved this. I was just trying to indicate that there are people who both ‘understand the theory’ and hold that the <A, B1, B2> argument is important which was a response to your “I find people do tend to very easily dismiss the view, but usually without really understanding how it works!” comment. I concede though that you weren’t saying that of everyone.
Yes I understand that it’s a matter of accepting the least bonkers result. Personally I find the idea that it might be neutral to bring miserable lives into this world is up there with some of the more bonkers results.
I don’t write you off as a monster! We all have different intuitions about what is repugnant. It is useful to have (I think) reached a better understanding of both of our views.
My view goes something like:
I am not willing to concede that it might be neutral to bring terrible lives into this world which means I reject necessitarianism and therefore feel the force of the <A, B1, B2> argument (as I also hold transitivity to be an important axiom). I’m not sure if I’m convinced by your argument that necessitarianism gets you out of the quandary (maybe it does, I would have to think about it more) but ultimately it doesn’t matter to me as I reject necessitarianism anyway.
I note that MichaelStJules says that you can hold onto transitivity at the expense of IIA, but I don’t think this does a whole lot for me. I am also concerned by the non-identity problem. Ultimately I’m not really convinced by arguably the least objectionable person-affecting view out there (you can see my top-level comment on this post), and this all leads me to having more credence in total utilitarianism than person-affecting views (which certainly wasn’t always the case).
The ‘bonkers result’ with total utilitarianism is the repugnant conclusion which I don’t find to be repugnant as I think “lives barely worth living” are actually pretty decent—they are worth living after all! But then there’s the “very repugnant conclusion” which still somewhat bothers me. (EDIT: I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views, although I haven’t actually read through the paper yet to understand it completely).
So overall I’m still somewhat morally uncertain about population axiology, but probably have highest credence in total utilitarianism. In any case it is interesting to note that it has been argued that even minimal credence in total utilitarianism can justify acting as a total utilitarian, if one resolves moral uncertainty by maximising expected moral value.
So all in all I’m content to act as a total utilitarian, at least for now.
It was actually fairly useful to write that out.
I’d just check the definition of the Extended very repugnant conclusion (XVRC) on p. 19. Roughly, tiny changes in welfare (e.g. pin pricks, dust specks) to an appropriate base population can make up for the addition of any number of arbitrarily bad lives and the foregoing of any number of arbitrarily good lives. The base population depends on the magnitude of the change in welfare, and the bad and good lives.
The claim of the paper is that basically all theories so far have led to the XVRC.
It’s possible to come up with theories that don’t. Take Meacham’s approach, and instead of using the sum of harms, use the maximum individual harm (and the counterpart relations should be defined to minimize the max harm in the world).
Or do something like this for pairwise comparisons only, and then extend using some kind of voting method, like beatpath, as discussed in Thomas’s paper on the asymmetry.
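Here’s a rough sketch of what a beatpath (Schulze-style widest-path) extension of cyclic pairwise comparisons could look like; the A < A+, A+ < Z, Z < A cycle and its strengths are just my illustration, not Thomas’s actual construction:

```python
options = ["A", "A+", "Z"]
# d[x][y] = strength of the direct comparison "x beats y" (0 if x loses to y).
d = {
    "A":  {"A+": 0, "Z": 3},   # A beats Z (strength 3), loses to A+
    "A+": {"A": 1, "Z": 0},    # A+ beats A (strength 1), loses to Z
    "Z":  {"A": 0, "A+": 2},   # Z beats A+ (strength 2), loses to A
}

# Widest-path (Floyd-Warshall variant): p[x][y] = strength of the strongest
# beatpath from x to y, where a path's strength is its weakest link.
p = {x: {y: d[x][y] for y in options if y != x} for x in options}
for k in options:
    for i in options:
        for j in options:
            if k != i != j != k:
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))

# x is a winner if no y has a strictly stronger beatpath against x.
winners = [x for x in options if all(p[x][y] >= p[y][x] for y in options if y != x)]
print(winners)  # -> ['A']
```

With these strengths, the cycle resolves in favour of A: its indirect beatpaths against A+ and Z are stronger than theirs against it, so you get a transitive-looking verdict without ever denying the pairwise cycle.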
This is similar to the view the animal rights ethicist Tom Regan described here:
These approaches all sacrifice the independence of irrelevant alternatives or transitivity.
Another way to “avoid” it is to recognize gaps in welfare, so that the smallest change in welfare (in one direction from a given level) allowed is intuitively large. For example, maybe there’s a lexical threshold for sufficiently intense suffering, and a gap in welfare just before it. Suffering may be bearable to different degrees, but some kinds may just be completely unbearable, and the threshold could be where it becomes completely unbearable; see some discussion of thresholds here. Then pushing people past the threshold is extremely bad, no matter where they start, whether that’s right next to the threshold, or from non-existence.
Or, maybe there’s no gap, but just barely pushing people past that threshold is extremely bad anyway, and roughly as bad as bringing people into existence already past that threshold. I think a gap in welfare is functionally the same, but explains this better.
Not negative utilitarian axiology. The proof relies on the assumption that the utility variable u can be positive.
What if “utility” is meant to refer to the objective aspects of the beings’ experience etc. that axiologies would judge as good or bad—rather than to moral goodness or badness themselves? Then I think there are two problems:
1) Supposing it’s a fair move to aggregate all these aspects into one scalar, the theorem assumes the function f must be strictly increasing. Under this interpretation the NU function would be f(u) = min(u, 0).
2) I deny that such aggregation is even a reasonable move. Restricting to hedonic welfare for simplicity, it would be more appropriate for f to be a function of two variables, happiness and suffering. Collapsing this into a scalar input, I think, obscures some massive moral differences between different formulations of the Repugnant Conclusion, for example. Interestingly, though, if we formulate the VRC as in that paper by treating all positive values of u as “only happiness, no suffering” and all negative values as “only suffering, no happiness” (thereby making my objection on this point irrelevant), the theorem still goes through for all those axiologies. But not for NU.
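To make the point about the strictly-increasing assumption concrete, here’s a toy numeric check (the populations and welfare numbers are my own, in arbitrary integer units): under f(u) = min(u, 0), positive welfare counts for nothing, so the VRC-style trade never looks good to NU.

```python
def total_value(lives):
    # Total utilitarian value: sum of welfare.
    return sum(lives)

def nu_value(lives):
    # NU value function f(u) = min(u, 0): only negative welfare counts.
    return sum(min(u, 0) for u in lives)

base = [100] * 100                       # a hundred very good lives
vrc_world = [1] * 10**6 + [-1000] * 10   # a million epsilon-lives plus ten terrible ones

print(total_value(vrc_world) > total_value(base))  # totalism accepts the trade
print(nu_value(vrc_world) < nu_value(base))        # NU rejects it
```

Because f is flat over positive u, no number of epsilon-happy lives can offset the added bad lives, which is why the theorem’s construction doesn’t go through for NU.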
Edit: The paper seems to acknowledge point #2, though not the implications for NU:
Plenty of theories avoid the RC and VRC, but this paper extends the VRC on p. 19. Basically, you can make up for the addition of an arbitrary number of arbitrarily bad lives instead of an arbitrary number of arbitrarily good lives with arbitrarily small changes to welfare to a base population, which depends on the previous factors.
For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)
Also, related to your edit, epsilon changes could flip a huge number of good or neutral lives in a base population to marginally bad lives.
This may be counterintuitive to an extent, but to me it doesn’t reach “very repugnant” territory. Misery is still reduced here; an epsilon change of the “reducing extreme suffering” sort, even if barely so, doesn’t seem morally frivolous like the creation of an epsilon-happy life or, worse, the creation of an epsilon roller coaster life. But I’ll have to think about this more. It’s a good point, thanks for bringing it to my attention.
What would it mean to repeat this step (up to an infinite number of times)?
Intuitively, it sounds to me like the suffering gets divided more equally between those who already exist and those who do not, which ultimately leads to an infinite population where everyone has a subjectively perfect experience.
In the finite case, it leads to an extremely large population of almost perfectly untroubled lives.
If extrapolated in this way, it seems quite plausible that the population we eventually get by repeating this step is much better than the initial population.
I wrote some more about this here in reply to Jack.
Glad we made some progress!
FWIW, there’s a sense in which total utilitarianism is my 2nd favourite view: I like its symmetry and I think it has the right approach to aggregation. In so far as I am totalist, I don’t find the repugnant conclusion repugnant. I just have issues with comparativism and impersonal value.
It’s not obvious to me totalism does ‘swamp’ if one appeals to moral uncertainty, but that’s another promissory note.
Anyway, a useful discussion.
Definitely a useful discussion and I look forward to seeing you write more on all of this!
Ya, this is interesting. Bader’s approach basically is premised on the fact that you’d want to end the life of a miserable child, and you’d want to do it as soon as possible, and ensuring this as soon as possible (in theory, not in practice) basically looks like not bringing them into existence in the first place. You could do this with the amount of badness in general, too, e.g. intensity of experiences, as I described in point 2 here until the end of the comment for suffering specifically.
The second approach you mention seems like it would lead to dynamic inconsistency or a kind of money pump, which seems similar to Bader’s point (from this comment):
The reason they might do this is because they recognize some benefit to having the child at all, and do not anticipate the need to euthanize/abort them until after the child “counts”. Euthanizing/aborting the child could be costly and outweigh the initial benefits of having the child in the first place, so it seems best to not have the child in the first place. You might respond that not having the child is therefore in the parents’ interests, given expectations about how they will act in the future and this has nothing to do with the child’s interests, so can be handled with a symmetric person-affecting view. However, this is only true because they’re predicting they will take the child’s interests into account. So, they already are taking the child’s interests into account when deciding whether or not to have them at all, just indirectly.
And I can see some person-affecting views approaching mere/benign addition and the repugnant conclusion similarly. You bring the extra people with marginally good lives into existence to get A+, since it’s no worse than A (or better, by benign addition instead of mere addition), but then you’re compelled to redistribute welfare after the fact, and this puts you in an outcome you’d find significantly worse than had you not brought the extra people into existence in the first place. You should predict that you will want to redistribute welfare after the fact when deciding whether or not to bring the extra people into existence at all.
Yup. I suspect Bader’s approach is ultimately ad hoc (I saw him present it at a conf and haven’t been through the paper closely) but I do like it.
On the second bit, I think that’s right with the A, A+ bit: the person-affector can see that letting them new people arrive and then redistributing to everyone is worse for the original people. So if you think that’s what will happen, you should avoid it. Much the same thing to say about the child.
I think there are conceivable situations where you can’t easily just ask whether or not you should create the child first without looking at each option with the child, because how exactly you create them might matter for their welfare or the welfare of others, e.g. you can imagine choosing between a risky procedure and a safe procedure (for the child’s welfare, not whether or not they will be born) for implantation for an already fertilized egg or in vitro fertilization with an already chosen sperm and egg pair. Maybe the risky one is cheaper, which would be one kind of benefit for the parents.
To run through an example, how would you handle the benign addition argument for the repugnant conclusion (assuming world A’s population is in both world A+ and world Z, and the populations in world A+ and world Z are identical)? You could imagine the above example of in vitro fertilization being structurally similar, just an extra population of size 1 instead of 99, and smaller differences in welfare.
You can pick a pairwise comparison to rule out any option on a person-affecting view, since it looks like A < A+, A+ < Z, and Z < A. Maybe all three options should be permissible?
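To make the cycle concrete, here’s a minimal Python sketch. The welfare numbers and the `strictly_better` comparison are made up for illustration; it just shows how a simple person-affecting pairwise rule (compare total welfare among the people common to both worlds) generates the cycle above:

```python
# Hypothetical welfare levels, keyed by person ID. Person 0 is the original
# population of A; persons 1-3 are the extra people (absent from A).
A  = {0: 10}
Ap = {0: 11, 1: 1, 2: 1, 3: 1}   # benign addition: person 0 slightly better off
Z  = {0: 4, 1: 4, 2: 4, 3: 4}    # after redistribution: equal, but lower peak

def strictly_better(x, y):
    """A simple person-affecting comparison: x beats y iff the people who
    exist in both worlds have higher total welfare in x than in y."""
    common = x.keys() & y.keys()
    return sum(x[p] for p in common) > sum(y[p] for p in common)

# Each pairwise comparison eliminates one option, forming a cycle:
print(strictly_better(Ap, A))  # True: A < A+ (person 0 gains; extras ignored)
print(strictly_better(Z, Ap))  # True: A+ < Z (same people, higher total)
print(strictly_better(A, Z))   # True: Z < A  (person 0 is worse off in Z)
```

Since every option loses some pairwise comparison, naive pairwise elimination rules out everything, which is one way of motivating either permitting all three or moving to a choice-set-dependent rule.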
Or maybe something like Dasgupta’s approach? It has 2 steps:
1. Select the best available option for each possible population. This doesn’t require any stance on extra people or identity. (For one kind of wide person-affecting view, do this instead for each possible population size, rather than for each population.)
2. Choose between the best options from step 1. There are multiple ways to do this, and there may be multiple permissible options due to incomparability:
2.1. Give weight only to necessary people, who are common to all options, or to all of the best options from step 1. This seems closest to necessitarianism and to what you’re suggesting.
2.2. Give more weight to necessary people than to extra people. I think this is Dasgupta’s original approach.
2.3. Give weight only to necessary people and badly off people (with equal or unequal weight). This captures the procreation asymmetry.
(For a more natural kind of wide person-affecting view, only the identities of the necessary people should matter; the identities of the extra people do not.)
Applying this to the benign addition argument, if worlds A+ and Z have the same populations, then A+ would be ruled out in step 1, A would be chosen, and we’d avoid the repugnant conclusion. If the extra people (compared to A) are completely different between A+ and Z, and identity matters (not using any wide modifications), then no option is ruled out at step 1, and the necessitarian approach (2.1.) would lead to A+.
For Dasgupta’s approach, see:
http://users.ox.ac.uk/~sfop0060/pdf/Welfare%20economics%20of%20population.pdf
https://philpapers.org/rec/DASSAF-2
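The two-step procedure above can be sketched in a few lines of Python. The welfare numbers are made up, and step 1 compares options for the same population by total welfare (one natural choice, not necessarily Dasgupta’s own); A+ and Z are assumed to share the same population, as in the benign addition case:

```python
from collections import defaultdict

# Worlds as {person_id: welfare}; people absent from a world don't exist there.
worlds = {
    "A":  {0: 10},
    "A+": {0: 11, 1: 1, 2: 1, 3: 1},
    "Z":  {0: 4, 1: 4, 2: 4, 3: 4},   # same population as A+
}

def total(name):
    return sum(worlds[name].values())

# Step 1: for each possible population, keep only the best option(s) for that
# population. No stance on extra people or identity is needed here.
by_population = defaultdict(list)
for name, w in worlds.items():
    by_population[frozenset(w)].append(name)

step1 = [n for names in by_population.values()
         for n in names if total(n) == max(total(m) for m in names)]
print(sorted(step1))  # ['A', 'Z'] -- A+ loses to Z within their shared population

# Step 2, necessitarian variant (2.1): compare survivors by the welfare of the
# necessary people, i.e. those who exist in every remaining option.
necessary = set.intersection(*(set(worlds[n]) for n in step1))
choice = max(step1, key=lambda n: sum(worlds[n][p] for p in necessary))
print(choice)  # 'A' -- the repugnant conclusion is avoided
```

If instead the extra people in A+ and Z were completely different (and identity matters), A+ and Z would fall under different populations, nothing would be eliminated at step 1, and the necessitarian step 2 would favour A+, as noted above.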
There are also presumably different ways to handle uncertainty. While many of your decisions may affect who will exist in the future, the probability that a given individual who hasn’t yet been conceived will exist might still be positive under each option, and you can still compare the welfare of these probabilistic people, e.g.:
1. A exists with probability 2%.
2. A exists with probability 1%, but is expected to be better off than in 1, conditional on existing in each. (Or expected to be worse off.)
We might also add that 1 would actually resolve with A existing if and only if 2 would actually resolve with A not existing.
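The tension in that pair of options can be shown with a trivial expected-value calculation; the probabilities and welfare levels below are made up:

```python
# Option 1 makes A's existence more likely; option 2 makes A better off
# conditional on existing.
p1, w1 = 0.02, 50.0   # P(A exists), E[A's welfare | A exists] under option 1
p2, w2 = 0.01, 80.0   # the same quantities under option 2

# A's unconditional expected welfare contribution under each option:
ev1 = p1 * w1
ev2 = p2 * w2
print(ev1, ev2)  # 1.0 0.8
```

So an unconditional expected-welfare view favours option 1 here, even though A is expected to be better off in option 2 conditional on existing; which comparison matters is exactly what the different person-affecting treatments of uncertainty disagree about.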
What about cases involving abortion or death generally (before or after an individual becomes conscious), or genetic selection (two options with the same selected individual, but better or worse off for reasons unrelated to identity, like saving for education, or the mother’s diet)?
Also, this seems to be a response of the form “this isn’t a problem in practice”, but whether or not it’s a problem in practice, being a problem in theory is still a reason against the view (if we actually acknowledge it as a problem). Still, you can sacrifice the independence of irrelevant alternatives instead of transitivity and make your decisions choice-set-dependent.
Re your cases, those about abortion and death you might want to treat differently from those about creating lives. But then you might not. The cases like saving for education I’ve already discussed.
I might be inclined to say something stronger, such as that these 3-option choice sets are not metaphysically possible, potentially with a caveat like ‘at least from the perspective of the choosing agent’. I think the same thing about accusations that person-affecting views are intransitive.