You say that “sacrificing the welfare of just one person so that another could be born… seems wrong”. But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don’t understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you’d be “sacrificing” the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are “sacrificing” people no matter what you do, or (what seems more plausible) talk of “sacrificing” doesn’t really make sense in this context.
Even ignoring the above, I’m confused about why you think that “the Repugnant Conclusion is a clear example of treating individuals as mere vessels/receptacles for value” given your endorsement of asymmetrical views. How are you not treating individuals as mere vessels/receptacles for value when, in deciding between two worlds both of which contain suffering but differ in the number of people they contain, you bring about the world that contains less suffering? What do you tell the person whom you subject to a life of misery so that some other person, who would have been even more miserable, is not born?
You have said that you don’t share the intuition that positive welfare has intrinsic value. But lacking this intuition, how can you intuitively compare the value of two worlds that differ only in how much positive welfare they contain?
The Repugnant Conclusion arises also at the intrapersonal level, so it would be very surprising if the reason we find it counterintuitive, insofar as we do, at the interpersonal level has to do with factors—such as treating people as mere receptacles of value or sacrificing people—that are absent at the intrapersonal level.
This comment seems to me to be requesting clarification in good faith. Might someone who downvoted it explain why, if it wouldn’t take too much time or effort? I’m fairly new to the forum and would like a more complete view of the customs.
Edited to add: Perhaps because it was perceived as lower effort than the parent comment, and required another high-effort post in response, which might have been avoided by a closer reading?
I never downvoted his comments, and have (just now) instead upvoted them.
However, I would interpret all of Pablo’s points in his response not just as requests for clarification but also as objections to my answer, in a post that’s only asking for people’s basic intuitions and reasons for objecting to the RC, and is explicitly not about technical philosophical arguments (although it’s not clear this should extend to replies to answers).
I don’t personally mind, and these are interesting points to engage with. However, I can imagine others finding it too intimidating/adversarial/argumentative.
(I’ve made a bunch of edits to the following comment within 2 hours of posting it.)
You say that “sacrificing the welfare of just one person so that another could be born… seems wrong”. But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don’t understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you’d be “sacrificing” the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are “sacrificing” people no matter what you do, or (what seems more plausible) talk of “sacrificing” doesn’t really make sense in this context.
If you’re a consequentialist whose views are transitive and complete, and satisfy the independence of irrelevant alternatives, then the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not necessarily symmetrical in practice if you hold person-affecting views, which typically require the rejection of the independence of irrelevant alternatives. I’d recommend the “wide, hard view” in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas as the view closest to common sense (that I’m aware of) that satisfies the intuitions in my answer above, and the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a “hard” view, not to make up for losses to “necessary” people, who would exist regardless. Because it’s “wide”, it “solves” the Nonidentity problem. The wide version would still reject the RC even if we’re choosing between two disjoint contingent populations, I think because “excess” (in number) contingent people with good lives wouldn’t count in this particular pairwise comparison. Another way to think about it would be like matching counterparts across worlds, and then we can talk about sacrifices as the differences in welfare between an individual and their counterpart, although I’m not sure the view entails something equivalent to this.
My own views are much more asymmetric than the views in Thomas’s work, and I lean towards negative utilitarianism, since I don’t think future contingent good lives can make up for future contingent bad lives at all.
How are you not treating individuals as mere vessels/receptacles for value when, in deciding between two worlds both of which contain suffering but differ in the number of people they contain, you bring about the world that contains less suffering? What do you tell the person whom you subject to a life of misery so that some other person, who would be even more miserable, is not born?
I tell them that I did it to prevent a greater harm that would have otherwise been experienced. The forgone benefit from someone never being born would not be experienced by that non-existent person. I have some short writing on the asymmetry here that I think can explain this better.
You have said that you don’t share the intuition that positive welfare has intrinsic value. But lacking this intuition, how can you compare the value of two worlds that differ only in how much positive welfare they contain?
Lives most people consider good overall can still involve disappointment or suffering, so the RC doesn’t necessarily differ only in how much positive welfare there is, depending on how exactly we’re imagining it. If we’re only talking about positive welfare and no negative welfare, preferences aren’t more frustrated/less satisfied than otherwise, and everyone is perfectly content in the “repugnant” world, then I wouldn’t object. If I had to make a personal sacrifice to bring someone into existence, I would probably not be perfectly content, except possibly if I thought it was the right thing to do (although I might feel some dissatisfaction either way, and less if I’m doing what I think is the right thing).
Plus, it’s worth sharing my more general objection regardless of my denial of positive welfare, since it may reflect others’ views, and they can upvote or comment to endorse it if they agree.
The Repugnant Conclusion arises also at the intrapersonal level, so it would be very surprising if the reason we find it counterintuitive, insofar as we do, at the interpersonal level has to do with factors—such as treating people as mere receptacles of value or sacrificing people—that are absent at the intrapersonal level.
Assuming intrapersonal and interpersonal tradeoffs should be treated the same (ignoring indirect effects), yes. It’s not obvious that they should be, and I think common sense ethics does not treat them the same.
But even then, the intrapersonal version (+welfarist consequentialism) also violates autonomy and means I shouldn’t do whatever I want in my world, so my objection is similar. I think “preference-affecting” views (person-affecting views applied at the level of individual preferences/desires, especially Thomas’s “hard, wide view”) would likely fare better here for structurally similar reasons, so the “solution” could be similar or even the same.
Symmetric total preference utilitarianism and average preference utilitarianism would imply that creating enough sufficiently strong satisfied preferences in a person is good for them, even if doing so means violating their consent and the preferences they already have or will have. Classical utilitarianism implies involuntary wireheading (done right) is good for a person. Preference-affecting views and antifrustrationism (negative preference utilitarianism) would only endorse violating consent or preferences for a person’s own sake in ways that depend on preferences they would have otherwise or anyway, so you violate consent/some preferences to respect others (although I think antifrustrationism does worse than asymmetric preference-affecting views for respecting preferences/consent, and deontological constraints or limiting aggregation would likely do even better).
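To make that contrast concrete, here’s a toy sketch of the bookkeeping with made-up numbers (just my own illustration; none of these views are actually stated this way):

```python
# Toy bookkeeping for one intervention: violate a person's consent (frustrating
# one existing strong preference) in order to create three new, fully satisfied
# preferences in them. All numbers are made up for illustration.

frustrated_existing = [-5]         # the violated consent/preference
created_satisfied = [3, 3, 3]      # newly created, fully satisfied preferences

# Symmetric total preference view: newly created satisfied preferences add
# positive value, so enough of them can outweigh the violation.
symmetric_total = sum(frustrated_existing) + sum(created_satisfied)   # +4: do it

# Antifrustrationist (negative preference utilitarian) view: satisfied
# preferences count as merely "not frustrated" (0), so creating new ones
# can't offset frustrating existing ones.
antifrustrationist = sum(frustrated_existing)                          # -5: don't

print(symmetric_total, antifrustrationist)
```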
[ETA: You say you’ve made edits to your post, so it’s possible some of my replies are addressed by your revisions. I am always responding to the text I’m quoting, which may differ from the final version of your comment.]
If you’re a consequentialist whose views are transitive, complete and satisfy the independence of irrelevant alternatives, the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not symmetrical if you hold person-affecting views, which typically require the rejection of the independence of irrelevant alternatives. I’d recommend the “wide, hard view” in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas as the view closest to common sense that satisfies the intuitions of my answer above (that I’m aware of), and the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a “hard” view, not losses to “necessary” people, who would exist regardless. Because it’s “wide”, it “solves” the Nonidentity problem.
I don’t have time to look into this right now, but I also feel that this probably won’t provide an answer to the question I meant to ask. (Apologies if my wording was unclear.) Call the world with few, very happy people, A, and the world with lots of mildly happy people, Z. The question is, then, simply: “If bringing about Z sacrifices people in A, why doesn’t bringing about A sacrifice people in Z?” You say that you’d be sacrificing someone “even if they would be far better off than the first person”, which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.
I tell them that I did it to prevent a greater harm that would have otherwise been experienced. The forgone benefit from someone never being born would not be experienced by that non-existent person. I have some short writing on the asymmetry here that I think can explain this better.
I don’t understand how this answer explains why you are not treating the person as a value receptacle, given that you believe this is what the total utilitarian does in the Repugnant Conclusion. I can see why a negative utilitarian and/or a person-affecting theorist would treat these two cases differently. What I don’t understand is why the difference is supposed to consist in that people are being treated as value receptacles in one case, but not in the other. This just seems to misdiagnose what’s going on here.
The comment you shared helps me understand the Asymmetry, but not your claim about value receptacles.
Lives most people consider good overall can still involve disappointment or suffering, so the RC doesn’t necessarily differ only in how much positive welfare there is, depending on how exactly we’re imagining it.
I agree that you can have people with lifetime wellbeing just above neutrality either because they live their entire lives at that level or because they have lots of ups and downs that almost perfectly cancel each other out (and anything in between). I think discussions of the Repugnant Conclusion sometimes make the stronger assumption that people’s lives are continuously just above neutrality (“muzak and potatoes”), and that people may respond to the thought experiment differently depending on whether or not this assumption is made.
For a negative utilitarian, it seems that whether the assumption is made is in fact crucial, since the “muzak and potatoes” life is as good as it can be (it lacks any unpleasantness) whereas lives in other Repugnant Conclusion scenarios could contain huge amounts of suffering. I hadn’t appreciated this point when I wrote my previous comment, but now that I do, I feel even more confused.
Assuming intrapersonal and interpersonal tradeoffs should be treated the same (ignoring indirect effects), yes. It’s not obvious that they should be, and I think common sense ethics does not treat them the same.
Oh, I wasn’t saying they should be treated the same. It’s pretty clear that commonsense morality treats them differently.
My point is that the phenomenology of the intuitions at the interpersonal and intrapersonal levels is essentially the same, which strongly suggests that the same factor is triggering those intuitions in both cases. Any explanation of the counterintuitiveness of the Repugnant Conclusion in terms of factors that are specific to the interpersonal case is therefore implausible.
Although I’m not sure I’m understanding you correctly, you then seem to be suggesting that your views can in fact vindicate the claim that people would also in some sense be sacrificed in the intrapersonal case. Is this what you are claiming? It would help me if you describe what you yourself believe, as opposed to discussing the implications of a wide variety of views.
[Of course, feel free to ignore any of this if you aren’t interested, etc.]
(FWIW, I never downvoted your comments and have upvoted them instead, and I appreciate the engagement and thoughtful questions/pushback, since it helps me make my own views clearer. Since I spent several hours on this thread, I might not respond quickly or at all to further comments.)
The question is, then, simply: “If bringing about Z sacrifices people in A, why doesn’t bringing about A sacrifice people in Z?” You say that you’d be sacrificing someone “even if they would be far better off than the first person”, which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.
Sorry, I realized after posting my reply that I hadn’t responded to that, so I tried to address it in an edit you must have missed. In short, a wide person-affecting view means that Z would involve “sacrifice” and A would not, if both populations are completely disjoint and contingent, roughly because the people in A have worse-off “counterparts” in Z, and the excess people in Z without counterparts, despite their positive welfare, don’t compensate for this. No one in Z is better off than anyone in A, so no one in Z is better off than their counterpart in A, and so there can’t be any sacrifice in a “wide” way in this direction. The Nonidentity problem would involve “sacrifice” in one way only, too, under a wide view.
(If all the people in Z already exist, and none of the people in A exist, then going from Z to A by killing everyone in Z could indeed mean “sacrificing” the people in Z for those in A, under some person-affecting views, and be bad under some such views.
Under a narrow view (instead of a wide one), with disjoint contingent populations, we’d be indifferent between A and Z, or they’d be incomparable, and both or neither would involve “sacrifice”.)
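To make the counterpart framing above a bit more concrete, here’s a toy numerical sketch (the welfare numbers and the best-with-best matching rule are illustrative assumptions of mine, not the formalism in Thomas’s paper):

```python
# Toy "wide" counterpart comparison between two disjoint, fully contingent
# populations. Welfare numbers and the matching rule are made-up assumptions.

A = [100, 100]       # a few very well-off people
Z = [1] * 1000       # many people with lives barely worth living

def sacrifice_of_choosing(chosen, alternative):
    """Total welfare loss of people in the chosen world relative to their
    matched counterparts in the forgone world (best matched with best).
    'Excess' people without counterparts are ignored in this pairwise comparison."""
    pairs = zip(sorted(chosen, reverse=True), sorted(alternative, reverse=True))
    return sum(max(w_alt - w_chosen, 0) for w_chosen, w_alt in pairs)

print(sacrifice_of_choosing(Z, A))   # 198: A-people's counterparts in Z are far worse off
print(sacrifice_of_choosing(A, Z))   # 0: no one in Z is better off than their counterpart in A
```

The asymmetry between the two directions is roughly what I mean by saying that bringing about Z involves “sacrifice” while bringing about A does not.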
On value receptacles, here’s a passage from a paper on Frick’s website in which he defends the procreation asymmetry:
For another, it feeds a common criticism of utilitarianism, namely that it treats people as fungible and views them in a quasi-instrumental fashion. Instrumental valuing is an attitude that we have towards particulars. However, to value something instrumentally is to value it, in essence, for its causal properties. But these same causal properties could just as well be instantiated by some other particular thing. Hence, insofar as a particular entity is valued only instrumentally, it is regarded as fungible. Similarly, a teleological view which regards our welfare-related reasons as purely state-regarding can be accused of taking a quasi-instrumental approach towards people. It views them as fungible receptacles for well-being, not as mattering qua individuals. Totalist utilitarianism, it is often said, does not take persons sufficiently seriously. By treating the moral significance of persons and their well-being as derivative of their contribution to valuable states of affairs, it reverses what strikes most of us as the correct order of dependence. Human wellbeing matters because people matter – not vice versa.
I haven’t thought much about this particular way of framing the receptacle objection, and what I have in mind is basically what Frick wrote later:
any reasons to confer well-being on a person are conditional on the fact of her existence.
This is a bit vague: what do we mean by “conditional”? But there are plausible interpretations that symmetric person-affecting views, asymmetric person-affecting views and negative axiologies satisfy, while the total view, reverse asymmetric person-affecting views and positive axiologies don’t really seem to have such plausible interpretations (or have fewer and/or less plausible interpretations).
I have two ways in mind that seem compatible with the procreation asymmetry, but not the total view:
First, in line with my linked shortform comment about the asymmetry, a person’s interests should only direct us from outcomes in which they (the person, or the given interests) exist or will exist to the same or other outcomes (possibly including outcomes in which they don’t exist), and all reasons with regard to a given person are of this form. I think this is basically an actualist argument (which Frick discusses and objects to in his paper). If reasons regarding an individual A that arise in an outcome in which they don’t exist directed us towards an outcome in which they do exist, that would not seem conditional on A’s existence. It’s more “conditional” if the reasons regarding a given outcome come from that outcome rather than from other outcomes.
Second, there’s Frick’s approach. Here’s a simplified evaluative version:
All of our reasons with regard to persons should be of the following form:
It is in one way better that the following is satisfied: if person A exists, then P(A),
where P is a predicate that depends terminally only on A’s interests.
Setting P(A)=”A has a life worth living” would give us reason to prevent lives not worth living. Plus, there’s no P(A) we could use that would imply that a given world with A is in one way better (due to the statement with P(A)) than a given world without A. So, this is compatible with the procreation asymmetry, but not the total view.
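Here’s a minimal sketch of why, reading the conditional as a material implication (my own simplification for illustration, not Frick’s machinery; the predicate and welfare numbers are hypothetical):

```python
# Toy reading of "it is in one way better that: if person A exists, then P(A)",
# evaluated across three hypothetical worlds.

def conditional_satisfied(world, person, P):
    """'If `person` exists in `world`, then P holds of them' --
    vacuously satisfied whenever the person doesn't exist in that world."""
    return (person not in world) or P(world[person])

life_worth_living = lambda welfare: welfare > 0   # one candidate predicate P

world_without_A = {}               # A is never created
world_happy_A = {"A": 10}          # A exists with a good life
world_miserable_A = {"A": -10}     # A exists with a bad life

print(conditional_satisfied(world_without_A, "A", life_worth_living))    # True (vacuously)
print(conditional_satisfied(world_happy_A, "A", life_worth_living))      # True
print(conditional_satisfied(world_miserable_A, "A", life_worth_living))  # False

# Nonexistence always satisfies the conditional, so no choice of P yields a
# reason in favour of creating A, but there can be a reason against creating
# A with a miserable life.
```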
It could be “wide” and solve the Nonidentity problem, since we can find P such that P would be satisfied for B but not A, if B would be better off than A, so we would have more reasons for A not to exist than for B not to exist.
It’s also compatible with antifrustrationism and negative utilitarianism in a few ways:
If we apply it to preferences instead of whole persons, with predicates like P(A)=”A is satisfied”
If we use predicates like “P(A)=if A has interest y, then y is satisfied at least to degree d”
If we use predicates like “P(A)=A has welfare at least w”, allowing for the possibility that more positive welfare is better than less for an existing individual, but being perfectionistic about it, so that anything short of the best is worse than nonexistence.
I think part of what follows in Frick’s paper is about applying/extending this in a way that isn’t basically antinatalist.
For a negative utilitarian, it seems that whether the assumption is made is in fact crucial, since the “muzak and potatoes” life is as good as it can be (it lacks any unpleasantness) whereas other lives could contain huge amounts of suffering.
Ya, this seems right to me.
My point is that the phenomenology of the intuitions at the interpersonal and intrapersonal levels is essentially the same, which strongly suggests that the same factor is triggering those intuitions in both cases.
What do you mean by “the phenomenology of the intuitions” here?
One important difference between the interpersonal and intrapersonal cases is that in the intrapersonal case, people may (or may not!) prefer to live much longer overall, even sacrificing their other interests. It’s not clear they’re actually worse off overall or even at each moment in something that might “look” like Z, once we take the preference(s) for Z over A into account. We might be miscalculating the utilities before doing so. For something similar to happen in the interpersonal case, the people in A would have to prefer Z, and then similarly, Z wouldn’t seem so objectionable.
Although I’m not sure I’m understanding you correctly, you then seem to be suggesting that your views can in fact vindicate the claim that you’d be sacrificing your future selves or treating them as value receptacles. Is this what you are claiming? It would help me if you describe what you yourself believe, as opposed to discussing the implications of a wide variety of views.
It’s more about my interests/preferences than my future selves, and not sacrificing them or treating them as value receptacles. I think respect for autonomy/preferences requires not treating our preferences as mere value receptacles that you can just make more of to get more value and make things go better, and this can rule out both the interpersonal RC and the intrapersonal RC. This is in principle, ignoring other reasons, indirect effects, etc., so not necessarily in practice.
I have moral uncertainty, and I’m sympathetic to multiple views, but what they have in common is that I deny the existence of terminal goods (whose creation is good in itself, or that can make up for bads or for other things that matter going worse than otherwise) and that I recognize the existence of terminal bads. They’re all versions of negative prioritarianism/utilitarianism or very similar.
Thanks for the detailed reply. For now, I will only address your comments at the end, since I haven’t read the sources you cite and haven’t thought about this much beyond what I wrote previously. (As a note of color, Johann and I did the BPhil together and used to meet every week for several hours to discuss philosophy, although he kept developing his views about population ethics after he moved to Harvard; you have rekindled my interest in reading his dissertation.)
What do you mean by “the phenomenology of the intuitions” here?
I mean that the intuitions triggered by the interpersonal and the intrapersonal cases feel very similar from the inside. For example, if I try to describe why the interpersonal case feels repugnant, I’m inclined to say stuff like “it feels like something would be missing” or “there’s more to life than that”; and this is exactly what I would also say to describe why the intrapersonal case feels repugnant. How these two intuitions feel also makes me reasonably confident that fMRI scans of people presented with both cases would show very similar patterns of brain activity.
One important difference between the interpersonal and intrapersonal cases is that in the intrapersonal case, people may (or may not!) prefer to live much longer overall, even sacrificing their other interests. It’s not clear they’re actually worse off overall or even at each moment in something that might “look” like Z, once we take the preference(s) for Z over A into account. We might be miscalculating the utilities before doing so. For something similar to happen in the interpersonal case, the people in A would have to prefer Z, and then similarly, Z wouldn’t seem so objectionable.
I think that supposed difference is ruled out by the way the intrapersonal case is constructed. In any case, what I regard as the most interesting intrapersonal version is one where it is analogous to the interpersonal version in this respect. Of course, we can discuss a scenario of the sort you describe, but then I would no longer say that my intuitions about the two cases feel very similar, or that we can learn much by comparing the two cases.
I have moral uncertainty, and I’m sympathetic to multiple views, but what they have in common is that I deny the existence of terminal goods (whose creation is good in itself, or that can make up for bads or for other things that matter going worse than otherwise) and that I recognize the existence of terminal bads. They’re all versions of negative prioritarianism/utilitarianism or very similar.
I’m confused by your answer.