# tobycrisford comments on Is the reasoning of the Repugnant Conclusion valid?

• It sounds like I have misunderstood how to apply your methodology. I would like to understand it though. How would it apply to the following case?

Status quo (A): 1 person exists at very high welfare +X

Possible new situation (B): Original person has welfare reduced to X − 2, 1000 people are created with very high welfare +X

Possible new situation (C): Original person has welfare reduced only to X − ε, 1000 people are created with small positive welfare +ε.

I’d like to understand how your theory would answer two cases: (1) We get to choose between all of A,B,C. (2) We are forced to choose between (B) and (C), because we know that the world is about to instantaneously transform into one of them.

• Neither (B) nor (C) are better than (A), because an instantaneous change from (A) to (B) or (C) would reduce real welfare (of the one already existing person).

• (A) is not better than (B) or (C) because to change (B) or (C) to (A) would cause 1000 people to disappear (which is a lot of negative real welfare).

• (B) and (C) are neither better nor worse than each other, because an instantaneous change of one into the other would involve the loss of 1000 existing people (negative real welfare), which is only compensated by the creation of imaginary people (positive imaginary welfare). It’s important here that the 1000 people in (B) and (C) are not the same people. This is the non-identity problem.
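The pairwise judgments in the bullets above can be sketched mechanically. The following is my own illustrative formalisation, not the commenter's stated theory: `X = 10` and `EPS = 0.01` are assumed stand-in values, populations are dicts of person → welfare, and a change is scored only by its effect on people who exist in the status quo (a disappearing real person loses their whole welfare; newly created people count for nothing).

```python
def real_welfare_change(status_quo, candidate):
    """Welfare change for status-quo ("real") people if the world became `candidate`.

    Populations are dicts mapping person id -> welfare. Newly created
    ("imaginary") people in `candidate` contribute nothing; a real person
    missing from `candidate` counts as losing their entire welfare.
    """
    change = 0.0
    for person, welfare in status_quo.items():
        if person in candidate:
            change += candidate[person] - welfare  # altered welfare
        else:
            change -= welfare  # a real person disappears
    return change

X, EPS = 10.0, 0.01  # illustrative values for +X and the small welfare
A = {"p0": X}
B = {"p0": X - 2, **{f"b{i}": X for i in range(1000)}}
C = {"p0": X - EPS, **{f"c{i}": EPS for i in range(1000)}}

print(real_welfare_change(A, B))  # -2.0: A -> B hurts the one real person
print(real_welfare_change(A, C))  # ~-0.01: so does A -> C, slightly
print(real_welfare_change(B, C))  # hugely negative: 1000 real people vanish
print(real_welfare_change(C, B))  # negative too: the 1000 people of (C) vanish
```

On this toy scoring, every instantaneous move between any two of A, B, C is negative for the current status quo's real people, which reproduces the pairwise "neither is better" judgments above.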

From your reply it sounds like you’re coming up with a different answer when comparing (B) to (C), because both ways round the 1000 people are always considered imaginary, as they don’t literally exist in the status quo? Is that right?

If so, that still seems to give a nonsensical answer in this case, because it would then say that (C) is better than (B) (real welfare is reduced by less), when it seems obvious that (B) is actually better. This is an even worse version of the flaw you’ve already highlighted, because the existing person you’re prioritising over the imaginary people is already at a welfare well above the 0 level.

If I’ve got something wrong and your methodology can explain the intuitively obvious answer that (B) is better than (C), and should be chosen in example (2) (regardless of their comparison to A), then I would be interested to understand how that works.

• “From your reply it sounds like you’re coming up with a different answer when comparing (B) to (C), because both ways round the 1000 people are always considered imaginary, as they don’t literally exist in the status quo? Is that right?”

If the status quo is A, then with my methodology, you cannot compare B and C directly, and I don’t think this is a problem. As I said previously, “… in particular, if you are a member of A, it’s not relevant that the population of Z disagree which is better”. Similarly, I don’t think it’s necessary that the people of A can compare B and C directly. The issue is that some of your comparisons do not have (A) as the status quo.

To fully clarify, if you are a member of X (or equivalently, X is your status quo), then you can only consider comparisons between X and other populations. You might find that B is better than X and C is not better than X. Even then, you could not objectively say B is better than C because you are working from your subjective viewpoint as a member of X. In my methodology, there is no “objective ordering” (which is what I perhaps inaccurately was referring to as a total ordering).

Thus,

“(A) is not better than (B) or (C) because to change (B) or (C) to (A) would cause 1000 people to disappear (which is a lot of negative real welfare).”

is true if you take the status quos to be (B),(C) respectively—but this is not our status quo. (Similarly for the third bullet point.)

“Neither (B) nor (C) are better than (A), because an instantaneous change from (A) to (B) or (C) would reduce real welfare (of the one already existing person).”

This is true from our viewpoint as a member of A. Hence, if we are forced to go from A to one of B or C, then it’s always a bad thing. We minimise our loss of welfare according to the methodology and pick B, the ‘least worst’ option.

• “We minimise our loss of welfare according to the methodology and pick B, the ‘least worst’ option.”

But (B) doesn’t minimise our loss of welfare. In (B) we have welfare X − 2, and in (C) we have welfare X − ε, so wouldn’t your methodology tell us to pick (C)? And this is intuitively clearly wrong in this case. It’s telling us not to make a negligible sacrifice to our welfare now in order to improve the lives of future generations, which is the same problematic conclusion that the non-identity problem gives to certain theories of population ethics.

I’m interested in how your approach would tell us to pick (B), because I still don’t understand that?

I won’t reply to your other comment, just to keep the thread in one place from now on (my fault for adding a P.S., so trying to fix the mistake). But in short, yes, I disagree, and I think that these flaws are unfortunately severe and intractable. The ‘forcing’ scenario I imagined is more like the real world than the unforced decisions. For most of us making decisions, the fact that people will exist in the future is inevitable, and we have to think about how we can influence their welfare. We are therefore in a situation like (2), where we are going to move from (A) to either (B) or (C) and we just get to pick which of (B) or (C) it will be. Similarly, figuring out how to incorporate uncertainty is also fundamental, because all real-world decisions are made under uncertainty.

• Sorry, I misread (B) and (C). You are correct that, as written in the post, (C) would then be the better choice.

However, continuing with what I meant to imply when I realised this was a forced decision, we can note that whichever of (B),(C) is picked, 1000 people will come into existence with certainty. Thus, in this case, I would argue they are effectively real. This is contrasted with the case in which the decision is not forced—then, there are no 1000 new people necessarily coming into existence, and as you correctly interpreted, the status quo is preferable (since the status quo (A) is actually an option this time).

Regarding non-identity, I would consider these 1000 new people in either (B) or (C) to be identical. I am not entirely sure how non-identity is an issue here.

I am still not quite sure what you mean by uncertainty, but I feel that the above patches up (or more accurately, correctly generalises) the model at least with regards to the example you gave. I’ll try to think of counterexamples myself.

By the way, this would also be my answer to Parfit’s “depletion” problem, which I briefly glanced over. There is no way to stop hundreds of millions of people continuing to come into existence without dramatically reducing welfare (a few nuclear blasts might stop population growth but at quite a cost to welfare). Thus, these people are effectively real. Hence, if the current generation depleted everything, this would necessarily cause a massive loss of welfare to a population which may not exist yet, but are nevertheless effectively real. So we shouldn’t do that. (That doesn’t rule out a ‘slower depletion’, but I think that’s fine.)

• You can assert that you consider the 1000 people in (B) and (C) to be identical, for the purposes of applying your theory. That does avoid the non-identity problem in this case. But the fact is that they are not the same people. They have different hopes, dreams, personalities, memories, genders, etc.

By treating these different people as equivalent, your theory has become more impersonal. This means you can no longer appeal to one of the main arguments you gave to support it: that your recommendations always align with the answer you’d get if you asked the people in the population whether they’d like to move from one situation to the other. The people in (B) would not want to move to (C), and vice versa, because that would mean they no longer exist. But your theory now gives a strong recommendation for one over the other anyway.

There are also technical problems with how you’d actually apply this logic to more complicated situations where the number of future people differs. Suppose that 1000 extra people are created in (B), but 2000 extra people are created in (C), with varying levels of welfare. How do you apply your theory then? You now need 1000 of the 2000 people in (C) to be considered ‘effectively real’, to continue avoiding non-identity-problem-like conclusions, but which 1000? How do you pick? Different choices of the way you decide to pick will give you very different answers, and again your theory is becoming more impersonal, and losing more of its initial intuitive appeal.

Another problem is what to do under uncertainty. What if instead of a forced choice between (B) and (C), the choice is between:

0.1% chance of (A), 99.9% chance of (B)

0.1000001% chance of (A), 99.9% chance of (C).

Intuitively, the recommendations here should not be very different to the original example. The first choice should still be strongly preferred. But are the 1000 people still considered ‘effectively real’ in your theory, in order to allow you to reach that conclusion? Why? They’re not guaranteed to exist, and actually, your real preferred option, (A), is more likely to happen with the second choice.

Maybe it’s possible to resolve all these complications, but I think you’re still a long way from that at the moment. And I think the theory will look a lot less intuitively appealing once you’re finished.

I’d be interested to read what the final form of the theory looks like if you do accomplish this, although I still don’t think I’m going to be convinced by a theory which will lead you to be predictably in conflict with your future self, even if you and your future self both follow the theory. I can see how that property can let you evade the repugnant conclusion logic while still sort of being transitive. But I think that property is just as undesirable to me as non-transitiveness would be.

• “You can assert that you consider the 1000 people in (B) and (C) to be identical, for the purposes of applying your theory. That does avoid the non-identity problem in this case. But the fact is that they are not the same people. They have different hopes, dreams, personalities, memories, genders, etc.”

But you stated that they don’t exist yet (that they are “created”). Thus, we have no empirical knowledge of their hopes and dreams, so the most sensible prior seems to be that they are all identical. I apologise if I am coming across as obtuse, but I really do not see how non-identity causes issues here.

“The people in (B) would not want to move to (C), and vice versa, because that would mean they no longer exist.”

Sorry, but this is quite incorrect. The people in (C) would want to move to (B). Bear in mind that when we are evaluating this decision, we now set (C) as the status quo. So the 1000 people at welfare +ε are considered to be wholly real. If you stipulate that in going to (B), these 1000 people are to be eradicated and then replaced with (imaginary) people at high welfare, then naturally the people of (C) should say no.

However, if you instead take the more reasonable route of getting from (C) to (B) via raising the real welfare of the 1000 and slightly reducing the welfare of one person, then clearly (B) is better than (C).

I think I realise what the issue may be here. When I say “going from (C) to (B)” or similar, I do not mean that (C),(B) are existent populations and (C) is suddenly becoming (B). That way, we certainly do run into issues of non-identity. Rather, (C) is a status quo and (B) is a hypothetical which may be achieved by any route. Whether the resulting people of (B) in the hypothetical are real or imaginary depends on which route you take. Naturally, the best routes involve eradicating as few real people as possible. In this instance, we can get from (C) to (B) without getting rid of anyone. The route of disappearing 1000 people and replacing them with 1000 new people is one of the worse routes. And in the original post, in one of the examples, to get from one population to the other, it was necessary to get rid of real people, with only imaginary gain. Hence, there could not exist an acceptable route to the second population -- one better than remaining at the status quo.

I appreciate now that this may have been unclear. However, I did not fully explain this because the idea of one existent population “becoming” (indeed, how?) another existent population is surely impossible and therefore not worth consideration.

“There are also technical problems with how you’d actually apply this logic to more complicated situations where the number of future people differs. Suppose that 1000 extra people are created in (B), but 2000 extra people are created in (C), with varying levels of welfare. How do you apply your theory then? You now need 1000 of the 2000 people in (C) to be considered ‘effectively real’, to continue avoiding non-identity-problem-like conclusions, but which 1000? How do you pick? Different choices of the way you decide to pick will give you very different answers, and again your theory is becoming more impersonal, and losing more of its initial intuitive appeal.”

I would say exactly the same for this. If these people are being freshly created, then I don’t see the harm in treating them as identical. If a person decided not to have a child with their partner today, but rather tomorrow, then indeed, they will almost certainly produce a different child. But the hypothetical child of today is not exactly going to complain if they were never created. That is the reasoning guiding my thought, on the intuitive level.

And given that this calculus works solely by considering welfare, naturally, it is reductive, as is every utilitarian calculus which only considers welfare. Isn’t the very idea of reducing people to their welfare impersonal?

“0.1% chance of (A), 99.9% chance of (B)

0.1000001% chance of (A), 99.9% chance of (C).”

Well, it would seem to me this is perfect for an application of the concept of expectation. Taking the expected value, in both cases ~999 people become effectively real, and the same conclusion is reached.

If the odds in the second scenario were 50-50, then the expected value is that 500 people are effectively real (999 are expected in the first scenario, 500 in the second, and since we have to pick one, we take the minimum). Then the evaluation changes. Of course, this implies there is a critical point: if the chance of (C) in the second option is sufficiently low, then both options are equally good from the perspective of (A).
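The expectation rule described here can be made concrete. A minimal sketch of my reading of it, in Python: the probabilities and population counts are taken from the example (rounding the 0.1000001% to 0.1% for simplicity), and the rule, as I understand it, is that the minimum expected number of new people across the forced options is treated as effectively real.

```python
def expected_new_people(lottery):
    """lottery: list of (probability, number_of_new_people) outcomes."""
    return sum(p * n for p, n in lottery)

# ~0.1% chance of (A) with nobody new, ~99.9% chance of 1000 new people
option_1 = [(0.001, 0), (0.999, 1000)]  # mostly (B)
option_2 = [(0.001, 0), (0.999, 1000)]  # mostly (C)

# Whichever option we pick, at least this many people are expected to
# come into existence, so this many are treated as effectively real:
print(min(expected_new_people(o) for o in (option_1, option_2)))  # 999.0

# The 50-50 variant discussed above lowers the guaranteed expectation:
option_2_fifty = [(0.5, 0), (0.5, 1000)]
print(min(expected_new_people(o) for o in (option_1, option_2_fifty)))  # 500.0
```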

The natural question then, which I also ask myself, is what if there were hundreds of scenarios, and in at least one of them no people were created? Then, supposedly, no one is effectively real. But actually, I’m not sure this is a problem. More thinking will be required here to see whether I am right or wrong.

I do very much appreciate your criticism. Equally, it is quite striking to me that whenever you have pointed out an error, it has immediately seemed clear to me what the solution would be. Certainly, this discussion has been very productive in that way and rounded out this model a bit more. I expect I will write it all up, hopefully with some further improvements, in another post some time in the future.

• “I would say exactly the same for this. If these people are being freshly created, then I don’t see the harm in treating them as identical.”

I think you missed my point. How can 1,000 people be identical to 2,000 people? Let me give a more concrete example. Suppose again we have 3 possible outcomes:

(A) (Status quo): 1 person exists at high welfare +X

(B): Original person has welfare reduced to X − 2, 1000 new people are created at welfare +X

(C): Original person has welfare reduced only to X − ε, 2000 new people are created, 1000 at welfare +ε and 1000 at welfare X + ε.

And you are forced to choose between (B) and (C).

How do you pick? I think you want to say 1000 of the potential new people are “effectively real”, but which 1000 are “effectively real” in scenario (C)? Is it the 1000 at welfare +ε? Is it the 1000 at welfare X + ε? Is it some mix of the two?

If you take the first route, (B) is strongly preferred, but if you take the second, then (C) would be preferred. There’s ambiguity here which needs to be sorted out.
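The way the two readings flip the verdict can be checked with toy numbers. This is my own sketch, with X = 10 and ε = 0.01 as assumed values; the `score` function (the existing person's welfare change plus the total welfare of whichever new people are counted as effectively real) is an illustrative stand-in for the comparison, not a rule stated in the thread.

```python
X, EPS = 10.0, 0.01  # illustrative values for +X and the small welfare

def score(original_change, effectively_real_welfares):
    """Existing person's welfare change plus welfare of effectively-real people."""
    return original_change + sum(effectively_real_welfares)

score_B = score(-2.0, [X] * 1000)  # (B): 1000 new people at welfare +X

# (C), treating the 1000 low-welfare people as the effectively real ones:
score_C_low = score(-EPS, [EPS] * 1000)
# (C), treating the 1000 high-welfare people as the effectively real ones:
score_C_high = score(-EPS, [X + EPS] * 1000)

print(score_B > score_C_low)   # True: the first reading prefers (B)
print(score_B > score_C_high)  # False: the second reading prefers (C)
```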

“Then, supposedly no one is effectively real. But actually, I’m not sure this is a problem. More thinking will be required here to see whether I am right or wrong.”

Thank you for finding and expressing my objection for me! This does seem like a fairly major problem to me.

“Sorry, but this is quite incorrect. The people in (C) would want to move to (B).”

No, they wouldn’t, because the people in (B) are different to the people in (C). You can assert that you treat them the same, but you can’t assert that they are the same. The (B) scenario with different people and the (B) scenario with the same people are both distinct, possible, outcomes, and your theory needs to handle them both. It can give the same answer to both, that’s fine, but part of the set up of my hypothetical scenario is that the people are different.

“Isn’t the very idea of reducing people to their welfare impersonal?”

Not necessarily. So called “person affecting” theories say that an act can only be wrong if it makes things worse for someone. That’s an example of a theory based on welfare which is not impersonal. Your intuitive justification for your theory seemed to have a similar flavour to this, but if we want to avoid the non-identity problem, we need to reject this appealing sounding principle. It is possible to make things worse even though there is no one who it is worse for. Your ‘effectively real’ modification does this, I just think it reduces the intuitive appeal of the argument you gave.

• “Let me give a more concrete example.”

Ah, I understand now. Certainly then there is ambiguity that needs to be sorted out. I’d like to say again that this is not something the original theory was designed to handle. Everything I’ve been saying in these comments is off the cuff rather than premeditated, so it’s not surprising that these ad hoc fixes don’t solve every conceivable problem. And again, it would appear to me that there are plenty of plausible solutions. I guess I just need to spend some time evaluating which would be best and then tidy it up in a new post.

“No, they wouldn’t, because the people in (B) are different to the people in (C). You can assert that you treat them the same, but you can’t assert that they are the same. The (B) scenario with different people and the (B) scenario with the same people are both distinct, possible, outcomes, and your theory needs to handle them both. It can give the same answer to both, that’s fine, but part of the set up of my hypothetical scenario is that the people are different.”

Then yes, as I did say in the rather lengthy explanation I gave:

“The route of disappearing 1000 people and replacing them with 1000 new people is one of the worse routes.”

If you insist that we must get rid of 1000 people and replace them with 1000 different people, then sure, (B) is worse than (C). Let me now return to your earlier objection on this point.

I’ll try explaining again briefly. With this theory, don’t think of (B), (C), etc. as populations, but rather as “distributions” that the status quo population could take. Thus, as I said:

“(B) is a hypothetical which may be achieved by any route. Whether the resulting people of (B) in the hypothetical are real or imaginary depends on which route you take.”

When a population is not the status quo, it is simply representing a population distribution that you can get to. Whichever population is not the status quo is considered in an abstract, hypothetical sense.

Now you wish to specifically consider the case where (with status quo (C)), everyone in (B) is specified to be different to the people in (C). I stress that this is not the usual sense in which comparisons are made in the theory; it is much more specific. Again, if one insists on this, then since we have to disappear 1000 people to get to (B), (B) is worse.

Your issue with this is that: “the people in (B) would not want to move to (C), and vice versa, because that would mean they no longer exist. But your theory now gives a strong recommendation for one over the other anyway.”

Now I hope the explanation is fully clear. The distribution of (B) is preferable to people in (C) (i.e. with (C) as the status quo), but if you insist that the only routes to (B) involve getting rid of most of the population and replacing them with 1000 non-identical people, then this is not preferable. When (A) is the status quo, yes, we have a strong preference for (B) over (C), because we don’t have to lose 1000 people, and I don’t see the problem with considering people with equal welfare who (in the status quo of (A)) are imaginary or “effectively real” as identical. In line with a person-affecting outlook, I give more priority to real people than to imaginary or effectively real people; I only respect the non-identity of real people. And just to add, viewing people as effectively real is not to say that they are really real (since they don’t exist yet, even if they are mathematically expected to); it has only been a way to balance the books for forced decisions.

The outcome is still, as far as I can see, consistent with transitivity and my already-avowed rejection of an objective ordering.