One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate either two of them suffering or one of them suffering, but they didn’t know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie’s suffering vs. alleviating Amy’s suffering, or Bob + Amy’s suffering vs. Susie’s suffering, etc.), then they would all sign an agreement directing you to send a donation such that you would alleviate two people’s suffering rather than one, since this would give each of them the best chance of having their suffering alleviated. This is related to Rawls’ veil of ignorance argument.
And if Bob, Susie, Amy, and a million others were to sign an agreement directing your choice to donate $X to alleviate either one person’s suffering or a million people’s suffering, again all of them behind a veil of ignorance, none of them would hesitate for a second to sign an agreement that said, “Please donate such that you would alleviate a million people’s suffering, and please oh please don’t just flip a coin.”
More broadly speaking, given that we live in a world where people have competing interests, we have to find a way to cooperate effectively such that we don’t constantly end up in the defect-defect corner of the Prisoner’s Dilemma. In the real world, such cooperation is hard; but in an ideal world, such cooperation would essentially look like people coming together to sign agreements behind a veil of ignorance (not necessarily literally, but at least people acting as if they had done so). And the upshot of such signed agreements is generally to endorse interpersonal-welfare-aggregative judgments of the type “alleviating two people’s suffering is better than alleviating one person’s,” even if everyone agrees with the theoretical arguments that the suffering of two people on opposite sides doesn’t literally cancel out, and that who suffers matters.
Bob, Susie, Amy, and the rest of us all want to live in a world where we cooperate, and therefore we’d all want to live in a world where we make these kinds of interpersonal welfare aggregations, at the very least during the kinds of donation decisions in your thought experiments.
Bob, Susie and Amy would sign the agreement to save the greater number if they assumed that they each had an equal chance of being in any of their positions. But, is this assumption true? For example, is it actually the case that Bob had an equal chance to be in Amy’s or Susie’s position? If it is the case, then saving the greater number would in effect give each of them a 2⁄3 chance of being saved (the best chance as you rightly noted). But if it isn’t, then why should an agreement based on a false assumption have any force? Suppose Bob, in actuality, had no chance of being in Amy’s or Susie’s position, then is it really in accordance with reason and empathy to save Amy and Susie and give Bob zero chance?
Intuitively, for Bob to have had an equal chance of being in Amy’s position or Susie’s position or his actual position, he must have had an equal chance of living Amy’s life or Susie’s life or his actual life. That’s how I intuitively understand a position: as a life position. To occupy someone’s position is to be in their life circumstances—to have their life. So understood, what would it take for Bob to have had an equal chance of being in Amy’s position or Susie’s position or his own? Presumably, it would have had to be the case that Bob was just as likely to have been born to Amy’s parents or Susie’s parents or his actual parents. But this seems very unlikely because the particular “subject-of-experience” or “self” that each of us is is probably biologically linked to our ACTUAL parents’ cells. Thus another parent could not give birth to us, even though they might give birth to a subject-of-experience that is qualitatively very similar to us (e.g., same personality, same skin complexion, etc.).
Of course, being in someone’s position need not be understood in this demanding (though intuitive) way. For example, maybe to be in Amy’s position just requires being in her actual location with her actual disease, but not e.g. being of the same sex as her or having her personality. But insofar as we are biologically linked to our actual parents, and parents are spread all over the world, it is highly unlikely that Bob had an equal chance of being in his actual position (i.e. a certain location with a certain disease) or in Amy’s position (i.e. a different location with an equally painful disease). Think also about all the biological/personality traits that make a person more or less likely to be in a given position. I, for example, certainly had zero chance of being in an NBA position, given my height. Of course, as we change in various ways, our chances to be in certain positions change too, but even so, it is extremely unlikely that any given person, at any given point in time, had an equal chance of being in any of the positions of a trade off situation that he is later to be involved in.
UPDATE (ADDED ON MAR 18): I have added the above two paragraphs to help first-time readers better understand how I understand “being in someone’s position” and why I think it is most unlikely that Bob actually had an equal chance of being in Amy’s or Susie’s position. These two paragraphs have replaced a much briefer paragraph, which you can find at the end of this reply. UPDATE (ADDED ON MAR 21): Also, no need to read past this point since someone (kbog) made me realize that the question I ask in the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.
Also, what would the implications of this objection be for cases where the pains involved in a choice situation are unequal? Presumably, EA favors saving a billion people each from a fairly painful disease rather than saving a single person from the excruciating pain of being burned alive. But is it clear that someone behind the veil of ignorance would accept this?
-
Original paragraph that was replaced: “Similarly, is it actually the case that each of us had an equal chance of being in any one of our positions? I think the answer is probably no because the particular “subject-of-experience” or “self” that each of us are is probably linked to our parents’ cells.”
I do think Bob has an equal chance to be in Amy’s or Susie’s position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don’t know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don’t know anything about their own propensities to get disease X or disease Y. Given this state of knowledge, Bob, Susie, and Amy all have the same chance as each other of getting disease X vs. disease Y, and so signing the agreement is rational. Note that it doesn’t have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob’s genetics predispose him to disease X, and so he shouldn’t sign the agreement. But Bob doesn’t know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance.
Regarding your second point, I don’t think EAs are necessarily committed to saving a billion people each from a fairly painful disease vs. a single person being burned alive. That would of course depend on how painful the disease is, vs. how painful being burned alive is. To take the extreme cases, if the painful disease were like being burned alive, except just with 1% less suffering, then I think everybody would sign the contract to save the billion people suffering from the painful disease; if the disease were rather just like getting a dust speck in your eye once in your life, then probably everyone would sign the contract to save the one person being burned alive. People’s intuitions would start to differ with more middle-of-the-road painful diseases, but I think EA is a big enough tent to accommodate all those intuitions. You don’t have to think interpersonal welfare aggregation is exactly the same as intrapersonal welfare aggregation to be an EA, as long as you think there is some reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain.
It would be a mistake to conclude, from a lack of knowledge about one’s position, that one has an equal chance of being in anyone’s position. Of course, if a person is behind the veil of ignorance and thus lacks relevant knowledge about his/her position, it might SEEM to him/her that he/she has an equal chance of being in anyone’s position, and he/she might thereby be led to make this mistake and consequently choose to save the greater number.
In any case, what I just said doesn’t really matter because you go on to say,
“Note that it doesn’t have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob’s genetics predispose him to disease X, and so he shouldn’t sign the agreement. But Bob doesn’t know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance.”
Let us then suppose that Bob, in fact, had no chance of being in either Amy’s or Susie’s position. Now imagine Bob asks you why you are choosing to save Amy and Susie and giving him no chance at all, and you reply, “Look, Bob, I wish I could help you too, but I can’t help all. And the reason I’m not giving you any chance is that if you, Amy and Susie were all behind the veil of ignorance and were led to assume that each of you had an equal chance of being in anyone else’s position, then all of you (including you, Bob) would have agreed to the principle of saving the greater number in the kind of case you find yourself in now.”
Don’t you think Bob can reasonably reply, “But Brian, whether or not I make that assumption under the veil of ignorance is irrelevant. The fact of the matter is that I had no chance of being in Amy’s or Susie’s position. What you should do shouldn’t be based on what I would agree to in a condition where I’m imagined as making a false assumption. What you should do should be based on my actual chance of being in Amy’s or Susie’s position. It should be based on the facts, and the fact is that I NEVER had a chance to be in any of their positions. Look, Brian, I’m really scared. I’m going to suffer a lot if you choose to save Amy and Susie—no less than any one of them would suffer. I can imagine that they must be very scared too, for each of them would suffer just as much as me were you to save me instead. In this case, seeing that we each have the same amount to suffer, shouldn’t you give each of us an equal chance of being helped, or at least give me some chance and not 0?”
How would you reply? I personally think that Bob’s reply shows the clear limits of this hypothetical contractual approach to determining what we should do in real life.
UPDATE (ADDED ON MAR 21): No need to read past this point since another person (kbog) made me realize that the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.
Regarding the second point, I think what any person would agree to behind the veil of ignorance (even assuming the truth of the assumption that each has an equal chance of being in anybody’s position) is highly dependent on their risk-averseness to the severest potential pain. Towards the extreme ends that you described, people of varying risk-averseness would perhaps be able to form a consensus. But it gets less clear as we consider “middle-of-the-road” cases. As you said, people’s intuitions here start to differ (which I would peg to varying degrees of risk-averseness to the severest potential pain). But the question then is whether this hypothetical contractual approach can serve as a “reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain,” since your intuition might not be the same as that of the person whose fate might rest in your hands. Is it really reasonable to decide his fate using your intuition and not his?
Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; on the contrary, Bob’s argument that you should give him a 1⁄3 chance of being helped even though he wouldn’t have signed on to such a decision behind the veil of ignorance, simply because of the actual position he finds himself in, is a sign of defection. This is not to slight Bob here—of course it’s very understandable for him to be afraid and to want a chance of being helped given his position. Rather, it’s simply a statement that if everybody argued as Bob did (not just regarding charity donations, but in general), we’d be living in a much unhappier society.
If you’re unmoved by this framing, consider this slightly different framing, illustrated by a thought experiment: Let’s say that Bob successfully argues his case to the donor, who gives Bob a 1⁄2 chance of being helped. For the purpose of this experiment, it’s best to not specify who in fact gets helped, but rather to just move forward with expected utilities. Assuming that his suffering was worth −1 utility point, consider that he netted 1⁄2 of an expected utility point from the donor’s decision to give everyone an equal chance. (Also assume that all realized painful incidents hereon are worth −1 utility point, and realized positive incidents are worth +1 utility point.)
The next day, Bob gets into a car accident, putting both him and a separate individual (say, Carl) in the hospital. Unfortunately, the hospital is short on staff that day, so the doctors + nurses have to make a decision. They can either spend their time to help Bob and Carl with their car accident injuries, or they can spend their time helping one other individual with a separate yet equally painful affliction, but they cannot do both. They also cannot split their time between the two choices. They have read your blog post on the EA forum and decide to flip a coin. Bob once again gets a 1⁄2 expected utility point from this decision.
Unfortunately, Bob’s hospital stay cost him all his savings. He and his brother Dan (who has also fallen on hard times) go to their mother Karen to ask for a loan to get them back on their feet. Karen, however, notes that her daughter (Bob and Dan’s sister) Emily has also just asked for a loan for similar reasons. She cannot give a loan to Bob and Dan and still have enough left over for Emily, and vice versa. Bob and Dan note that if they were to get the loan, they could both split that loan and convert it into +1 utility point each, whereas Emily would require the whole loan to get +1 utility point (Emily was used to a more lavish lifestyle and requires more expensive consumption to become happier). Nevertheless, Karen has read your blog post on the EA forum and decides to flip a coin. Bob nets a 1⁄2 expected utility point from this decision.
What is the conclusion from this thought experiment? Well, if decisions were made according to your decision rule, providing each individual an equal chance of being helped in each situation, then Bob nets 1⁄2 + 1⁄2 + 1⁄2 = 3⁄2 expected utility points. Following a more conventional decision rule to always help more people vs. fewer people if everyone is suffering similarly (a decision rule that would’ve been agreed upon behind a veil of ignorance), Bob would get 0 (no help from the original donor) + 1 (definite help from the doctors + nurses) + 1 (definite help from Karen) = 2 expected utility points. Under this particular set of circumstances, Bob would’ve benefitted more from the veil of ignorance approach.
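The bookkeeping above can be sketched as a toy calculation (the scenario names, the ±1 utility points, and the 1⁄2 coin-flip probabilities are just the stipulations of the thought experiment, not anything more general):

```python
# Bob's expected utility points under the two decision rules, using the
# three hypothetical scenarios from the thought experiment.
# Each tuple: (scenario, P(Bob helped) under equal-chance, is Bob in the majority?)
scenarios = [
    ("donor",    0.5, False),  # Bob alone vs. Amy + Susie
    ("hospital", 0.5, True),   # Bob + Carl vs. one other patient
    ("loan",     0.5, True),   # Bob + Dan vs. Emily
]

# Equal-chance rule: a coin flip in every scenario -> 1/2 point each time.
equal_chance = sum(p for _, p, _ in scenarios)

# Save-the-greater-number rule: Bob gets +1 exactly when he is in the majority.
save_more = sum(1 for _, _, majority in scenarios if majority)

print(equal_chance)  # 1.5
print(save_more)     # 2
```

As the comparison shows, under this particular run of scenarios the veil-of-ignorance rule leaves Bob better off in expectation (2 vs. 3⁄2), which is the arithmetic point the paragraph above is making.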
You may reasonably ask whether this set of seemingly fantastical scenarios has been precisely constructed to make my point rather than yours. After all, couldn’t Bob have found himself in more situations like the donor case rather than the hospital or loan cases, which would shift the math towards favoring your decision rule? Yes, this is certainly possible, but unlikely. Why? For the simple reason that any given individual is more likely to find themselves in a situation that affects more people than a situation that affects few. In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach.
Based on your post, it seems you are hesitant to aggregate utility over multiple individuals; for the sake of argument here, that’s fine. But the thought scenario above doesn’t require that at all; just aggregating utility over Bob’s own life, you can see how the veil-of-ignorance approach is expected to benefit him more. So if we rewind the tape of Bob’s life all the way back to the original donor scenario, where the donor is mulling over whether they want to donate to help Bob or to help Amy + Susie, the donor should consider that in all likelihood Bob’s future will be one in which the veil-of-ignorance approach will work out in his favor moreso than the everyone-gets-an-equal-chance approach. So if this donor and other donors in similar situations are to commit to one of these two decision rules, they should commit to the veil of ignorance approach; it would help Bob (and Amy, and Susie, and all other beneficiaries of donations) the most in terms of expected well-being.
Another way to put this is that, even if you don’t buy that Bob should put himself behind a veil of ignorance because he knows he doesn’t have an equal chance of being in Amy’s and Susie’s situation, and so shouldn’t decide to sign a cooperative agreement with Amy and Susie, you should buy that Bob is in effect behind a veil of ignorance regarding his own future, and therefore should sign the contract with Amy and Susie because this would be cooperative with respect to his future selves. And the donor should act in accord with this hypothetical contract.
I would respond to the second point, but this post is already long enough, and I think what I just laid out is more central.
I will also be bowing out of the discussion at this point – not because of anything you said or did, but simply since it took me much more time to write up my thoughts than I would have liked. I did enjoy the discussion and found it useful to lay out my beliefs in a thorough and hopefully clear manner, as well as to read your thoughtful replies. I do hope you decide that EA is not fatally flawed and to stick around the community :)
No worries! I’ve enjoyed our exchange as well—your latest response is both creative and funny. In particular, when I read “They have read your blog post on the EA forum and decide to flip a coin”, I literally laughed out loud (haha). It’s been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to.
Btw, for the benefit of first-time readers, I’ve updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I’ve also kept in the response what I originally wrote. Just wanted to let you know. Now onto my response.
You write, “In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach.”
This would be true if Bob has an equal chance of being in any of the positions of a given future trade off situation. That is, Bob would have a higher chance of being in the majority in any given future trade off situation if Bob has an equal chance of being in any of the positions of that situation. Importantly, just because there are more positions on the majority side of a trade off situation, that does not automatically mean that Bob has a higher chance of being among the majority. His probability or chance of being in each of the positions is crucial. I think you were implicitly assuming that Bob has an equal chance of being in any of the positions of a future trade off situation because he doesn’t know his future. But, as I mentioned in my previous post, it would be a mistake to conclude, from a lack of knowledge about one’s position, that one has an equal chance of being in anyone’s position. So, just because Bob doesn’t know anything about his future, it does not mean that he has an equal chance of being in any of the positions in the future trade off situations that he is involved in.
In my original first response to you, I very briefly explained why I think people in general do not have an equal chance of being in anybody’s position. I have since expanded that explanation. If what I say there is right, then it is not true that “over a whole lifetime of decisions to be made, Bob [or anyone else] is much more likely to benefit from the veil-of-ignorance-type approach [than the equal-chance approach].”
Thanks for your comment. I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false.
Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.
I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false.
The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.
Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.
My reply to Bob would be to essentially restate brianwang’s original comment, and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument, and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.
1) The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.
The conclusions are rational under the stipulation that each person has an equal chance of being in anybody’s position. But they are not actually rational given that the stipulation is false. So you can’t just say that the conclusions have a bearing on reality because they are necessarily rational. They are rational under the stipulation, but not when you take into account what is actually the case.
And I don’t see how the conclusion is fair to Bob when the conclusion is based on a false stipulation. Bob is a real person. He shouldn’t be treated like he had an equal chance of being in Amy’s or Susie’s position, when he in fact didn’t.
2) “My reply to Bob would be to essentially restate brianwang’s original comment...”
Sorry, can you quote the part you’re referring to?
3) ”...and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument.”
Can you explain what this “utilitarian principle of indifference argument” is?
4) “and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.”
Please don’t distort what I said. I had him say, “The fact of the matter is that I had no chance of being in Amy’s or Susie’s position.”, which is very different from saying that he was not Amy or Susie. If he wasn’t Amy or Susie, but actually had an equal chance of being either of them, then I would take the veil of ignorance approach more seriously.
I added the part where he says he is scared because I wanted it to sound realistic. It is uncharitable to assume that that forms part of my argument.
The conclusions are rational under the stipulation that each person has an equal chance of being in anybody’s position. But it is not actually rational given that the stipulation is false.
The argument of both Rawls and Harsanyi is not that it just happens to be rational for everybody to agree to their moral criteria; the argument is that the morally rational choice for society is a universal application of the rule which is egoistically rational for people behind the veil of ignorance. Of course it’s not egoistically rational for people to give anything up once they are outside the veil of ignorance, but then they’re obviously making unfair decisions, so it’s irrelevant to the thought experiment.
And I don’t see how the conclusion is fair to Bob when the conclusion is based on a false stipulation
Stipulations can’t be true or false—they’re stipulations. It’s a thought experiment for epistemic purposes.
Bob is a real person. He shouldn’t be treated like he had an equal chance of being in Amy’s or Susie’s position, when he in fact didn’t.
The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system.
Also, to be clear, the Original Position argument doesn’t say “imagine if Bob had an equal chance of being in Amy’s or Susie’s position, see how you would treat them, and then treat him that way.” If it did, then it would simply not work, because the question of exactly how you should actually treat him would still be undetermined. Instead, the argument says “imagine if Bob had an equal chance of being in Amy’s or Susie’s position, see what decision rule they would agree to, and then treat them according to that decision rule.”
Sorry, can you quote the part you’re referring to?
The first paragraph of his first comment.
Can you explain what this “utilitarian principle of indifference argument” is?
I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify why we should save the greater number is that it would force you to conclude that, in a trade off situation where you can either save one person from an imminent excruciating pain (i.e. being burned alive) or another person from the same severe pain PLUS a third person from a very minor pain (e.g. a sore throat), we should save the second and third person and give 0 chance to the first person.
I think it was F. M. Kamm who first raised this objection to the veil-of-ignorance approach in her book Morality, Mortality Vol. 1 (I haven’t actually read the book). Interestingly, kbog—another person I’ve been talking with on this forum—accepts this result. But I wonder if others like yourself would. Imagine Bob, Amy and Susie were in a trade off situation of the kind I just described, and imagine that Bob never actually had a chance to be in Amy’s or Susie’s position. In such a situation, do you think you should just save Amy and Susie?
Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It’s interesting that you and I have such intuitions about such a case – I see that as in the category of “being so obvious to me that I wouldn’t even have to hesitate to choose.” But obviously you have different intuitions here.
Part of what I’m confused about is what the positive case is for giving everyone an equal chance. I know what the positive case is for the approach of automatically saving two people vs. one: maximizing aggregate utility, which I see as the most rational, impartial way of doing good. But what’s the case for giving everyone an equal chance? What’s gained from that? Why prioritize “chances”? I mean, giving Bob a chance when most EAs would probably automatically save Amy and Susie might make Bob feel better in that particular situation, but that seems like a trivial point, and I’m guessing is not the main driver behind your reasoning.
One way of viewing “giving everyone an equal chance” is to give equal priority to different possible worlds. I’ll use the original “Bob vs. a million people” example to illustrate. In this example, there’s two possible worlds that the donor could create: in one possible world Bob is saved (world A), and in the other possible world a million people are saved (world B). World B is, of course, the world that an EA would create every time. As for world A, well: can we view this possible world as anything but a tragedy? If you flipped a coin and got this outcome, would you not feel that the world is worse off for it? Would you not instantly regret your decision to flip the coin? Or even forget flipping the coin, we can take donor choice out of it; wouldn’t you feel that a world where a hurricane ravaged and destroyed an urban community where a million people lived is worse than a world where that same hurricane petered out unexpectedly and only destroyed the home of one unlucky person?
If so, then why give tragic world A any priority at all, when we can just create world B instead? I mean, if you were asked to choose between getting a delicious chocolate milkshake vs. a bee sting, you wouldn’t say “I’ll take a 50% chance of each, please!” You would just choose the better option. Giving any chance, no matter how small, to the bee sting would be too high. Similarly, giving any priority to tragic world A, even 1 in 10 million, would be too high.
I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy’s burning to death plus Susie’s sore throat involves more or greater pain than Bob’s burning to death. Since you think minimizing aggregate pain (i.e. maximizing aggregate utility) is what we should do, your reason for just saving Amy and Susie is clear.
But importantly, I don’t share your belief that Amy’s burning to death and Susie’s sore throat involves more or greater pain than Bob’s burning to death. On this note, I have completely reworked my response to Objection 1 a few days ago to make clear why I don’t share this belief, so please read that if you want to know why. On the contrary, I think Amy’s burning to death and Susie’s sore throat involves just as much pain as Bob’s burning to death.
So part of the positive case for giving everyone an equal chance is that the suffering on either side would involve the same LEVEL/AMOUNT of pain (even though the suffering on Amy’s and Susie’s side would clearly involve more INSTANCES of pain: i.e. 2 vs 1.)
But even if the suffering on Amy’s and Susie’s side would involve slightly greater pain (as you believe), there is a positive case for giving Bob some chance of being saved, rather than 0. And that is that who suffers matters, for the reason I offered in my response to Objection 2. I think that response provides a very powerful reason for giving Bob at least some chance, and not no chance at all, even if his pain would be less great than Amy’s and Susie’s together.
(My response to Objection 3 makes clear that giving Bob some chance is not in conflict with being impartial, so that response is relevant too if you think doing so is being partial)
At the end of the day, I think one’s intuitions are based on one’s implicit beliefs and what one implicitly takes into consideration. Thus, if we shared the same implicit beliefs and implicitly took the same things into consideration, then we would share the same intuitions. So one way to view my essay is that it tries to achieve its goal by doing two things:
1) Challenging a belief (e.g. that Amy’s burning to death plus Susie’s sore throat involves more pain than Bob’s burning to death) that in part underlies the differences in intuition between me and people like yourself.
2) Reminding people of another important moral fact that should figure in their implicit thought processes (and thus be reflected in their intuitions): that who suffers matters. This moral fact is often forgotten about, which skews people’s intuitions. Once this moral fact is seriously taken into account, I bet people’s intuitions would not be the same. Importantly, I bet the vast majority of people (including yourself) would feel that giving Bob some chance of being saved is more appropriate than none, EVEN IF you still thought that Amy’s pain and Susie’s pain involve slightly more pain than Bob’s.
One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate either two of them suffering or one of them suffering, but they didn’t know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie’s suffering vs. alleviating Amy’s suffering, or Bob + Amy’s suffering vs. Susie’s suffering, etc.), then they would all sign an agreement directing you to send a donation such that you would alleviate two people’s suffering rather than one, since this would give each of them the best chance of having their suffering alleviated. This is related to Rawls’ veil of ignorance argument.
And if Bob, Susie, Amy, and a million others were to sign an agreement directing your choice to donate $X to alleviate one person’s suffering or a million peoples’ suffering, again all of them behind a veil of ignorance, none of them would hesitate for a second to sign an agreement that said, “Please donate such that you would alleviate a million people’s suffering, and please oh please don’t just flip a coin.”
More broadly speaking, given that we live in a world where people have competing interests, we have to find a way to effectively cooperate such that we don’t constantly end up in the defect-defect corner of the Prisoner’s Dilemma. In the real world, such cooperation is hard; but in an ideal world, such cooperation would essentially look like people coming together to sign agreements behind a veil of ignorance (not necessarily literally, but at least people acting as if they had done so). And the upshot of such signed agreements is generally to make the interpersonal-welfare-aggregative judgments of the type “alleviating two people’s suffering is better than one”, even if everyone agrees with the theoretical arguments that the suffering of two people on opposite sides don’t literally cancel out, and that who’s suffering matters.
Bob, Susie, Amy, and the rest of us all want to live in a world where we cooperate, and therefore we’d all want to live in a world where we make these kinds of interpersonal welfare aggregations, at the very least during the kinds of donation decisions in your thought experiments.
(For a much longer explanation of this line of reasoning, see this Scott Alexander post: http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/)
Hi Brian,
Thanks for your comment and for reading my post!
Here’s my response:
Bob, Susie, and Amy would sign the agreement to save the greater number if they assumed that they each had an equal chance of being in any of their positions. But is this assumption true? For example, is it actually the case that Bob had an equal chance to be in Amy’s or Susie’s position? If so, then saving the greater number would in effect give each of them a 2⁄3 chance of being saved (the best chance, as you rightly noted). But if not, then why should an agreement based on a false assumption have any force? Suppose Bob, in actuality, had no chance of being in Amy’s or Susie’s position; is it then really in accordance with reason and empathy to save Amy and Susie and give Bob zero chance?
Intuitively, for Bob to have had an equal chance of being in Amy’s position or Susie’s position or his actual position, he must have had an equal chance of living Amy’s life or Susie’s life or his actual life. That’s how I intuitively understand a position: as a life position. To occupy someone’s position is to be in their life circumstances—to have their life. So understood, what would it take for Bob to have had an equal chance of being in Amy’s position or Susie’s position or his own? Presumably, it would have had to be the case that Bob was just as likely to have been born to Amy’s parents or Susie’s parents or his actual parents. But this seems very unlikely, because the particular “subject-of-experience” or “self” that each of us is appears to be biologically linked to our ACTUAL parents’ cells. Thus another parent could not have given birth to us, even though they might give birth to a subject-of-experience that is qualitatively very similar to us (e.g., same personality, same skin complexion).
Of course, being in someone’s position need not be understood in this demanding (though intuitive) way. For example, maybe to be in Amy’s position just requires being in her actual location with her actual disease, but not, e.g., being of the same sex as her or having her personality. But insofar as we are biologically linked to our actual parents, and parents are spread all over the world, it is highly unlikely that Bob had an equal chance of being in his actual position (i.e., a certain location with a certain disease) or in Amy’s position (i.e., a different location with an equally painful disease). Think also about all the biological/personality traits that make a person more or less likely to be in a given position. I, for example, certainly had zero chance of being in an NBA player’s position, given my height. Of course, as we change in various ways, our chances of being in certain positions change too, but even so, it is extremely unlikely that any given person, at any given point in time, had an equal chance of being in any of the positions of a trade off situation that he is later involved in.
UPDATE (ADDED ON MAR 18): I have added the above two paragraphs to help first-time readers better understand how I understand “being in someone’s position” and why I think it is most unlikely that Bob actually had an equal chance of being in Amy’s or Susie’s position. These two paragraphs have replaced a much briefer paragraph, which you can find at the end of this reply. UPDATE (ADDED ON MAR 21): Also, no need to read past this point since someone (kbog) made me realize that the question I ask in the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.
Also, what would the implications of this objection be for cases where the pains involved in a choice situation are unequal? Presumably, EA favors saving a billion people each from a fairly painful disease rather than a single person from the excruciating pain of being burned alive. But is it clear that someone behind the veil of ignorance would accept this?
-
Original paragraph that was replaced: “Similarly, is it actually the case that each of us had an equal chance of being in any one of our positions? I think the answer is probably no because the particular “subject-of-experience” or “self” that each of us are is probably linked to our parents’ cells.”
I do think Bob has an equal chance to be in Amy’s or Susie’s position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don’t know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don’t know anything about their own propensities to get disease X or disease Y. Given this state of knowledge, Bob, Susie, and Amy all have the same chance as each other of getting disease X vs. disease Y, and so signing the agreement is rational. Note that it doesn’t have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob’s genetics predispose him to disease X, and so he shouldn’t sign the agreement. But Bob doesn’t know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance.
Regarding your second point, I don’t think EAs are necessarily committed to saving a billion people each from a fairly painful disease vs. a single person being burned alive. That would of course depend on how painful the disease is, vs. how painful being burned alive is. To take the extreme cases, if the painful disease were like being burned alive, except just with 1% less suffering, then I think everybody would sign the contract to save the billion people suffering from the painful disease; if the disease were rather just like getting a dust speck in your eye once in your life, then probably everyone would sign the contract to save the one person being burned alive. People’s intuitions would start to differ with more middle-of-the-road painful diseases, but I think EA is a big enough tent to accommodate all those intuitions. You don’t have to think interpersonal welfare aggregation is exactly the same as intrapersonal welfare aggregation to be an EA, as long as you think there is some reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain.
It would be a mistake to conclude, from a lack of knowledge about one’s position, that one has an equal chance of being in anyone’s position. Of course, if a person is behind the veil of ignorance and thus lacks relevant knowledge about his/her position, it might SEEM to him/her that he/she has an equal chance of being in anyone’s position, and he/she might thereby be led to make this mistake and consequently choose to save the greater number.
In any case, what I just said doesn’t really matter because you go on to say,
“Note that it doesn’t have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob’s genetics predispose him to disease X, and so he shouldn’t sign the agreement. But Bob doesn’t know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance.”
Let us then suppose that Bob, in fact, had no chance of being in either Amy’s or Susie’s position. Now imagine Bob asks you why you are choosing to save Amy and Susie and giving him no chance at all, and you reply, “Look, Bob, I wish I could help you too, but I can’t help everyone. And the reason I’m not giving you any chance is that if you, Amy, and Susie were all behind the veil of ignorance and were led to assume that each of you had an equal chance of being in anyone else’s position, then all of you (including you, Bob) would have agreed to the principle of saving the greater number in the kind of case you find yourself in now.”
Don’t you think Bob can reasonably reply, “But Brian, whether or not I make that assumption under the veil of ignorance is irrelevant. The fact of the matter is that I had no chance of being in Amy’s or Susie’s position. What you should do shouldn’t be based on what I would agree to in a condition where I’m imagined as making a false assumption. What you should do should be based on my actual chance of being in Amy’s or Susie’s position. It should be based on the facts, and the fact is that I NEVER had a chance to be in any of their positions. Look, Brian, I’m really scared. I’m going to suffer a lot if you choose to save Amy and Susie—no less than any one of them would suffer. I can imagine that they must be very scared too, for each of them would suffer just as much as me were you to save me instead. In this case, seeing that we each have the same amount to suffer, shouldn’t you give each of us an equal chance of being helped, or at least give me some chance and not 0?”
How would you reply? I personally think that Bob’s reply shows the clear limits of this hypothetical contractual approach to determining what we should do in real life.
UPDATE (ADDED ON MAR 21): No need to read past this point since another person (kbog) made me realize that the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.
Regarding the second point, I think what any person would agree to behind the veil of ignorance (even assuming the truth of the assumption that each has an equal chance of being in anybody’s position) is highly dependent on their risk-aversion to the severest potential pain. Towards the extreme ends that you described, people of varying risk-aversion would perhaps be able to form a consensus. But it gets less clear as we consider “middle-of-the-road” cases. As you said, people’s intuitions here start to differ (which I would peg to varying degrees of risk-aversion to the severest potential pain). But the question then is whether this hypothetical contractual approach can serve as a “reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain”, since your intuition might not be the same as that of the person whose fate might rest in your hands. Is it really reasonable to decide his fate using your intuition and not his?
Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; on the contrary, Bob’s argument that you should give him a 1⁄3 chance of being helped even though he wouldn’t have signed on to such a decision behind the veil of ignorance, simply because of the actual position he finds himself in, is a sign of defection. This is not to slight Bob here—of course it’s very understandable for him to be afraid and to want a chance of being helped given his position. Rather, it’s simply a statement that if everybody argued as Bob did (not just regarding charity donations, but in general), we’d be living in a much unhappier society.
If you’re unmoved by this framing, consider this slightly different framing, illustrated by a thought experiment: Let’s say that Bob successfully argues his case to the donor, who gives Bob a 1⁄2 chance of being helped. For the purpose of this experiment, it’s best to not specify who in fact gets helped, but rather to just move forward with expected utilities. Assuming that his suffering was worth −1 utility point, consider that he netted 1⁄2 of an expected utility point from the donor’s decision to give everyone an equal chance. (Also assume that all realized painful incidents hereon are worth −1 utility point, and realized positive incidents are worth +1 utility point.)
The next day, Bob gets into a car accident, putting both him and a separate individual (say, Carl) in the hospital. Unfortunately, the hospital is short on staff that day, so the doctors + nurses have to make a decision. They can either spend their time to help Bob and Carl with their car accident injuries, or they can spend their time helping one other individual with a separate yet equally painful affliction, but they cannot do both. They also cannot split their time between the two choices. They have read your blog post on the EA forum and decide to flip a coin. Bob once again gets a 1⁄2 expected utility point from this decision.
Unfortunately, Bob’s hospital stay cost him all his savings. He and his brother Dan (who has also fallen on hard times) go to their mother Karen to ask for a loan to get them back on their feet. Karen, however, notes that her daughter (Bob and Dan’s sister) Emily has also just asked for a loan for similar reasons. She cannot give a loan to Bob and Dan and still have enough left over for Emily, and vice versa. Bob and Dan note that if they were to get the loan, they could both split that loan and convert it into +1 utility point each, whereas Emily would require the whole loan to get +1 utility point (Emily was used to a more lavish lifestyle and requires more expensive consumption to become happier). Nevertheless, Karen has read your blog post on the EA forum and decides to flip a coin. Bob nets a 1⁄2 expected utility point from this decision.
What is the conclusion from this thought experiment? Well, if decisions were made according to your decision rule, providing each individual an equal chance of being helped in each situation, then Bob nets 1⁄2 + 1⁄2 + 1⁄2 = 3⁄2 expected utility points. Following a more conventional decision rule of always helping more people rather than fewer when everyone is suffering similarly (a decision rule that would’ve been agreed upon behind a veil of ignorance), Bob would get 0 (no help from the original donor) + 1 (definite help from the doctors + nurses) + 1 (definite help from Karen) = 2 expected utility points. Under this particular set of circumstances, Bob would’ve benefitted more from the veil of ignorance approach.
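The arithmetic above can be sketched in a few lines of Python. This is just an illustration of the tally, not anything load-bearing; the scenario names and the normalization (being helped is worth +1 relative to not being helped) are my own labels for the three situations described.

```python
# Each scenario: (name, Bob's chance of help under the equal-chance rule,
#                 whether Bob is on the majority side of the trade-off).
scenarios = [
    ("donor", 0.5, False),    # Bob alone vs. Amy + Susie
    ("hospital", 0.5, True),  # Bob + Carl vs. one other patient
    ("loan", 0.5, True),      # Bob + Dan vs. Emily
]

# Equal-chance rule: Bob's expected utility is his chance of being helped.
equal_chance_total = sum(p for _, p, _ in scenarios)

# Save-the-greater-number rule: Bob is helped exactly when he is in the majority.
greater_number_total = sum(1.0 if majority else 0.0 for _, _, majority in scenarios)

print(equal_chance_total)    # 1.5 expected utility points
print(greater_number_total)  # 2.0 expected utility points
```

Over these three situations Bob comes out ahead under the save-the-greater-number rule, which is the whole point of the thought experiment.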
You may reasonably ask whether this set of seemingly fantastical scenarios has been precisely constructed to make my point rather than yours. After all, couldn’t Bob have found himself in more situations like the donor case rather than the hospital or loan cases, which would shift the math towards favoring your decision rule? Yes, this is certainly possible, but unlikely. Why? For the simple reason that any given individual is more likely to find themselves in a situation that affects more people than a situation that affects few. In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach.
Based on your post, it seems you are hesitant to aggregate utility over multiple individuals; for the sake of argument here, that’s fine. But the thought scenario above doesn’t require that at all; just aggregating utility over Bob’s own life, you can see how the veil-of-ignorance approach is expected to benefit him more. So if we rewind the tape of Bob’s life all the way back to the original donor scenario, where the donor is mulling over whether they want to donate to help Bob or to help Amy + Susie, the donor should consider that in all likelihood Bob’s future will be one in which the veil-of-ignorance approach will work out in his favor more so than the everyone-gets-an-equal-chance approach. So if this donor and other donors in similar situations are to commit to one of these two decision rules, they should commit to the veil of ignorance approach; it would help Bob (and Amy, and Susie, and all other beneficiaries of donations) the most in terms of expected well-being.
Another way to put this is that, even if you don’t buy that Bob should put himself behind a veil of ignorance because he knows he doesn’t have an equal chance of being in Amy’s and Susie’s situation, and so shouldn’t decide to sign a cooperative agreement with Amy and Susie, you should buy that Bob is in effect behind a veil of ignorance regarding his own future, and therefore should sign the contract with Amy and Susie because this would be cooperative with respect to his future selves. And the donor should act in accord with this hypothetical contract.
I would respond to the second point, but this post is already long enough, and I think what I just laid out is more central.
I will also be bowing out of the discussion at this point – not because of anything you said or did, but simply since it took me much more time to write up my thoughts than I would have liked. I did enjoy the discussion and found it useful to lay out my beliefs in a thorough and hopefully clear manner, as well as to read your thoughtful replies. I do hope you decide that EA is not fatally flawed and to stick around the community :)
Hey Brian,
No worries! I’ve enjoyed our exchange as well—your latest response is both creative and funny. In particular, when I read “They have read your blog post on the EA forum and decide to flip a coin”, I literally laughed out loud (haha). It’s been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to.
Btw, for the benefit of first-time readers, I’ve updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I’ve also kept in the response what I originally wrote. Just wanted to let you know. Now onto my response.
You write, “In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach.”
This would be true if Bob had an equal chance of being in any of the positions of a given future trade off situation. That is, Bob would have a higher chance of being in the majority of a given future trade off situation only if he had an equal chance of being in any of its positions. Importantly, just because there are more positions on the majority side of a trade off situation, that does not automatically mean that Bob has a higher chance of being among the majority. His probability of being in each of the positions is crucial. I think you were implicitly assuming that Bob has an equal chance of being in any of the positions of a future trade off situation because he doesn’t know his future. But, as I mentioned in my previous post, it would be a mistake to conclude, from a lack of knowledge about one’s position, that one has an equal chance of being in anyone’s position. So, just because Bob doesn’t know anything about his future, it does not mean that he has an equal chance of being in any of the positions in the future trade off situations that he is involved in.
In my original first response to you, I very briefly explained why I think people in general do not have an equal chance of being in anybody’s position. I have since expanded that explanation. If what I say there is right, then it is not true that “over a whole lifetime of decisions to be made, Bob [or anyone else] is much more likely to benefit from the veil-of-ignorance-type approach [than the equal-chance approach].”
All the best!
It’s a stipulation of the Original Position, whether you look at Rawls’ formulation or Harsanyi’s. It’s not up for debate.
Hey kbog,
Thanks for your comment. I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity in reality, given that the stipulation is, in fact, false.
Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.
The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.
My reply to Bob would be to essentially restate brianwang’s original comment, and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument, and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.
1) The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.
The conclusions are rational under the stipulation that each person has an equal chance of being in anybody’s position. But it is not actually rational given that the stipulation is false. So you can’t just say that the conclusions have a bearing on reality because they are necessarily rational. They are rational under the stipulation, but not when you take into account what is actually the case.
And I don’t see how the conclusion is fair to Bob when the conclusion is based on a false stipulation. Bob is a real person. He shouldn’t be treated like he had an equal chance of being in Amy’s or Susie’s position, when he in fact didn’t.
2) “My reply to Bob would be to essentially restate brianwang’s original comment...”
Sorry, can you quote the part you’re referring to?
3) ”...and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument.”
Can you explain what this “utilitarian principle of indifference argument” is?
4) “and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.”
Please don’t distort what I said. I had him say, “The fact of the matter is that I had no chance of being in Amy’s or Susie’s position,” which is very different from saying that he was not Amy or Susie. If he wasn’t Amy or Susie, but actually had an equal chance of being either of them, then I would take the veil of ignorance approach more seriously.
I added the part where he says he is scared because I wanted it to sound realistic. It is uncharitable to assume that it forms part of my argument.
The argument of both Rawls and Harsanyi is not that it just happens to be rational for everybody to agree to their moral criteria; the argument is that the morally rational choice for society is a universal application of the rule which is egoistically rational for people behind the veil of ignorance. Of course it’s not egoistically rational for people to give anything up once they are outside the veil of ignorance, but then they’re obviously making unfair decisions, so it’s irrelevant to the thought experiment.
Stipulations can’t be true or false—they’re stipulations. It’s a thought experiment for epistemic purposes.
The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system.
Also, to be clear, the Original Position argument doesn’t say “imagine if Bob had an equal chance of being in Amy’s or Susie’s position, see how you would treat them, and then treat him that way.” If it did, then it would simply not work, because the question of exactly how you should actually treat him would still be undetermined. Instead, the argument says “imagine if Bob had an equal chance of being in Amy’s or Susie’s position, see what decision rule they would agree to, and then treat them according to that decision rule.”
The first paragraph of his first comment.
This very idea, originally argued by Harsanyi (http://piketty.pse.ens.fr/files/Harsanyi1975.pdf).
Hey Brian,
I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify saving the greater number is that it would force you to conclude that, in a trade off situation where you can either save one person from an imminent excruciating pain (e.g., being burned alive) or another person from the same severe pain PLUS a third person from a very minor pain (e.g., a sore throat), you should save the second and third person and give 0 chance to the first person.
I think it was F. M. Kamm who first raised this objection to the veil-of-ignorance approach in her book Morality, Mortality, Vol. 1 (I haven’t actually read the book). Interestingly, kbog—another person I’ve been talking with on this forum—accepts this result. But I wonder if others like yourself would. Imagine Bob, Amy, and Susie were in a trade off situation of the kind I just described, and imagine that Bob never actually had a chance to be in Amy’s or Susie’s position. In such a situation, do you think you should just save Amy and Susie?
Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It’s interesting that you and I have such intuitions about such a case – I see that as in the category of “being so obvious to me that I wouldn’t even have to hesitate to choose.” But obviously you have different intuitions here.
Part of what I’m confused about is what the positive case is for giving everyone an equal chance. I know what the positive case is for the approach of automatically saving two people vs. one: maximizing aggregate utility, which I see as the most rational, impartial way of doing good. But what’s the case for giving everyone an equal chance? What’s gained from that? Why prioritize “chances”? I mean, giving Bob a chance when most EAs would probably automatically save Amy and Susie might make Bob feel better in that particular situation, but that seems like a trivial point, and I’m guessing is not the main driver behind your reasoning.
One way of viewing “giving everyone an equal chance” is to give equal priority to different possible worlds. I’ll use the original “Bob vs. a million people” example to illustrate. In this example, there are two possible worlds that the donor could create: in one possible world Bob is saved (world A), and in the other possible world a million people are saved (world B). World B is, of course, the world that an EA would create every time. As for world A, well: can we view this possible world as anything but a tragedy? If you flipped a coin and got this outcome, would you not feel that the world is worse off for it? Would you not instantly regret your decision to flip the coin? Or even forget flipping the coin, we can take donor choice out of it; wouldn’t you feel that a world where a hurricane ravaged and destroyed an urban community where a million people lived is worse than a world where that same hurricane petered out unexpectedly and only destroyed the home of one unlucky person?
If so, then why give tragic world A any priority at all, when we can just create world B instead? I mean, if you were asked to choose between getting a delicious chocolate milkshake vs. a bee sting, you wouldn’t say “I’ll take a 50% chance of each, please!” You would just choose the better option; any chance of the bee sting, no matter how small, would be too high. Similarly, giving any priority to tragic world A, even 1 in 10 million, would be too high.
Hi Brian,
I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy’s burning to death plus Susie’s sore throat involves more or greater pain than Bob’s burning to death. Since you think minimizing aggregate pain (i.e., maximizing aggregate utility) is what we should do, your reason for just saving Amy and Susie is clear.
But importantly, I don’t share your belief that Amy’s burning to death and Susie’s sore throat involve more or greater pain than Bob’s burning to death. Rather, I think Amy’s burning to death and Susie’s sore throat involve just as much pain as Bob’s burning to death. On this note, I completely reworked my response to Objection 1 a few days ago to make clear why I hold this view, so please read that if you want to know why.
So part of the positive case for giving everyone an equal chance is that the suffering on either side would involve the same LEVEL/AMOUNT of pain (even though the suffering on Amy’s and Susie’s side would clearly involve more INSTANCES of pain: i.e., 2 vs. 1).
But even if the suffering on Amy’s and Susie’s side would involve slightly greater pain (as you believe), there is a positive case for giving Bob some chance of being saved, rather than none. And that is that who suffers matters, for the reason I offered in my response to Objection 2. I think that response provides a very powerful reason for giving Bob at least some chance, and not no chance at all, even if his pain would be less great than Amy’s and Susie’s together. (My response to Objection 3 makes clear that giving Bob some chance is not in conflict with being impartial, so that response is relevant too if you think doing so is being partial.)
At the end of the day, I think one’s intuitions are based on one’s implicit beliefs and on what one implicitly takes into consideration. Thus, if we shared the same implicit beliefs and implicitly took the same things into consideration, then we would share the same intuitions. So one way to view my essay is that it tries to achieve its goal by doing two things:
1) Challenging a belief (e.g. that Amy’s burning to death plus Susie’s sore throat involves more pain than Bob’s burning to death) that in part underlies the differences in intuition between me and people like yourself.
2) Reminding people of another important moral fact that should figure in their implicit thought processes (and thus be reflected in their intuitions): that who suffers matters. This moral fact is often forgotten, which skews people’s intuitions. Once it is seriously taken into account, I bet people’s intuitions would not be the same. Importantly, I bet the vast majority of people (including yourself) would feel that giving Bob some chance of being saved is more appropriate than none, EVEN IF you still thought that Amy’s pain and Susie’s pain together involve slightly more pain than Bob’s.