I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved
But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it’s fewer people all the same, and it’s not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you’re defending.
Yes, there are more instances of suffering. But as I have tried to argue, x instances of suffering spread across x people are just as morally bad as 1 instance of the same kind of suffering had by one other person.
But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken.
I didn’t say it was implied.
But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount.
I don’t see how my assumption is anywhere near what I want to conclude.
Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not.
It seems to me like an assumption that is plausibly shared by all.
OK, well: it’s not.
More importantly, I wonder why one wouldn’t grant that we should act differently in choice situations 2 and 3.
Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur.
I would hesitate to use “No one”. If this were true, then I would have expected more comments along those lines.
brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out.
If the reason boils down to the thought that 5 minor pains is experientially worse than 1 major pain, regardless of whether the 5 minor pains are all felt by one person or spread across 5 different people, then I would point you to my conversation with Michael_S.
Look, your comments towards him are very long and convoluted. I’m not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with “updates” alongside copies of your original comments, I find it almost painful to look through.
Finally, I just want to say that all the people I’ve conversed with on this forum so far have been very friendly and not dismissive, despite perhaps some differences in view. I wasn’t surprised by that because (presumably) most people on here are effective altruists, and it would seem rather odd for an effective altruist—someone who identifies with helping the less fortunate—to be unfriendly or dismissive. Anyways, I do hope to remain unsurprised by that. I think only in a friendly and non-dismissive atmosphere can the interlocutors benefit from their conversation.
I don’t see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards. The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere. Conversations mustn’t be friendly to be informative, and I’m really not being dismissive about anything you write which I do have the time to read.
1) “But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it’s fewer people all the same, and it’s not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you’re defending.”
To arbitrarily save fewer people is to save them on a whim. I am not suggesting that we should save them on a whim. I am suggesting that we should give each person an equal chance of being saved. They are completely different ideas.
2) “But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken.”
Please show me where I supposed that 5 minor headaches are MORALLY worse when they happen to one person than when they happen to multiple people. In both choice situations 2 and 3, I provided REASONS for saying
A) why 5 minor headaches all had by one person is morally worse than 1 major headache had by one person, and
B) why 1 major headache had by one person is morally worse than 5 minor headaches spread across 5 people.
From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don’t say that I supposed this. I provided reasons. You can reject those reasons, but that is a different story.
If you meant that I supposed that 5 minor headaches are EXPERIENTIALLY worse when they happen to one person than when they happen to multiple people, sure, it can be inferred from what I wrote that I was supposing this. But importantly, to make this assumption is not the stretch it might seem, as it strikes me (at least) as an assumption plausibly shared by many. But it turns out that Michael_S disagreed, at which point I was glad to defend this assumption. More importantly, even if I made this supposition (as we have to start from somewhere), it does not mean that by doing so, I was simply assuming and not arguing for what you quoted.
3) “But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount.”
If you don’t see an argument in my response to Objection 1, I’ll live with that since I put a lot of time into writing that essay and no one else has said the same.
4) “Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not.”
By basic arguments, I presume you mean utilitarian arguments. First off, I was not writing this for a utilitarian audience. I was writing this for an audience that finds it intuitive to save Amy and Susie instead of Bob, and I was trying to show how other (perhaps more basic) intuitions that I assumed were commonly held (e.g., saving one person from a major headache instead of 5 people each from a minor one) could provide the ingredients for showing that we should give each of them an equal chance of being helped.
If I had been writing this strictly for a utilitarian audience, I would have taken a different approach, which would have included explaining why 5 pains all had by one person is experientially worse than 5 pains spread across 5 people.
Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.
5) "brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out."
Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-averse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain.
6) “Look, your comments towards him are very long and convoluted. I’m not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with “updates” alongside copies of your original comments, I find it almost painful to look through.”
I'm sorry you find them convoluted. I updated the very first replies to Brian and Michael_S in order to try to make my position clearer for first-time readers like you. I spent a lot of time trying to make my replies clearer because I don't want to waste readers' time. If I failed to do that, I can only say I tried.
7) “I don’t see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards.”
I never asked for gentle standards. I asked for a non-dismissive and friendly attitude.
8) “The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere.”
I didn't quite understand the latter half, but yes, their time is valuable, which is why I've tried to be as clear as I can. In any case, it is a good thing to critically examine one's own views from time to time, no matter how vital one's time seems under the supposition of that view. So (if I understood the latter part correctly) you needn't worry so much about saving other people's time from my post.
9) “Conversations mustn’t be friendly to be informative, and I’m really not being dismissive about anything you write which I do have the time to read.”
A person (speaking at least for myself) is much more receptive to the content of another's comment when it is put in a friendly (though demanding) manner. Thus, friendliness helps make conversation more informative.
Whereas dismissive and unfriendly comments like “I’m not about to wade through it just to find the specific 1-2 sentences where you go astray.” or “I find it almost painful to look through.” do not.
P.S. I will not be replying to any more of your comments that I feel are uncharitable, dismissive, or show a lack of effort spent on understanding my position.
Oops, just noticed I missed a comment you made:
10) “Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur.”
As I see it, a case or state of affairs in which 5 minor headaches are all felt by one person is MORALLY WORSE than a case in which 5 minor headaches are spread across 5 persons because 5 minor headaches all felt by one person is EXPERIENTIALLY WORSE than 5 minor headaches spread across 5 persons.
I take experience to be the only morally relevant factor, and in this way, I am a moral monist (as opposed to a pluralist). For why I think the former is experientially worse than the latter, please at least read my first reply to Michael_S. Thanks.
From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don’t say that I supposed this. I provided reasons.
You simply assert that we would rather save Emma’s major headache rather than five minor ones in case 3. But if you’ve stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn’t change the story. I just don’t follow the argument here.
If you don’t see an argument in my response to Objection 1, I’ll live with that since I put a lot of time into writing that essay and no one else has said the same.
My whole point here is that your response to Objection 1 doesn’t do any work to convince us of your premises regarding the headaches. Yeah there’s an argument, but its premise is both contentious and undefended.
Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.
I’m not just speaking for utilitarians, I’m speaking for anyone who doesn’t buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.
Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-averse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain.
The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances, it’s an analysis of what we would expect of them as the rational thing to do, so the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and it would be rational of people to not make that choice, whereas the preference utilitarian just calibrates the utility scale to people’s preferences anyway so that there isn’t any dissonance between what people would select and what utilitarianism says.
1) “You simply assert that we would rather save Emma’s major headache rather than five minor ones in case 3. But if you’ve stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn’t change the story. I just don’t follow the argument here.”
I DO NOT simply assert this. In case 3, I wrote, “Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being saved because a major headache is morally worse than 5 minor headaches spread across 5 persons and it’s morally worse BECAUSE a major headache hurts more (in some non-arbitrary sense) than the 5 minor headaches spread across 5 people. Here, the non-arbitrary sense is straightforward: Emma would be hurting more than any one of the 5 others who would each experience only 1 minor headache.” (I capped ‘because’ for emphasis here)
You would not buy that reason I gave (because you believe 5 minor headaches, spread across 5 people, is experientially worse than a major headache), but that is a different story. If you were more charitable and patient while reading my post, thinking about who my audience is (many of whom aren’t utilitarians and don’t buy into interpersonal aggregation of pains) etc, I doubt you would be leveling all the accusations you have against me. It wastes both your time and my time to have to deal with them.
2) “My whole point here is that your response to Objection 1 doesn’t do any work to convince us of your premises regarding the headaches. Yeah there’s an argument, but its premise is both contentious and undefended.”
I was just using your words. You said “But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people.” As I said, I assumed a premise that I thought the vast majority of my audience would agree with (i.e., at bottom, that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people). If YOU find that premise contentious, great, we can have a discussion about it, but please don’t make it sound like my argument doesn’t do any work for anyone.
3) “I’m not just speaking for utilitarians, I’m speaking for anyone who doesn’t buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.”
Well, I don't, which is why I assumed the premise in the first place. I mean, I wouldn't assume a premise that I thought the majority of my audience would disagree with. It's certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.
4) “The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances, it’s an analysis of what we would expect of them as the rational thing to do, so the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and it would be rational of people to not make that choice, whereas the preference utilitarian just calibrates the utility scale to people’s preferences anyway so that there isn’t any dissonance between what people would select and what utilitarianism says.”
Sorry, I’m not familiar with the axioms of expected utility theory or with preference utilitarianism. But perhaps I can understand your position by asking 2 questions:
1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: In a trade off situation between saving a person from torture or saving another person from torture AND saving a third person from a minor headache, the latter two are to be saved.
2) In an actual trade off situation of this kind, would you think we ought to save the latter two?
Well, I don't, which is why I assumed the premise in the first place. I mean, I wouldn't assume a premise that I thought the majority of my audience would disagree with. It's certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.
But if anyone did accept that premise then they would already believe that the number of people suffering doesn’t matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie’s suffering is not a greater problem than Bob’s suffering. So I can’t tell if it’s actually doing any work. If not, then it’s just adding unnecessary length. That’s what I mean when I say that it’s too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob’s diseases in the first place, making your claim that Amy and Susie’s diseases are not experientially worse than Bob’s disease and so on.
Sorry, I’m not familiar with the axioms of expected utility theory or with preference utilitarianism.
PU says that we should assign moral value on the basis of people’s preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you’re using the term correctly) that they’re putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational.
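The sense of "risk-averse" used here can be put numerically. The utility figures below are assumptions for illustration only (nothing in the thread fixes them); the sketch just shows a choice that forgoes expected utility, which is what the comment above means by risk aversion.

```python
# A lottery: 50% chance of utility 100, 50% chance of utility 0.
# Its expected utility is the probability-weighted average.
lottery_eu = 0.5 * 100 + 0.5 * 0  # = 50.0

# A risk-averse agent (in the sense above) takes a guaranteed 40 instead,
# accepting less expected utility to avoid the risky outcome.
sure_thing = 40

# The sure thing is worth less in expectation, so a strict expected-utility
# maximizer would call this choice irrational.
assert sure_thing < lottery_eu
```

Whether that counts as irrational is exactly what the hedonist-utilitarian appeal to the von Neumann-Morgenstern axioms is meant to settle.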
1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: In a trade off situation between saving a person from torture or saving another person from torture AND saving a third person from a minor headache, the latter two are to be saved. 2) In an actual trade off situation of this kind, would you think we ought to save the latter two?
Yes to both.
1) “But if anyone did accept that premise then they would already believe that the number of people suffering doesn’t matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie’s suffering is not a greater problem than Bob’s suffering. So I can’t tell if it’s actually doing any work. If not, then it’s just adding unnecessary length. That’s what I mean when I say that it’s too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob’s diseases in the first place, making your claim that Amy and Susie’s diseases are not experientially worse than Bob’s disease and so on.”
The reason why I discussed those three cases was to answer the basic question: what makes one state of affairs morally worse than another. Indeed, given my broad audience, some who have no philosophy background, I wanted to start from the ground up.
From that discussion, I gathered two principles that I used to support premise 2 of my argument against Objection 1. I say “gathered” and not “deduced” because you actually don’t disagree with those two principles, even though you disagree with an assumption I made in one of the cases (i.e. case 3). What your disagreement with that assumption indicates is a disagreement with premise 1 of my argument against Objection 1.
P1. read: “The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1).”
You disagree because you think Amy’s and Susie’s pains would together be experientially worse than Bob’s pain.
All this is to say that I don’t think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.
However, it seems like I really should have defended P1 of my argument (and similarly my assumption in case 3) more thoroughly. So I do admit that my post is lacking in this respect, which I appreciate you're pointing out. I'm also sure there are ways to make it more clear and concise. I will consider your suggested approach during future editing sessions.
Update (Mar 21): After thinking through what you said some more, I’ve decided I’m going to re-do my response to Objection 1 along the lines of what you’re suggesting. Thanks for motivating this improvement.
2) “PU says that we should assign moral value on the basis of people’s preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you’re using the term correctly) that they’re putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational.”
Thanks for that explanation. I see where I went wrong in my previous reply now, so I concede this point.
3) “Yes to both.”
Ok, interesting. And, just out of curiosity, you don’t consider this as biting a bullet? I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because it comes with the additional, and relatively trivial, benefit of relieving a minor headache.
P.S. I will reply to your other comment after I’ve read the paper you linked me to. But, I do want to note that you were being very uncharitable in your reply that “Stipulations can’t be true or false—they’re stipulations. It’s a thought experiment for epistemic purposes.”
Clearly stipulations/suppositions cannot be false relative to the thought experiment. But surely they can be false relative to reality—to what is actually the case.
I don’t think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.
But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it’s not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.
If you disagree, try to sketch out a view (that isn’t blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches.
Ok, interesting. And, just out of curiosity, you don’t consider this as biting a bullet?
How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?
I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.
I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because it comes with the additional, and relatively trivial, benefit of relieving a minor headache.
Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don’t see what could possibly lead one to prefer it.
And merely having a “chance” of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don’t sit on the mere fact of the chance and covet it as though it were something to value on its own.
1) “But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it’s not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.
If you disagree, try to sketch out a view (that isn’t blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches.”
Arguing for what factors are morally relevant in determining whether one case is morally worse than another is preliminary to arguing that some specific case (i.e. Amy and Susie suffering) is morally just as bad as another specific case (i.e. Bob suffering). My 3 cases were only meant to do the former. From the 3 cases, I concluded:
1) That the amount of pain is a morally relevant factor in determining whether one case is morally worse than another.
2) That the number of instances of pain is a morally relevant factor only to the extent that it affects the amount of pain at issue (i.e., the number of instances of pain is not morally relevant in itself).
I take that to be preliminary work. Where I really dropped the ball was in my lackluster argument for P1 (and, likewise, for my assumption in case 3). No utilitarian would have found it convincing, and thus I would not have succeeded in convincing them that the outcome in which Amy and Susie both suffer is morally just as bad as the outcome in which only Bob suffers, even if they agreed with 1. and 2., which they do.
Anyways, to the extent that you think my argument for P1 sucked to the point where it was like I was begging the question against the utilitarian, I’m happy to concede this. I have since reworked my response to Objection 1 as a result, thanks to you.
2) “How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?
I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.”
Because you effectively deny the one person ANY CHANCE of being helped from torture SIMPLY BECAUSE you can prevent an additional minor headache—a very very very minor one—by helping the two. Anyways, a lot of people think that is pretty extreme. If you don’t think so, that’s perhaps mainly because you don’t believe WHO SUFFERS MATTERS. If that’s the case, then I would encourage you to reread my response to Objection 2, where I make the case that who suffers is of moral significance.
3) “Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don’t see what could possibly lead one to prefer it.”
You do give each party a 50% chance of being saved by choosing to flip a coin, instead of choosing to just help one party over the other. I prefer giving a 50% chance to each party because
A) I don’t think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S’s post),
B) I believe who suffers matters (given my response to Objection 2)
Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.
It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn't at one point actually have a 50% chance of being helped.
4) “And merely having a “chance” of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don’t sit on the mere fact of the chance and covet it as though it were something to value on its own.”
I agree that the only reason we value a chance of being saved is that it may lead to us actually being saved, and in that sense, we don't value it in itself. But I don't get why that entails that giving each party a 50% chance of being saved is not what we should do.
Btw, sorry I haven’t replied to your response below brian’s discussion yet. I haven’t found the time to read that article you linked. I do plan to reply sometime soon.
Also, can you tell me how to quote someone’s text in the way that you do in your responses to me? It is much cleaner than my number listing and quotations. Thanks.
Because you effectively deny the one person ANY CHANCE of being helped from torture
Your scenario didn’t say that probabilistic strategies were a possible response, but suppose that they are. Then it’s true that, if I choose a 100% strategy, the other person has 0% chance of being saved, whereas if I choose a 99% strategy, the other person has a 1% chance of being saved. But you’ve given no reason to think that this would be any better. It is bad that one person has a 1% greater chance of torture, but it’s good that the other person has 1% less chance of torture. As long as agents simply have a preference to avoid torture, and are following the axioms of utility theory (completeness, transitivity, substitutability, decomposability, monotonicity, and continuity) then going from 0% to 1% is exactly as good as going from 99% to 100%.
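The symmetry claimed above can be made concrete with a small sketch. The utility numbers are assumptions chosen only for illustration: if each agent's expected utility is linear in the probability of being saved, shifting one percentage point of the chance from one person to the other leaves the total unchanged.

```python
# Illustrative utilities (assumed, not taken from the thread):
# being saved from torture is worth 100 units; not being saved, 0.
U_SAVED = 100.0
U_NOT_SAVED = 0.0

def expected_utility(p_saved: float) -> float:
    """One agent's expected utility, linear in the probability of being saved."""
    return p_saved * U_SAVED + (1.0 - p_saved) * U_NOT_SAVED

# Deterministic strategy: A is saved for sure, B not at all.
total_deterministic = expected_utility(1.00) + expected_utility(0.00)

# 99%/1% strategy: shift one percentage point of the chance from A to B.
total_shifted = expected_utility(0.99) + expected_utility(0.01)

# Under linearity, B's 1% gain exactly offsets A's 1% loss.
assert abs(total_deterministic - total_shifted) < 1e-9
```

The sketch only restates the linearity assumption; whether probabilities of being saved should be valued linearly is of course the point in dispute.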
SIMPLY BECAUSE you can prevent an additional minor headache—a very very very minor one—by helping the two.
That’s not true. I deny the first person any chance of being helped from torture because it denies the second person any chance of being tortured and it saves the 3rd person from an additional minor pain.
Anyways, a lot of people think that is pretty extreme.
I really don’t see it as extreme. I’m not sure that many people would.
A) I don’t think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S’s post),
B) I believe who suffers matters (given my response to Objection 2)
First, I don’t see how either of these claims imply that the right answer is 50%. Second, for B), you seem to be simply claiming that interpersonal aggregation of utility is meaningless, rather than making any claims about particular individuals’ suffering being more or less important. The problem is that no one is claiming that anyone’s suffering will disappear or stop carrying moral force, rather we are claiming that each person’s suffering counts for a reason while two reasons pointing in favor of a course of action are stronger than one reason.
Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% of being helped, and the other two a 51% of being helped.
Again I cannot tell where you got these numbers from.
It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn’t at one point actually have a 50% of being helped.
But it does mean that they don’t care.
But I don’t get why that entails that giving each party a 50% of being saved is not what we should do.
If agents don’t have special preferences over the chances of the experiences that they have then they just have preferences over the experiences. Then, unless they violate the von Neumann-Morgenstern utility theorem, their expected utility is linear with the probability of getting this or that experience, as opposed to being suddenly higher merely because they had a ‘chance.’
Also, can you tell me how to quote someone’s text in the way that you do in your responses to me?
But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it’s fewer people all the same, and it’s not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you’re defending.
But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken.
But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount.
Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not.
OK, well: it’s not.
Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur.
brianwang712′s response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn’t take the time to spell it out.
Look, your comments towards him are very long and convoluted. I’m not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with “updates” alongside copies of your original comments, I find it almost painful to look through.
I don’t see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards. The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere. Conversations mustn’t be friendly to be informative, and I’m really not being dismissive about anything you write which I do have the time to read.
1) “But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it’s fewer people all the same, and it’s not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you’re defending.”
To arbitrarily save fewer people is to save them on a whim. I am not suggesting that we should save them on a whim. I am suggesting that we should give each person an equal chance of being saved. They are completely different ideas.
2) “But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken.”
Please show me where I supposed that 5 minor headaches are MORALLY worse when they happen to one person than when they happen to multiple people. In both choice situations 2 and 3, I provided REASONS for saying
A) why 5 minor headaches all had by one person is morally worse than 1 major headache had by one person, and
B) why 1 major headache had by one person is morally worse than 5 minor headaches spread across 5 people.
From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don’t say that I supposed this. I provided reasons. You can reject those reasons, but that is a different story.
If you meant that I supposed that 5 minor headaches are EXPERIENTIALLY worse when they happen to one person than when they happen to multiple people, sure, it can be inferred from what I wrote that I was supposing this. But importantly, this assumption is not as much of a stretch as it seems, since (at least to me) it is plausibly shared by many. It turns out that Michael_S disagreed, at which point I was glad to defend the assumption. More importantly, even if I made this supposition (as we have to start from somewhere), it does not mean that by doing so, I was simply assuming, rather than arguing for, what you quoted.
3) “But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount.”
If you don’t see an argument in my response to Objection 1, I’ll live with that since I put a lot of time into writing that essay and no one else has said the same.
4) “Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not.”
By basic arguments, I presume you mean utilitarian arguments. First off, I was not writing this for a utilitarian audience. I was writing this for an audience that finds it intuitive to save Amy and Susie instead of Bob, and I was trying to show how other (perhaps more basic) intuitions that I assumed were commonly held (e.g., saving one person from a major headache instead of 5 people each from a minor headache) could provide the ingredients for showing that we should give each of them an equal chance of being helped.
If I were writing this strictly for a utilitarian audience, I would have taken a different approach, which would have included explaining why 5 pains all had by one person is experientially worse than 5 pains spread across 5 people.
Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.
5) “brianwang712′s response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn’t take the time to spell it out.”
Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-averse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain.
6) “Look, your comments towards him are very long and convoluted. I’m not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with “updates” alongside copies of your original comments, I find it almost painful to look through.”
I’m sorry you find them convoluted. I updated the very first replies to Brian and Michael_S in order to try to make my position more clear for first-time readers like you. I spent a lot of time trying to make my replies clearer because I don’t want to waste readers’ time. If I failed to do that, I can only say I tried.
7) “I don’t see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards.”
I never asked for gentle standards. I asked for a non-dismissive and friendly attitude.
8) “The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere.”
I didn’t quite understand the latter half, but yes, their time is valuable, which is why I’ve tried to be as clear as I can. In any case, it is a good thing to critically examine one’s own views from time to time, no matter how vital one’s time seems under the supposition of that view. So—if I understood the latter part correctly—you needn’t worry so much about saving other people’s time from my post.
9) “Conversations mustn’t be friendly to be informative, and I’m really not being dismissive about anything you write which I do have the time to read.”
A person (speaking at least for myself) is much more receptive to the content of another’s comment when it is put in a friendly (though demanding) manner. Thus, friendliness helps make conversation more informative.
Whereas dismissive and unfriendly comments like “I’m not about to wade through it just to find the specific 1-2 sentences where you go astray.” or “I find it almost painful to look through.” do not.
P.S. I will not be replying to any more of your comments that I feel are uncharitable, dismissive, or show a lack of effort spent on understanding my position.
Oops, just noticed I missed a comment you made:
10) “Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur.”
As I see it, a case or state of affairs in which 5 minor headaches are all felt by one person is MORALLY WORSE than a case in which 5 minor headaches are spread across 5 persons because 5 minor headaches all felt by one person is EXPERIENTIALLY WORSE than 5 minor headaches spread across 5 persons.
I take experience to be the only morally relevant factor, and in this way, I am a moral monist (as opposed to a pluralist). For why I think the former is experientially worse than the latter, please at least read my first reply to Michael_S. Thanks.
You simply assert that we would rather save Emma’s major headache rather than five minor ones in case 3. But if you’ve stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn’t change the story. I just don’t follow the argument here.
My whole point here is that your response to Objection 1 doesn’t do any work to convince us of your premises regarding the headaches. Yeah there’s an argument, but its premise is both contentious and undefended.
I’m not just speaking for utilitarians, I’m speaking for anyone who doesn’t buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.
The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances, it’s an analysis of what we would expect of them as the rational thing to do, so the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and it would be rational of people to not make that choice, whereas the preference utilitarian just calibrates the utility scale to people’s preferences anyway so that there isn’t any dissonance between what people would select and what utilitarianism says.
1) “You simply assert that we would rather save Emma’s major headache rather than five minor ones in case 3. But if you’ve stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn’t change the story. I just don’t follow the argument here.”
I DO NOT simply assert this. In case 3, I wrote, “Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being saved because a major headache is morally worse than 5 minor headaches spread across 5 persons and it’s morally worse BECAUSE a major headache hurts more (in some non-arbitrary sense) than the 5 minor headaches spread across 5 people. Here, the non-arbitrary sense is straightforward: Emma would be hurting more than any one of the 5 others who would each experience only 1 minor headache.” (I capped ‘because’ for emphasis here)
You would not buy that reason I gave (because you believe 5 minor headaches, spread across 5 people, is experientially worse than a major headache), but that is a different story. If you were more charitable and patient while reading my post, thinking about who my audience is (many of whom aren’t utilitarians and don’t buy into interpersonal aggregation of pains) etc, I doubt you would be leveling all the accusations you have against me. It wastes both your time and my time to have to deal with them.
2) “My whole point here is that your response to Objection 1 doesn’t do any work to convince us of your premises regarding the headaches. Yeah there’s an argument, but its premise is both contentious and undefended.”
I was just using your words. You said “But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people.” As I said, I assumed a premise that I thought the vast majority of my audience would agree with (i.e., at bottom, that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people). If YOU find that premise contentious, great, we can have a discussion about it, but please don’t make it sound like my argument doesn’t do any work for anyone.
3) “I’m not just speaking for utilitarians, I’m speaking for anyone who doesn’t buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.”
Well, I don’t, which is why I assumed the premise in the first place. I mean, I wouldn’t assume a premise that I thought the majority of my audience would disagree with. It’s certainly not obvious to me that 5 minor headaches all had by one person are experientially just as bad as 5 minor headaches spread across 5 people.
4) “The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances, it’s an analysis of what we would expect of them as the rational thing to do, so the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and it would be rational of people to not make that choice, whereas the preference utilitarian just calibrates the utility scale to people’s preferences anyway so that there isn’t any dissonance between what people would select and what utilitarianism says.”
Sorry, I’m not familiar with the axioms of expected utility theory or with preference utilitarianism. But perhaps I can understand your position by asking 2 questions:
1) According to you, would it be rational behind the veil of ignorance to agree to a policy that says: in a trade-off situation between saving one person from torture, or saving another person from torture AND a third person from a minor headache, the latter two are to be saved?
2) In an actual trade-off situation of this kind, would you think we ought to save the latter two?
But if anyone did accept that premise then they would already believe that the number of people suffering doesn’t matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie’s suffering is not a greater problem than Bob’s suffering. So I can’t tell if it’s actually doing any work. If not, then it’s just adding unnecessary length. That’s what I mean when I say that it’s too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob’s diseases in the first place, making your claim that Amy and Susie’s diseases are not experientially worse than Bob’s disease and so on.
PU says that we should assign moral value on the basis of people’s preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you’re using the term correctly) that they’re putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational.
Yes to both.
1) “But if anyone did accept that premise then they would already believe that the number of people suffering doesn’t matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie’s suffering is not a greater problem than Bob’s suffering. So I can’t tell if it’s actually doing any work. If not, then it’s just adding unnecessary length. That’s what I mean when I say that it’s too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob’s diseases in the first place, making your claim that Amy and Susie’s diseases are not experientially worse than Bob’s disease and so on.”
The reason why I discussed those three cases was to answer the basic question: what makes one state of affairs morally worse than another. Indeed, given my broad audience, some who have no philosophy background, I wanted to start from the ground up.
From that discussion, I gathered two principles that I used to support premise 2 of my argument against Objection 1. I say “gathered” and not “deduced” because you actually don’t disagree with those two principles, even though you disagree with an assumption I made in one of the cases (i.e. case 3). What your disagreement with that assumption indicates is a disagreement with premise 1 of my argument against Objection 1.
P1. read: “The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1).”
You disagree because you think Amy’s and Susie’s pains would together be experientially worse than Bob’s pain.
All this is to say that I don’t think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.
However, it seems like I really should have defended P1 of my argument (and similarly my assumption in case 3) more thoroughly. So I do admit that my post is lacking in this respect, which I appreciate your pointing out. I’m also sure there are ways to make it more clear and concise. I will consider your suggested approach during future editing sessions.
Update (Mar 21): After thinking through what you said some more, I’ve decided I’m going to re-do my response to Objection 1 along the lines of what you’re suggesting. Thanks for motivating this improvement.
2) “PU says that we should assign moral value on the basis of people’s preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you’re using the term correctly) that they’re putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational.”
Thanks for that explanation. I see where I went wrong in my previous reply now, so I concede this point.
3) “Yes to both.”
Ok, interesting. And, just out of curiosity, you don’t consider this as biting a bullet? I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because it comes with the additional, and relatively trivial, benefit of relieving a minor headache.
P.S. I will reply to your other comment after I’ve read the paper you linked me to. But, I do want to note that you were being very uncharitable in your reply that “Stipulations can’t be true or false—they’re stipulations. It’s a thought experiment for epistemic purposes.” Clearly stipulations/suppositions cannot be false relative to the thought experiment. But surely they can be false relative to reality—to what is actually the case.
But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it’s not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.
If you disagree, try to sketch out a view (that isn’t blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches.
How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?
I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.
Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don’t see what could possibly lead one to prefer it.
And merely having a “chance” of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don’t sit on the mere fact of the chance and covet it as though it were something to value on its own.
1) “But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it’s not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.
If you disagree, try to sketch out a view (that isn’t blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches.”
Arguing for what factors are morally relevant in determining whether one case is morally worse than another is preliminary to arguing that some specific case (i.e. Amy and Susie suffering) is morally just as bad as another specific case (i.e. Bob suffering). My 3 cases were only meant to do the former. From the 3 cases, I concluded:
1. That the amount of pain is a morally relevant factor in determining whether one case is morally worse than another.
2. That the number of instances of pain is a morally relevant factor only to the extent that it affects the amount of pain at issue (i.e., the number of instances of pain is not morally relevant in itself).
I take that to be preliminary work. Where I really dropped the ball was in my lackluster argument for P1 (and, likewise, for my assumption in case 3). No utilitarian would have found it convincing, and thus I would not have succeeded in convincing them that the outcome in which Amy and Susie both suffer is morally just as bad as the outcome in which only Bob suffers, even if they agreed with 1. and 2., which they do.
Anyways, to the extent that you think my argument for P1 sucked to the point where it was like I was begging the question against the utilitarian, I’m happy to concede this. I have since reworked my response to Objection 1 as a result, thanks to you.
2) “How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?
I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.”
Because you effectively deny the one person ANY CHANCE of being helped from torture SIMPLY BECAUSE you can prevent an additional minor headache—a very very very minor one—by helping the two. Anyways, a lot of people think that is pretty extreme. If you don’t think so, that’s perhaps mainly because you don’t believe WHO SUFFERS MATTERS. If that’s the case, then I would encourage you to reread my response to Objection 2, where I make the case that who suffers is of moral significance.
3) “Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don’t see what could possibly lead one to prefer it.”
You do give each party a 50% chance of being saved by choosing to flip a coin, instead of choosing to just help one party over the other. I prefer giving a 50% chance to each party because
A) I don’t think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S’s post),
B) I believe who suffers matters (given my response to Objection 2)
Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.
It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn’t at one point actually have a 50% chance of being helped.
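For what it’s worth, the trade-off between the 50/50 policy and always helping the two can be made concrete with a little expected-value arithmetic. A minimal sketch in Python, where the pain magnitudes are purely illustrative assumptions of mine, not anything stipulated in the discussion:

```python
# Illustrative magnitudes only (assumed for this sketch).
PAIN = 10.0      # the serious pain each party faces
HEADACHE = 0.1   # the additional, very minor headache

def expected_total_suffering(p_save_one):
    """Expected total suffering when the single person is saved with
    probability p_save_one; otherwise the two are saved."""
    one = (1 - p_save_one) * PAIN              # the one suffers if the two are saved
    two = p_save_one * (PAIN + HEADACHE)       # the two suffer if the one is saved
    return one + two

coin_flip = expected_total_suffering(0.5)   # the 50/50 policy
always_two = expected_total_suffering(0.0)  # always save the two

# The coin flip costs exactly half the minor headache in expectation.
assert abs(coin_flip - always_two - HEADACHE / 2) < 1e-9
```

On these assumed numbers, the 50/50 policy raises expected total suffering by half the minor headache; the dispute is precisely over whether giving each party an equal chance is worth that price.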
4) “And merely having a “chance” of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don’t sit on the mere fact of the chance and covet it as though it were something to value on its own.”
I agree that the only reason that we value a chance of being saved is that it may lead to us actually being saved, and in that sense, we don’t value it in itself. But I don’t get why that entails that giving each party a 50% chance of being saved is not what we should do.
Btw, sorry I haven’t replied to your response below brian’s discussion yet. I haven’t found the time to read that article you linked. I do plan to reply sometime soon.
Also, can you tell me how to quote someone’s text in the way that you do in your responses to me? It is much cleaner than my number listing and quotations. Thanks.
Your scenario didn’t say that probabilistic strategies were a possible response, but suppose that they are. Then it’s true that, if I choose a 100% strategy, the other person has 0% chance of being saved, whereas if I choose a 99% strategy, the other person has a 1% chance of being saved. But you’ve given no reason to think that this would be any better. It is bad that one person has a 1% greater chance of torture, but it’s good that the other person has 1% less chance of torture. As long as agents simply have a preference to avoid torture, and are following the axioms of utility theory (completeness, transitivity, substitutability, decomposability, monotonicity, and continuity) then going from 0% to 1% is exactly as good as going from 99% to 100%.
That’s not true. I deny the first person any chance of being helped from torture because it denies the second person any chance of being tortured and it saves the 3rd person from an additional minor pain.
I really don’t see it as extreme. I’m not sure that many people would.
First, I don’t see how either of these claims implies that the right answer is 50%. Second, for B), you seem to be simply claiming that interpersonal aggregation of utility is meaningless, rather than making any claims about particular individuals’ suffering being more or less important. The problem is that no one is claiming that anyone’s suffering will disappear or stop carrying moral force; rather, we are claiming that each person’s suffering counts for a reason, while two reasons pointing in favor of a course of action are stronger than one reason.
Again I cannot tell where you got these numbers from.
But it does mean that they don’t care.
If agents don’t have special preferences over the chances of the experiences that they have then they just have preferences over the experiences. Then, unless they violate the von Neumann-Morgenstern utility theorem, their expected utility is linear with the probability of getting this or that experience, as opposed to being suddenly higher merely because they had a ‘chance.’
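The linearity point can be illustrated in a few lines; the utility numbers below are assumptions for the sketch, not anything from the thread:

```python
U_SAVED = 0.0        # assumed utility of being saved
U_TORTURED = -100.0  # assumed utility of being tortured

def expected_utility(p_saved):
    # Preferences only over experiences: EU is a straight line in p,
    # with no extra bump for merely "having had a chance."
    return p_saved * U_SAVED + (1 - p_saved) * U_TORTURED

# Equal increments of probability yield equal gains in expected utility:
gains = [expected_utility((i + 1) / 10) - expected_utility(i / 10)
         for i in range(10)]
assert all(abs(g - gains[0]) < 1e-9 for g in gains)
```

Any extra value attached to the chance itself would show up as non-linearity here, which is what the vNM axioms rule out for such an agent.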
Use “>” at the start of the line.