From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don’t say that I supposed this. I provided reasons.
You simply assert that we would rather save Emma from her major headache than the five others from their minor ones in case 3. But if you’ve stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn’t change the story. I just don’t follow the argument here.
If you don’t see an argument in my response to Objection 1, I’ll live with that since I put a lot of time into writing that essay and no one else has said the same.
My whole point here is that your response to Objection 1 doesn’t do any work to convince us of your premises regarding the headaches. Yeah there’s an argument, but its premise is both contentious and undefended.
Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.
I’m not just speaking for utilitarians, I’m speaking for anyone who doesn’t buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.
Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-averse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain.
The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances; it’s an analysis of what it would be rational for them to choose. So the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory, and that it would be rational of people not to make that choice, whereas the preference utilitarian simply calibrates the utility scale to people’s preferences, so that there isn’t any dissonance between what people would select and what utilitarianism says.
1) “You simply assert that we would rather save Emma from her major headache than the five others from their minor ones in case 3. But if you’ve stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn’t change the story. I just don’t follow the argument here.”
I DO NOT simply assert this. In case 3, I wrote, “Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being saved because a major headache is morally worse than 5 minor headaches spread across 5 persons and it’s morally worse BECAUSE a major headache hurts more (in some non-arbitrary sense) than the 5 minor headaches spread across 5 people. Here, the non-arbitrary sense is straightforward: Emma would be hurting more than any one of the 5 others who would each experience only 1 minor headache.” (I capped ‘because’ for emphasis here)
You would not buy that reason I gave (because you believe 5 minor headaches, spread across 5 people, is experientially worse than a major headache), but that is a different story. If you were more charitable and patient while reading my post, thinking about who my audience is (many of whom aren’t utilitarians and don’t buy into interpersonal aggregation of pains) etc, I doubt you would be leveling all the accusations you have against me. It wastes both your time and my time to have to deal with them.
2) “My whole point here is that your response to Objection 1 doesn’t do any work to convince us of your premises regarding the headaches. Yeah there’s an argument, but its premise is both contentious and undefended.”
I was just using your words. You said “But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people.” As I said, I assumed a premise that I thought the vast majority of my audience would agree with (i.e., at bottom, that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people). If YOU find that premise contentious, great, we can have a discussion about it, but please don’t make it sound like my argument doesn’t do any work for anyone.
3) “I’m not just speaking for utilitarians, I’m speaking for anyone who doesn’t buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.”
Well, I don’t, which is why I assumed the premise in the first place. I mean, I wouldn’t assume a premise that I thought the majority of my audience would disagree with. It’s certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.
4) “The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances; it’s an analysis of what it would be rational for them to choose. So the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory, and that it would be rational of people not to make that choice, whereas the preference utilitarian simply calibrates the utility scale to people’s preferences, so that there isn’t any dissonance between what people would select and what utilitarianism says.”
Sorry, I’m not familiar with the axioms of expected utility theory or with preference utilitarianism. But perhaps I can understand your position by asking 2 questions:
1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: in a trade-off situation between saving one person from torture, or saving another person from torture AND a third person from a minor headache, the latter two are to be saved?
2) In an actual trade-off situation of this kind, would you think we ought to save the latter two?
> Well, I don’t, which is why I assumed the premise in the first place. I mean, I wouldn’t assume a premise that I thought the majority of my audience would disagree with. It’s certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.
But if anyone did accept that premise then they would already believe that the number of people suffering doesn’t matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie’s suffering is not a greater problem than Bob’s suffering. So I can’t tell if it’s actually doing any work. If not, then it’s just adding unnecessary length. That’s what I mean when I say that it’s too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob’s diseases in the first place, making your claim that Amy and Susie’s diseases are not experientially worse than Bob’s disease and so on.
> Sorry, I’m not familiar with the axioms of expected utility theory or with preference utilitarianism.
PU says that we should assign moral value on the basis of people’s preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give things the same weight that people do. If you say that someone is being risk-averse, that means (iff you’re using the term correctly) that they’re putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the von Neumann-Morgenstern utility theorem, which (one would argue, or assert) means that they are being irrational.
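To make that risk-aversion point concrete, here is a minimal sketch (the utility numbers are invented purely for illustration; nothing here is stipulated anywhere in this thread) of an agent who, by refusing a gamble just to rule out its worst outcome, ends up with lower expected utility:

```python
# Minimal illustration of "risk aversion reduces expected utility".
# All utility numbers are placeholders chosen for the example.

def expected_utility(lottery):
    """Expected utility of a lottery given as [(probability, utility), ...]."""
    return sum(p * u for p, u in lottery)

sure_thing = [(1.0, -10.0)]              # a guaranteed, moderately bad outcome
gamble     = [(0.5, 0.0), (0.5, -18.0)]  # 50/50 between nothing bad and something much worse

print(expected_utility(sure_thing))  # -10.0
print(expected_utility(gamble))      # -9.0

# An expected-utility maximizer takes the gamble (-9 > -10). An agent who
# refuses it solely to rule out the -18 outcome accepts lower expected
# utility, which is the sense of "risk-averse" (and hence vNM-irrational)
# being used above.
```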
> 1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: in a trade-off situation between saving one person from torture, or saving another person from torture AND a third person from a minor headache, the latter two are to be saved? 2) In an actual trade-off situation of this kind, would you think we ought to save the latter two?

Yes to both.
1) “But if anyone did accept that premise then they would already believe that the number of people suffering doesn’t matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie’s suffering is not a greater problem than Bob’s suffering. So I can’t tell if it’s actually doing any work. If not, then it’s just adding unnecessary length. That’s what I mean when I say that it’s too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob’s diseases in the first place, making your claim that Amy and Susie’s diseases are not experientially worse than Bob’s disease and so on.”
The reason why I discussed those three cases was to answer the basic question: what makes one state of affairs morally worse than another? Indeed, given my broad audience, some of whom have no philosophy background, I wanted to start from the ground up.
From that discussion, I gathered two principles that I used to support premise 2 of my argument against Objection 1. I say “gathered” and not “deduced” because you actually don’t disagree with those two principles, even though you disagree with an assumption I made in one of the cases (i.e. case 3). What your disagreement with that assumption indicates is a disagreement with premise 1 of my argument against Objection 1.
P1. read: “The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1).”
You disagree because you think Amy’s and Susie’s pains would together be experientially worse than Bob’s pain.
All this is to say that I don’t think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.
However, it seems like I really should have defended P1 of my argument (and similarly my assumption in case 3) more thoroughly. So I do admit that my post is lacking in this respect, which I appreciate you pointing out. I’m also sure there are ways to make it more clear and concise. I will consider your suggested approach during future editing sessions.
Update (Mar 21): After thinking through what you said some more, I’ve decided I’m going to re-do my response to Objection 1 along the lines of what you’re suggesting. Thanks for motivating this improvement.
2) “PU says that we should assign moral value on the basis of people’s preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give things the same weight that people do. If you say that someone is being risk-averse, that means (iff you’re using the term correctly) that they’re putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the von Neumann-Morgenstern utility theorem, which (one would argue, or assert) means that they are being irrational.”
Thanks for that explanation. I see where I went wrong in my previous reply now, so I concede this point.
3) “Yes to both.”
Ok, interesting. And, just out of curiosity, you don’t consider this as biting a bullet? I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because it comes with the additional, and relatively trivial, benefit of relieving a minor headache.
P.S. I will reply to your other comment after I’ve read the paper you linked me to. But I do want to note that you were being very uncharitable in your reply that “Stipulations can’t be true or false—they’re stipulations. It’s a thought experiment for epistemic purposes.”
Clearly stipulations/suppositions cannot be false relative to the thought experiment. But surely they can be false relative to reality—to what is actually the case.
> I don’t think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.
But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it’s not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.
If you disagree, try to sketch out a view (that isn’t blatantly logically inconsistent) where someone would have agreed with you on Amy/Susie/Bob but disagreed on the headaches.
> Ok, interesting. And, just out of curiosity, you don’t consider this as biting a bullet?
How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?
I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.
> I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because it comes with the additional, and relatively trivial, benefit of relieving a minor headache.
Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don’t see what could possibly lead one to prefer it.
And merely having a “chance” of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don’t sit on the mere fact of the chance and covet it as though it were something to value on its own.
1) “But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it’s not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.
If you disagree, try to sketch out a view (that isn’t blatantly logically inconsistent) where someone would have agreed with you on Amy/Susie/Bob but disagreed on the headaches.”
Arguing for what factors are morally relevant in determining whether one case is morally worse than another is preliminary to arguing that some specific case (i.e. Amy and Susie suffering) is morally just as bad as another specific case (i.e. Bob suffering). My 3 cases were only meant to do the former. From the 3 cases, I concluded:
1. That the amount of pain is a morally relevant factor in determining whether one case is morally worse than another.
2. That the number of instances of pain is a morally relevant factor only to the extent that it affects the amount of pain at issue (i.e., the number of instances of pain is not morally relevant in itself).
I take that to be preliminary work. Where I really dropped the ball was in my lackluster argument for P1 (and, likewise, for my assumption in case 3). No utilitarian would have found it convincing, and thus I would not have succeeded in convincing them that the outcome in which Amy and Susie both suffer is morally just as bad as the outcome in which only Bob suffers, even if they agreed with 1. and 2., which they do.
Anyways, to the extent that you think my argument for P1 sucked to the point where it was like I was begging the question against the utilitarian, I’m happy to concede this. I have since reworked my response to Objection 1 as a result, thanks to you.
2) “How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?
I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.”
Because you effectively deny the one person ANY CHANCE of being helped from torture SIMPLY BECAUSE you can prevent an additional minor headache—a very very very minor one—by helping the two. Anyways, a lot of people think that is pretty extreme. If you don’t think so, that’s perhaps mainly because you don’t believe WHO SUFFERS MATTERS. If that’s the case, then I would encourage you to reread my response to Objection 2, where I make the case that who suffers is of moral significance.
3) “Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don’t see what could possibly lead one to prefer it.”
You do give each party a 50% chance of being saved by choosing to flip a coin, instead of choosing to just help one party over the other. I prefer giving a 50% chance to each party because
A) I don’t think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S’s post),
B) I believe who suffers matters (given my response to Objection 2)
Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.
It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn’t at one point actually have a 50% chance of being helped.
4) “And merely having a “chance” of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don’t sit on the mere fact of the chance and covet it as though it were something to value on its own.”
I agree that the only reason that we value a chance of being saved is that it may lead to us actually being saved, and in that sense, we don’t value it in itself. But I don’t get why that entails that giving each party a 50% chance of being saved is not what we should do.
Btw, sorry I haven’t replied to your response below brian’s discussion yet. I haven’t found the time to read that article you linked. I do plan to reply sometime soon.
Also, can you tell me how to quote someone’s text in the way that you do in your responses to me? It is much cleaner than my number listing and quotations. Thanks.
> Because you effectively deny the one person ANY CHANCE of being helped from torture
Your scenario didn’t say that probabilistic strategies were a possible response, but suppose that they are. Then it’s true that, if I choose a 100% strategy, the other person has 0% chance of being saved, whereas if I choose a 99% strategy, the other person has a 1% chance of being saved. But you’ve given no reason to think that this would be any better. It is bad that one person has a 1% greater chance of torture, but it’s good that the other person has 1% less chance of torture. As long as agents simply have a preference to avoid torture, and are following the axioms of utility theory (completeness, transitivity, substitutability, decomposability, monotonicity, and continuity) then going from 0% to 1% is exactly as good as going from 99% to 100%.
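One way to see the cancellation being claimed here is to write out the expected disutility of the probabilistic strategies directly. The sketch below is only an illustration of that arithmetic; the disutility magnitudes for torture and a minor headache are placeholders, not values anyone has stipulated in this thread:

```python
# Placeholder disutilities, chosen only to illustrate the cancellation.
T = 1000.0  # disutility of being tortured
h = 1.0     # disutility of a minor headache

def expected_disutility(p):
    """Help person 1 with probability p; otherwise help persons 2 and 3.
    Whoever is not helped suffers the corresponding harm."""
    return (1 - p) * T + p * T + p * h  # = T + p * h

for p in (0.0, 0.01, 0.5, 0.99, 1.0):
    print(p, expected_disutility(p))

# The two torture terms always sum to T: raising person 1's chance of rescue
# from 0% to 1% lowers person 2's by exactly the same amount. Only the
# headache term varies with p, so total expected disutility is lowest at p = 0,
# i.e. always helping persons 2 and 3.
```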
> SIMPLY BECAUSE you can prevent an additional minor headache—a very very very minor one—by helping the two.
That’s not true. I deny the first person any chance of being helped from torture because doing so denies the second person any chance of being tortured and saves the third person from an additional minor pain.
> Anyways, a lot of people think that is pretty extreme.
I really don’t see it as extreme. I’m not sure that many people would.
> A) I don’t think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S’s post),
> B) I believe who suffers matters (given my response to Objection 2)
First, I don’t see how either of these claims implies that the right answer is 50%. Second, for B), you seem to be simply claiming that interpersonal aggregation of utility is meaningless, rather than making any claims about particular individuals’ suffering being more or less important. The problem is that no one is claiming that anyone’s suffering will disappear or stop carrying moral force; rather, we are claiming that each person’s suffering counts as a reason, and two reasons pointing in favor of a course of action are stronger than one.
> Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.
Again I cannot tell where you got these numbers from.
> It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn’t at one point actually have a 50% chance of being helped.
But it does mean that they don’t care.
> But I don’t get why that entails that giving each party a 50% chance of being saved is not what we should do.
If agents don’t have special preferences over the chances of the experiences that they have, then they just have preferences over the experiences. Then, unless they violate the von Neumann-Morgenstern utility theorem, their expected utility is linear with the probability of getting this or that experience, as opposed to being suddenly higher merely because they had a ‘chance.’
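The linearity claim itself is just the standard expected-utility calculation for a single person; assuming placeholder utilities for being saved and not being saved, it looks like this:

```python
# Placeholder utilities for one person under a strategy that saves them with probability p.
u_saved     = 0.0      # not tortured
u_not_saved = -1000.0  # tortured

def eu(p):
    return p * u_saved + (1 - p) * u_not_saved

print(eu(0.0), eu(0.5), eu(1.0))  # -1000.0 -500.0 0.0

# Expected utility rises linearly in p; there is no extra jump at p > 0 from
# "having a chance" as such. A 50% chance is worth exactly half the difference
# between being saved and not being saved, no more.
```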
> Also, can you tell me how to quote someone’s text in the way that you do in your responses to me?
Use “>”.