> Because you effectively deny the one person ANY CHANCE of being helped from torture
Your scenario didn’t say that probabilistic strategies were a possible response, but suppose that they are. Then it’s true that if I choose a 100% strategy, the other person has a 0% chance of being saved, whereas if I choose a 99% strategy, the other person has a 1% chance of being saved. But you’ve given no reason to think that this would be any better. It is bad that one person has a 1% greater chance of torture, but it is good that the other person has a 1% lower chance of torture. As long as agents simply have a preference to avoid torture and follow the axioms of utility theory (completeness, transitivity, substitutability, decomposability, monotonicity, and continuity), going from a 0% to a 1% chance of being saved is exactly as good as going from a 99% to a 100% chance.
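To make the linearity point concrete, here is a minimal sketch (the utilities $u_{\text{safe}}$ and $u_{\text{torture}}$ are placeholders of my own, not values from your scenario):

$$EU(p) = p \, u_{\text{safe}} + (1-p) \, u_{\text{torture}} \implies EU(p + 0.01) - EU(p) = 0.01 \, (u_{\text{safe}} - u_{\text{torture}})$$

The difference is the same whether $p = 0$ or $p = 0.99$, so a one-percentage-point change in the chance of rescue is worth exactly as much at either end of the scale.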
> SIMPLY BECAUSE you can prevent an additional minor headache—a very very very minor one—by helping the two.
That’s not true. I deny the first person any chance of being saved from torture because doing so denies the second person any chance of being tortured and saves the third person from an additional minor pain.
> Anyways, a lot of people think that is pretty extreme.
I really don’t see it as extreme, and I’m not sure that many people would either.
> A) I don’t think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S’s post),
> B) I believe who suffers matters (given my response to Objection 2)
First, I don’t see how either of these claims implies that the right answer is 50%. Second, on B), you seem to be claiming simply that interpersonal aggregation of utility is meaningless, rather than making any claim about particular individuals’ suffering being more or less important. The problem is that no one is claiming that anyone’s suffering will disappear or stop carrying moral force; rather, we are claiming that each person’s suffering counts as a reason, and two reasons pointing in favor of a course of action are stronger than one.
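To put the aggregation claim in symbols (with $s > 0$ for the disvalue of one person’s torture and $h > 0$ for the disvalue of the minor headache, both placeholders of my own rather than quantities from your scenario):

$$\underbrace{s + h}_{\text{help the one}} \;>\; \underbrace{s}_{\text{help the two}} \quad \text{for any } h > 0,$$

so the second person’s suffering counts for exactly as much as the first’s, and the headache supplies the extra reason that tips the balance.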
> Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.
Again, I cannot tell where you got these numbers from.
> It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn’t at one point actually have a 50% chance of being helped.
But it does mean that, once they’re suffering, they don’t care that they had one.
> But I don’t get why that entails that giving each party a 50% chance of being saved is not what we should do.
If agents don’t have special preferences over the chances of the experiences they have, then they just have preferences over the experiences themselves. And unless they violate the von Neumann–Morgenstern axioms, their expected utility is linear in the probability of getting this or that experience, rather than being suddenly higher merely because they had a ‘chance.’
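Here is a minimal sketch of that linearity in code, with placeholder utilities of my own (u = 1 for being saved from torture, u = 0 for being tortured, and a small h for sparing the headache; none of these numbers come from your scenario):

```python
def expected_utility(p_saved, u_saved=1.0, u_tortured=0.0):
    """Expected utility of facing a p_saved chance of rescue;
    linear in p_saved, per the vNM representation."""
    return p_saved * u_saved + (1 - p_saved) * u_tortured

h = 0.01  # placeholder value of sparing the third person's minor headache

# Deterministic policy: help the second person (and prevent the headache).
deterministic = expected_utility(0.0) + expected_utility(1.0) + h

# Coin-flip policy: each torture-facing party gets a 50% chance,
# and the headache is prevented only half the time.
coin_flip = 2 * expected_utility(0.5) + 0.5 * h

print(deterministic)  # 1.01
print(coin_flip)      # 1.005

# The two torture lotteries cancel exactly (0% -> 50% is worth the same
# as 100% -> 50%), so "having had a chance" adds nothing; the headache
# breaks the tie in favor of the deterministic policy.
```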
> Also, can you tell me how to quote someone’s text in the way that you do in your responses to me?
Use `>` at the start of the line you want to quote.