On ‘people should have a chance to be helped in proportion to how much we can help them’ (versus just always helping whoever we can help the most).
(Again, my preferred usage of ‘morally worse/better’ is basically defined so that one should always pick the ‘morally best’ action. You could apply that here by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This, however, leads directly into my next point… )
How much would you be willing to trade off helping people versus the help being distributed fairly? E.g. would you prefer a 95% chance of helping people in proportion to their suffering (but a 5% chance of helping no one), or a 100% chance of only helping the person suffering the most?
In your reply to JanBrauner you are very willing to all but completely sacrifice this principle in response to practical considerations, so it seems possible that you are not willing to trade off any amount of ‘actually helping people’ in favour of it; but then it seems strange that you argue for it so forcefully.
As a separate point, this form of reasoning seems rather incompatible with your claims that ‘total pain’ is morally important and determined solely by whoever is experiencing the most pain. If you follow your approach and give some chance of helping people who are not experiencing the most pain, then in the cases where you do help them the ‘total pain’ does not change at all!
For example:
• Suppose Alice is experiencing 10 units of suffering (by some common metric)
• 10n people (call them group B) are experiencing 1 unit of suffering each
• We can help exactly one person, and reduce their suffering to 0
In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But when we help someone from group B, the level of ‘total pain’ remains at 10, as Alice is not helped.
This means that a proportion n/(n+1) of the time the ‘total pain’ remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
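To make the arithmetic concrete, here is a small sketch (in Python, with illustrative names – nothing beyond the numbers already given above) of how Alice’s chance, and the probability that the ‘total pain’ stays unchanged, behave as n grows, assuming the proportional-chance rule described above:

```python
# Illustrative sketch of the Alice / group B example above, assuming the rule
# "chance of being helped is proportional to current suffering".
def chances(n, alice_suffering=10, member_suffering=1):
    group_size = 10 * n
    total = alice_suffering + member_suffering * group_size
    alice_chance = alice_suffering / total      # = 1/(n+1)
    member_chance = member_suffering / total    # = 1/(10+10n)
    # 'Total pain' (set by the worst-off person, Alice) only falls if Alice is helped.
    prob_unchanged = 1 - alice_chance           # = n/(n+1)
    return alice_chance, member_chance, prob_unchanged

for n in [1, 10, 100, 1000]:
    a, m, u = chances(n)
    print(f"n={n:4d}: Alice {a:.4f}, each B member {m:.6f}, P(total pain unchanged) {u:.4f}")
```

For n = 1000, Alice’s chance is roughly 0.001 and the ‘total pain’ is left unchanged about 99.9% of the time.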
Finally, I find the claim that this is actually the fairer or more empathetic approach unconvincing. I would argue that whatever fairness you gain by letting there be some chance that you help the person experiencing the second-most suffering is outweighed by your unfairness to the person suffering the most.
Indeed, for another example:
• Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
• However, another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.
You only have £2, so you can only help one of the children. Under your system there would be some (admittedly, hopefully very small) chance that you would help child B. However, in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems it would be hard to say you were acting in accordance with “reason and empathy”.
(This was perhaps a needlessly emotive example, but I wanted to hammer home how completely terrible it could be to help the person not suffering the most. If you have a choice between not rolling a die, and rolling a die with a chance of terrible consequences, why take the chance?)
Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I’m back :P
How much would you be willing to trade off helping people versus the help being distributed fairly? E.g. would you prefer a 95% chance of helping people in proportion to their suffering (but a 5% chance of helping no one), or a 100% chance of only helping the person suffering the most?
This is an interesting question, adding another layer of chance to the original scenario. As you know, if I could (with a 100% chance) give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me to be a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers – where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person’s chance ought to be in proportion to what he/she has to suffer because suffering also matters.
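(A minimal sketch of what such a proportional-chance lottery could look like in practice, assuming suffering can be put on a common scale; the people and numbers below are made up purely for illustration.)

```python
import random

# Made-up example: chance of being helped is proportional to how much each person has to suffer.
suffering = {"Alice": 10, "Bob": 6, "Carol": 4}

def pick_person_to_help(suffering, rng=random):
    """Weighted lottery: P(person) = person's suffering / total suffering."""
    people = list(suffering)
    weights = [suffering[p] for p in people]
    return rng.choices(people, weights=weights, k=1)[0]

print(pick_person_to_help(suffering))  # Alice wins 10/20 of the time, Bob 6/20, Carol 4/20
```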
Now you’re asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering, with a 5% chance of not helping anyone at all: would I accept the 95% chance, or outright save the person who has the worst to suffer?
Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.
One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do – as you suggest – seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in and of itself, but if the fair thing to do in any given situation is grounded in other morally relevant factors (e.g. experience and identity), then its moral relevance might be derived. If so, I think we can ignore the notion of fairness.
For example:
• Suppose Alice is experiencing 10 units of suffering (by some common metric)
• 10n people (call them group B) are experiencing 1 unit of suffering each
• We can help exactly one person, and reduce their suffering to 0
In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But when we help someone from group B, the level of ‘total pain’ remains at 10, as Alice is not helped.
This means that a proportion n/(n+1) of the time the ‘total pain’ remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
This is a fantastic objection. It is very much in the spirit of the objection I was raising against utilitarianism: both objections show that the respective approaches can trivialize suffering given enough people (i.e. given that n is large enough). I think this objection reveals a serious problem with giving each person a chance of being saved proportional to his/her suffering, insofar as it shows that doing so can lead us to give a very, very small chance to someone who has a lot to suffer, when it intuitively seems to me that we should give him/her a much higher chance of being saved, given how much more he/she has to suffer relative to anyone else.
So perhaps we ought to outright save the person who has the most to suffer. But this conclusion doesn’t seem right either in a trade-off situation involving him and one other person who has just a little less to suffer, but still a whole lot. In such a situation, it intuitively seems that we should give one a slightly higher chance of being saved than the other, just as it intuitively seems that we should give each an equal chance of being saved in a trade-off situation where they each have the same amount to suffer.
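(For concreteness, with made-up numbers: under the proportional rule, if the two people have 10 and 9 units to suffer respectively, the chances do come out only slightly apart.)

$$P_{\text{worse-off}} = \frac{10}{10+9} \approx 0.53, \qquad P_{\text{other}} = \frac{9}{10+9} \approx 0.47$$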
I also have an intuition against utilitarianism. So if we use intuitions as our guide, it seems to leave us nowhere. Maybe one or more of these intuitions can be “evolutionarily debunked”, sparing one of the three approaches, but I don’t really have an idea of how that would go.
Indeed, for another example:
• Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
• However, another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.
You only have £2, so you can only help one of the children. Under your system there would be some (admittedly, hopefully very small) chance that you would help child B. However, in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems it would be hard to say you were acting in accordance with “reason and empathy”.
I had anticipated this objection when I wrote my post. In footnote 4, I wrote:
“Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than to endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.”
Admittedly, there are two potential problems with what I say in my footnote.
1) It’s not clear that any clear-headed person would do as I say, since it seems possible that the what-it’s-like-of-going-through-infinite-minor-headaches can be experientially worse than the what-it’s-like-of-going-through-a-torture-session.
2) Even if any clear-headed person would do as I say, it’s not clear that this can yield the result that we should outright save the one person from torture. It depends on how the math works out, and I’m terrible at math lol. Does 1/infinity = 0? If so, then it seems we ought to give the person who would suffer the minor headache a 0% chance (i.e. we ought to outright save the other person from torture).
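(One way to make that limit precise, assuming the proportional rule is applied and the torture is weighted as unboundedly worse than a headache; the symbols here are just illustrative:)

$$\lim_{s_{\text{torture}} \to \infty} \frac{s_{\text{headache}}}{s_{\text{headache}} + s_{\text{torture}}} = 0$$

So in that limiting case the headache sufferer’s chance is indeed 0%, and the person facing torture is saved outright.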
But the biggest problem is that even if what I say in my footnote can adequately address this objection, it cannot adequately address your previous objection. This is because in your previous example concerning Alice, I think she should have a high chance of being saved (e.g. around 90%) no matter how big n is, and what I say in footnote 4 cannot help me get that result.
All in all, your previous objection shows that my own approach leads to a result that I cannot accept. Thanks for that (haha). However, I should note that it doesn’t make the utilitarian view more plausible to me because, as I said, your previous objection is very much in the spirit of my own objection against utilitarianism.
I wonder if dropping the idea that we should give each person a chance of being saved proportional to his/her suffering requires dropping the idea that who suffers matters… I used the latter idea to justify the former idea, but maybe the latter idea can also be used to justify something weaker—something more acceptable to me… (although I feel doubtful about this).