On slide 10 (EA challenge 1), I think you meant “that” rather than “than”.
Good luck! Also, I’m new to this forum and would appreciate it if I could get some likes so that I could make a post! Thanks.
Hi Brian,
Thanks for your comment and for reading my post!
Here’s my response:
Bob, Susie and Amy would sign the agreement to save the greater number if they assumed that they each had an equal chance of being in any of their positions. But is this assumption true? For example, is it actually the case that Bob had an equal chance to be in Amy’s or Susie’s position? If it is, then saving the greater number would in effect give each of them a 2⁄3 chance of being saved (the best chance, as you rightly noted). But if it isn’t, then why should an agreement based on a false assumption have any force? If Bob, in actuality, had no chance of being in Amy’s or Susie’s position, is it really in accordance with reason and empathy to save Amy and Susie and give Bob zero chance?
Intuitively, for Bob to have had an equal chance of being in Amy’s position or Susie’s position or his actual position, he must have had an equal chance of living Amy’s life or Susie’s life or his actual life. That’s how I intuitively understand a position: as a life position. To occupy someone’s position is to be in their life circumstances—to have their life. So understood, what would it take for Bob to have had an equal chance of being in Amy’s position or Susie’s position or his own? Presumably, it would have had to be the case that Bob was just as likely to have been born to Amy’s parents or Susie’s parents or his actual parents. But this seems very unlikely, because the particular “subject-of-experience” or “self” that each of us is is probably biologically linked to our ACTUAL parents’ cells. Thus another parent could not give birth to us, even though they might give birth to a subject-of-experience that is qualitatively very similar to us (e.g. same personality, same skin complexion, etc.).
Of course, being in someone’s position need not be understood in this demanding (though intuitive) way. For example, maybe to be in Amy’s position just requires being in her actual location with her actual disease, but not e.g. being of the same sex as her or having her personality. But insofar as we are biologically linked to our actual parents, and parents are spread all over the world, it is highly unlikely that Bob had an equal chance of being in his actual position (i.e. a certain location with a certain disease) or in Amy’s position (i.e. a different location with an equally painful disease). Think also about all the biological/personality traits that make a person more or less likely to be in a given position. I, for example, certainly had zero chance of being in an NBA position, given my height. Of course, as we change in various ways, our chances of being in certain positions change too, but even so, it is extremely unlikely that any given person, at any given point in time, had an equal chance of being in any of the positions of a trade-off situation that he is later to be involved in.
UPDATE (ADDED ON MAR 18): I have added the above two paragraphs to help first-time readers better understand how I understand “being in someone’s position” and why I think it is most unlikely that Bob actually had an equal chance of being in Amy’s or Susie’s position. These two paragraphs have replaced a much briefer paragraph, which you can find at the end of this reply. UPDATE (ADDED ON MAR 21): Also, no need to read past this point since someone (kbog) made me realize that the question I ask in the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.
Also, what would the implications of this objection be for cases where the pains involved in a choice situation are unequal? Presumably, EA favors saving a billion people each from a fairly painful disease over saving a single person from the excruciating pain of being burned alive. But is it clear that someone behind the veil of ignorance would accept this?
-
Original paragraph that was replaced: “Similarly, is it actually the case that each of us had an equal chance of being in any one of our positions? I think the answer is probably no because the particular “subject-of-experience” or “self” that each of us are is probably linked to our parents’ cells.”
I see the problem. I will fix this. Thanks.
Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I’m back :P
How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.
This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me to be a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers – where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person’s chance ought to be in proportion to what he/she has to suffer because suffering also matters.
Now you’re asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer?
Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.
One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do – as you suggest – seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in and of itself, but if the fair thing to do in any given situation is grounded in other morally relevant factors (e.g. experience and identity), then its moral relevance might be derived. If so, I think we can ignore the notion of fairness.
For example:
• Suppose Alice is experiencing 10 units of suffering (by some common metric)
• 10n people (call them group B) are experiencing 1 unit of suffering each
• We can help exactly one person, and reduce their suffering to 0
In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case where we help someone from group B, the level of ‘total pain’ remains at 10, as Alice is not helped.
This means that an n/(n+1) proportion of the time the ‘total pain’ remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
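The arithmetic in this objection can be sketched numerically. The sketch below just restates the example’s formulas in code (the specific values of n are my own, chosen only for illustration):

```python
# The proportional-chance rule from the example above:
# Alice suffers 10 units; each of 10n people in group B suffers 1 unit.
# Each person's chance of being helped is their suffering divided by
# the total suffering across everyone.

def alice_chance(n: int) -> float:
    """Alice's chance: 10 / (10 + 10n), which simplifies to 1 / (n + 1)."""
    return 10 / (10 + 10 * n)

def chance_total_pain_unchanged(n: int) -> float:
    """Chance a group-B member is helped instead, leaving 'total pain' at 10.

    Equals n / (n + 1), which approaches 1 as n grows.
    """
    return 1 - alice_chance(n)

for n in (1, 10, 1_000, 1_000_000):
    print(n, alice_chance(n), chance_total_pain_unchanged(n))
```

As n grows, Alice’s chance of being helped shrinks toward zero while the chance that ‘total pain’ stays unchanged approaches 1, which is exactly the force of the objection.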
This is a fantastic objection. It is very much in the spirit of the objection I was raising against utilitarianism: both objections show that the respective approaches can trivialize suffering given enough people (i.e. given that n is large enough). I think this objection reveals a serious problem with giving each person a chance of being saved proportional to his/her suffering, insofar as doing so can lead us to give a very, very small chance to someone who has a lot to suffer, when it intuitively seems to me that we should give him a much higher chance of being saved given how much more he/she has to suffer relative to any other person.
So perhaps we ought to outright save the person who has the most to suffer. But this conclusion doesn’t seem right either in a trade-off situation involving him and one other person who has just a little less to suffer, but still a whole lot. In such a situation, it intuitively seems that we should give one a slightly higher chance of being saved than the other, just as it intuitively seems that we should give each an equal chance of being saved in a trade-off situation where they each have the same amount to suffer.
I also have an intuition against utilitarianism. So if we use intuitions as our guide, it seems to leave us nowhere. Maybe one or more of these intuitions can be “evolutionarily debunked”, sparing one of the three approaches, but I don’t really have an idea of how that would go.
Indeed, for another example:
• Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
• However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.
You only have £2, so you can only help one of the children. Under your system there would be some (admittedly (hopefully!) very small) chance that you would help child B. However, in the case that you rolled your 3^^^3 sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with “reason and empathy”.
I had anticipated this objection when I wrote my post. In footnote 4, I wrote:
“Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than to endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.”
Admittedly, there are two potential problems with what I say in my footnote.
1) It’s not clear that any clear-headed person would do as I say, since it seems possible that the what-it’s-like-of-going-through-infinite-minor-headaches can be experientially worse than the what-it’s-like-of-going-through-a-torture-session.
2) Even if any clear-headed person would do as I say, it’s not clear that this can yield the result that we should outright save the one person from torture. It depends on how the math works out, and I’m terrible at math lol. Does 1/infinity = 0? If so, then it seems we ought to give the person who would suffer the minor headache a 0% chance (i.e. we ought to outright save the other person from torture).
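As an aside on the math question raised here: 1/∞ is not a standard real number, but the limit of 1/x as x grows without bound is 0, which seems to be the relevant sense. A quick numerical illustration (my own aside, not part of the original exchange):

```python
# 1/x can be made smaller than any positive threshold by taking x large
# enough; "1/infinity = 0" is shorthand for this limiting behaviour.
for x in (10, 10**6, 10**12):
    print(x, 1 / x)

# For any finite x the chance is still strictly positive, so whether the
# headache-sufferer's chance is literally 0 or merely vanishingly small
# depends on whether a true infinity or just a very large finite number
# is involved in the comparison.
```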
But the biggest problem is that even if what I say in my footnote can adequately address this objection, it cannot adequately address your previous objection. This is because in your previous example concerning Alice, I think she should have a high chance of being saved (e.g. around 90%) no matter how big n is, and what I say in footnote 4 cannot help me get that result.
All in all, your previous objection shows that my own approach leads to a result that I cannot accept. Thanks for that (haha). However, I should note that it doesn’t make the utilitarian view more plausible to me because, as I said, your previous objection is very much in the spirit of my own objection against utilitarianism.
I wonder if dropping the idea that we should give each person a chance of being saved proportional to his/her suffering requires dropping the idea that who suffers matters… I used the latter idea to justify the former idea, but maybe the latter idea can also be used to justify something weaker—something more acceptable to me… (although I feel doubtful about this).
I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.
By “you switched”, do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I’ve switched?
And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?
Hey Alex,
Thanks for your reply. I can understand why you’d be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of “more pain”.
I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of “more pain”, and then presenting an argument for why my sense of “more pain” is the one that really matters.
I’d be interested to know what you think.
Hey Alex, thanks for your comment!
I didn’t know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn’t structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I’m wrong or he realizes he’s wrong. If I’m wrong, I hope to realize it asap.
Also, I agree with kbog. I think it’s much likelier that one of us is just confused. Either kbog is right that there is an intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person or he is not.
After figuring that out, there is the question of which sense of “involves more pain than” is more morally important: is it the “is experientially worse than” sense or kbog’s sense? Perhaps that comes down to intuitions.
Hi bejaq,
Thanks for your thoughtful comment. I think your first paragraph captures well why I think who suffers matters. The connection between suffering and who suffers it is too strong for the former to matter and for the latter not to. Necessarily, pain is pain for someone, and ONLY for that someone. So it seems odd for pain to matter, yet for it not to matter who suffers it.
I would also certainly agree that there are pragmatic considerations that push us towards helping the larger group outright, rather than giving the smaller group a chance.
You’ll need to read to the very end of this reply before my argument seems complete.
In both cases I evaluate the quality of the experience multiplied by the number of subjects. It’s the same aspect for both cases. You’re just confused by the fact that, in one of the cases but not the other, the resulting quantity happens to be the same as the number provided by your “purely experiential sense”.
Case 1: 5 minor headaches spread among 5 people
Case 2: 1 major headache had by one person
Yes, I understand that in each case, you are multiplying a certain amount of pain (determined solely by how badly something feels) by the number of instances to get a total amount of pain (determined via this multiplication), and then you are comparing the total amount of pain in each case.
For example, in Case 1, you are multiplying the amount of pain of a minor headache (determined solely by how badly a minor headache feels) by the number of instances to get a total amount of pain (determined via this multiplication). Say each minor headache feels like a 2, then 2 x 5 = 10. Call this 10 “10A”.
Similarly, in Case 2, you are multiplying the amount of pain of a major headache (determined solely by how badly a major headache feels) by the number of instances, in this case just 1, to get a total amount of pain (determined via this multiplication). Say the major headache feels like a 6, then 6 x 1 = 6. Call this latter 6 “6A”.
You then compare the 10A with the 6A. Moreover, since the amounts of pain represented by 10A and 6A are both gotten by multiplying one dimension (i.e. amount of pain, determined purely experientially) by another dimension (instances), you claim that you are comparing things along the same dimension, namely, A. But this is problematic.
To see the problem, consider
Case 3: 5 minor headaches all had by 1 person.
Here, like in Case 1, we can multiply the amount of pain of a minor headache (determined purely experientially) by the number of instances to get a total amount of pain (determined via this multiplication). 2 x 5 = 10. This 10 is the 10A sort.
OR, unlike in Case 1, we can determine the final amount of pain not by multiplying those things, but instead in the same way we determine the amount of pain of a single minor headache, namely, by considering how badly the 5 minor headaches feel. We can consider how badly the what-it’s-like-of-going-through-5-minor-headaches feels. It feels like a 10, just as a minor headache feels like a 2, and a major headache feels like a 6. Call these 10E, 2E and 6E respectively. The ‘E’ signifies that the numbers were determined purely experientially.
Ok. I’m sure you already understand all that. Now here’s the problem.
You insist that there is no problem with comparing 10A and 6A. After all, they are both determined in the same way: multiplying an experience by its instances.
I am saying there is a problem with that. The problem is that saying 10A is more than 6A makes no sense. Why not? Because, importantly, what goes into determining the 10A and 6A are 2E and 6E respectively: 2E x 5 = 10A. 6E x 1 = 6A. So what?
Well think about it. 2E x 5 instances is really just 2E, 2E, 2E, 2E, 2E.
And 6E x 1 instance is really just 6E.
So when you assert 10A is more than 6A, you are really just asserting that (2E, 2E, 2E, 2E, 2E) is more than 6E.
But then notice that, at bottom, you are still working with the dimension of experience (E) - the dimension of how badly something feels. The problem for you, then, is that the only intelligible form of comparison on this dimension is the “is experientially more bad than” (i.e. is experientially worse than) comparison.
(Of course, there is also the dimension of instances, and an intelligible form of comparison on this dimension is the “is more in instances than” comparison. For example, you can say 5 minor headaches is more in instances than 1 major headache (i.e. 5 > 1). But obviously, the comparison we care about is not merely a comparison of instances.)
Analogously, when you are working with the dimension of weight (the dimension of how much something weighs), the only intelligible form of comparison is “weighs more than”.
Now, you keep insisting that there is an analogy between
1) your way of comparing the amounts of pain of various pain episodes (e.g. 5 minor headaches vs 1 major headache), and
2) how we normally compare the weights of various things (e.g. 5 small oranges vs 1 big orange).
For example, you say,
No, I am effectively saying that the weight of five oranges is more than the weight of one orange.
So let me explain why they are DIS-analogous. Consider the following example:
Case 1: Five small oranges, 2lbs each. (Just like 5 minor headaches, each feeling like a 2).
Case 2: One big orange, 6lbs. (Just like 1 major headache that feels like a 6).
Now, just as the 2 of a minor headache is determined by how badly it feels, the 2 of a small orange is determined by how much it weighs. So just as we write, 2E x 5 = 10A, we can similarly write 2W x 5 = 10A. And just as we write, 6E x 1 = 6A, we can similarly write 6W x 1 = 6A.
Now, if you assert that (the total amount of weight represented by) 10A is more than 6A, I would have NO problem with that. Why not? Because the comparison “is more than” still occurs on the dimension of weight (W). You are saying 5 small oranges WEIGH more than 1 big orange. The comparison thus occurs on the SAME dimension that was used to determine the numbers 2 and 6 (numbers that in turn determined 10A and 6A): a small orange was determined to be 2 by how much it WEIGHED. Likewise with the big orange. And when you say 10A is more than 6A, the comparison is still made on that dimension.
By contrast, when you assert that (the total amount of pain represented by) 10A is more than 6A, the “is more than” does not occur on the dimension of experience anymore. It does not occur on the dimension of how badly something feels anymore. You are not saying that 5 minor headaches spread among 5 people is EXPERIENTIALLY WORSE than 1 major headache had by 1 person. You are saying something else. In other words, the comparison does NOT occur on the same dimension that was used to determine the numbers 2 and 6 (numbers that in turn determined 10A and 6A): a minor headache was determined to be 2 by how EXPERIENTIALLY BAD IT FELT. Likewise with the major headache. Yet, when you say 10A is more than 6A, you are not making a comparison on that dimension anymore.
So I hope you see how your way of comparing the amounts of pain between various pain episodes is disanalogous to how we normally compare the weights between various things.
Now, just as the dimension of weight (i.e. how much something weighs) and the dimension of instances (i.e. how many instances there are) do not combine to form some substantive third dimension on which to compare 5 small oranges with a big orange, the dimension of experience (i.e. how badly something feels) and the dimension of instances do not combine to form some substantive third dimension on which to compare 5 minor headaches spread among 5 people and 1 major headache had by one person. At best, they combine to form a trivial third dimension consisting in their collection/conjunction, on which one can intelligibly compare, say, 32 minor headaches with 23 minor headaches, irrespective of how the 32 and 23 minor headaches are spread. This trivial dimension is the dimension of “how many instances (i.e. how much) of a certain pain there is”. On this dimension, 5 minor headaches spread among 5 people cannot be compared with a MAJOR headache, because they are different pains, but 5 minor headaches spread among 5 people can be compared with 5 minor headaches all had by 1 person. Moreover, the result of such a comparison would be that they are the same on this dimension (as I allowed in an earlier reply). But this is a small victory given that this dimension won’t allow any comparisons between different pains (e.g. 5 minor headaches and a major headache).
Just because two things are different doesn’t mean they are incommensurate.
But I didn’t say that. As long as two different things share certain aspects/dimensions (e.g. the aspect of weight, the aspect of nutrition, etc...), then of course they can be compared on those dimensions (e.g. the weight of an orange is more than the weight of an apple, i.e., an orange weighs more than an apple).
So I don’t deny that two different things that share many aspects/dimensions may be compared in many ways. But that’s not the problem.
The problem is that when you say that the amount of pain involved in 5 minor headaches spread among 5 people is more than the amount of pain involved in 1 major headache (i.e., 5 minor headaches spread among 5 people involves more pain than 1 major headache), you are in effect saying something like the WEIGHT of an orange is more than the NUTRITION of an apple. This is because the former “amount of pain” is used in a non-purely experiential sense while the latter “amount of pain” is used in a purely experiential sense. When I said you are comparing apples to oranges, THIS is what I meant.
Wow, their name says it all. I didn’t know about OPIS—I’ll definitely check them out. Will potentially be very useful for my own charitable activities.
Also, thanks for the link to Animal Charity Evaluators—didn’t know about them either. Although, given that the numbers don’t matter to me in trade-off cases, I don’t know if it will make a difference. It would if it showed me that donating to another animal charity would help the EXACT same animals I’d help via donating to PETA AND then some (i.e. even more animals). If donating to another animal charity helped different animals (e.g. a different cow than a cow I would have helped by donating to PETA), then even if I could help more animals by donating to this other charity, I would have no overwhelming reason to do so, because the cow who I would thereby be neglecting would end up suffering no less than any one of the other animals otherwise would, and as I argued in response to Objection 2, who suffers matters.
Thanks for both suggestions though, Evan!
Note, I have since removed PETA from my post because the point of my post was just to question EA and not to suggest charities to donate to. Thanks for making me realize this.
You write, “Agree with others that overusing the word ‘utilitarianism’ seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree).”
One thing I am sure of about effective altruism is that it endorses helping the greater number, all other things being equal (by which I am here only concerned with the quality of pain being equal, for simplicity’s sake). So, for example, if $10 can be used to either save persons A and B each from some pain or C from a qualitatively identical pain, EA would say that it is morally better to save the two over the one.
Now, this in itself does not mean that effective altruism believes that it makes sense to
1. sum together certain people’s pain and to compare said sum to the sum of other people’s pain in such a way as to be able to say that one sum of pain is in some sense greater/equal to/lesser than the other, and
2. say that the morally best action is the one that results in the least sum of pain and the greatest sum of pleasure (which is more-or-less utilitarianism)
(Note that 2. assumes the intelligibility of 1.; see below)
The reason is that there are also non-aggregative ways to justify why it is better to save the greater number, at least when all other things are equal. For a survey of such ways, see “Saving Lives, Moral Theory, and the Claims of Individuals” (Otsuka, 2006). However, I’m not aware that effective altruism justifies why it’s better to save the greater number, all else equal, via these non-aggregative ways. Likely, it is purposely silent on this issue. Ben Todd (in private correspondence) informed me that “effective altruism starts from the position that it’s better to help the greater number, all else equal. Justifying that premise in the first place is in the realm of moral philosophy.” If that’s indeed the case, we might say that all effective altruism says is that the morally better course of action is the one that helps more people, everything else being equal (e.g. when the suffering to each person involved in the choice situation is qualitatively the same), and (presumably) also sometimes even when everything isn’t equal (e.g. when the suffering to each person in the bigger group might be somewhat less painful than the suffering to each person in the smaller group).
Insofar as effective altruism isn’t in the business of justification, perhaps moral theories shouldn’t be mentioned at all in a presentation about effective altruism. But inevitably, people considering joining the movement are going to ask why it is better to save the greater number, all else equal (e.g. A and B instead of C), or even sometimes when all else isn’t equal (e.g. one million people each from a relatively minor pain instead of one other person from a relatively greater pain). And I think effective altruists ask themselves that question too. The OP might have asked it and thought utilitarianism offers the natural justification: it is better to save A and B instead of C (and the million instead of the one) because doing so results in the least sum of pain. So utilitarianism clearly offers a justification (though one might question whether it is an adequate justification). On the other hand, it is not clear to me at all how other moral theories propose to justify saving the greater number in these two kinds of choice situations. So it is not surprising that the OP has associated utilitarianism with effective altruism. I am sympathetic.
A bit more on utilitarianism: roughly speaking, according to utilitarianism (or the principle of utility), among all the actions we can undertake at any given moment, the right action (i.e. the action we ought to take) is the one that results in the least sum of pain and the greatest sum of pleasure.
To figure out which action is the right action among a range of possible actions, we are to, for each possible action, add up all its resulting pleasures and pains. We are then to compare the resulting state of affairs corresponding to each action to see which resulting state of affairs contains the least sum of pain and greatest sum of pleasure. For example, suppose you can either save one million people each from a relatively minor pain or one other person from a relatively greater pain, but not both. Then you are to add up all the minor pains that would result from saving the single person, and then add up all the major pains (in this case, just 1) that would result from saving the million people, and then compare the two states of affairs to see which contains the least sum of pain.
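The aggregation procedure described here can be sketched in code. The pain numbers below are hypothetical placeholders of my own, chosen only to make the comparison concrete:

```python
# Utilitarian comparison for the example above: save a million people
# each from a minor pain, or one other person from a greater pain.
# Hypothetical units: minor pain = 1, major pain = 1000.

MINOR_PAIN = 1
MAJOR_PAIN = 1_000
N_MINOR = 1_000_000

# Total resulting pain under each action (i.e. the pains left unprevented):
pain_if_we_save_the_one = N_MINOR * MINOR_PAIN      # the million still suffer
pain_if_we_save_the_million = 1 * MAJOR_PAIN        # the one still suffers

# Utilitarianism picks the action whose outcome contains the least pain:
if pain_if_we_save_the_million < pain_if_we_save_the_one:
    print("save the million")
else:
    print("save the one")
```

On these placeholder numbers the sums favour saving the million; note that the whole calculation presupposes that distinct people’s pains can be summed and compared at all, which is precisely the assumption questioned below.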
From this we can clearly see that utilitarianism assumes that it makes sense to aggregate distinct people’s pains and to compare these sums in such a way as to be able to say, for example, that the sum of pain involved in a million people’s minor pains is greater (in some sense) than one other person’s major pain. Of course, many philosophers have seriously questioned the intelligibility of that.
Because you told me that it’s the same amount of pain as five minor toothaches and you also told me that each minor toothache is 1 base unit of pain.
Where in the supposition or the line of reasoning that I laid out earlier (i.e. P1) through to P5)) did I say that 1 major headache involves the same amount of pain as 5 minor toothaches?
I attributed that line of reasoning to you because I thought that was how you would get to C) from the supposition that 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.
But you then denied that that line of reasoning represents your line of reasoning. Specifically, you denied that P1) is the basis for asserting P2). When I asked you what your basis for P2) is, you asserted that I told you that 1 major headache involves the same amount of pain as five minor toothaches. But where did I say this?
In any case, it would certainly help if you described your actual step by step reasoning from the supposition to C), since, apparently, I got it wrong.
If you mean that it feels worse to any given person involved, yes it ignores the difference, but that’s clearly the point, so I don’t know what you’re doing here other than merely restating it and saying “I don’t agree.”
I’m not merely restating the fact that Reason S ignores this difference. I am restating it as part of a further argument against your sense of “involves more pain than” or “involves the same amount of pain as”. The argument in essence goes:
P1) Your sense relies on Reason S.
P2) Reason S does not care about pain-qua-how-it-feels (because it ignores the above stated difference).
P3) We take pain to matter because of how it feels.
C) Therefore, your sense is not in harmony with why pain matters (or at least why we take pain to matter).
I had to restate that Reason S ignores this difference as my support for P2, so it was not merely stated.
On the other hand, you do not care how many people are in pain, and you do not care how much pain someone experiences so long as there is someone else who is in more pain, so if anyone’s got to figure out whether or not they “care” enough it’s you.
Both accusations are problematic.
The first accusation is not entirely true. I grant that I don’t care about how many people are in pain in situations where I have to choose between helping, say, Amy and Susie or just Bob (i.e. situations where a person in the minority party does not overlap with anyone in the majority party). However, I would care about how many people are in pain in situations where I have to choose between helping, say, Amy and Susie or just Amy (i.e. situations where the minority party is a mere subset of the majority party). This is due to the strict Pareto principle, which would make Amy and Susie each suffering morally worse than just Amy suffering, but would not make Amy and Susie suffering morally worse than Bob suffering. I don’t want to get into this at this point because it’s not very relevant to our discussion. Suffice it to say that it’s not entirely true that I don’t care about how many people are in pain.
The second accusation is plainly false. As I made clear in my response to Objection 2 in my post, I think who suffers matters. As a result, if I could either save one person from suffering some pain or another person from suffering a slightly lesser pain, I would give each person a chance of being saved in proportion to how much each has to suffer. This is what I think I should do. Ironically, your second accusation against me is precisely true of the position you stand for.
You’ve pretty much been repeating yourself for the past several weeks, so, sure.
In my past few replies, I have:
1) Outlined in explicit terms a line of reasoning that got from the supposition to C), which I attributed to you.
2) Highlighted that that line of reasoning appealed to Reason S.
3) On that basis, argued that your sense of “involves the same amount of pain as” goes against the spirit of why pain matters.
If that comes across to you as “just repeating myself for the past several weeks”, then I can only think that you aren’t putting enough effort into trying to understand what I’m saying.
the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person.
No, both equivalencies are justified by the fact that they involve the same amount of base units of pain.
So you’re saying that just as 5 MiTs/5 people is equivalent to 5 MiTs/1 person because both sides involve the same amount of base units of pain, 5 MiTs/1 person is equivalent to 1 MaT/1 person because both sides involve the same amount of base units of pain (and not because both sides give rise to what-it’s-likes that are experientially just as bad).
My question to you then is this: On what basis are you able to say that 1 MaT/1 person involves 5 base units of pain?
But Reason S doesn’t give a crap about how bad the pains on the two sides of the equation FEEL
Sure it does. The presence of pain is equivalent to feeling bad. Feeling bad is precisely what is at stake here, and all that I care about.
Reason S cares about the amount of base units of pain there are because pain feels bad, but in my opinion, that doesn’t sufficiently show that it cares about pain-qua-how-it-feels. It doesn’t sufficiently show that it cares about pain-qua-how-it-feels because 5 base units of pain all experienced by one person feels a whole heck of a lot worse than anything felt when 5 base units of pain are spread among 5 people, yet Reason S completely ignores this difference. If Reason S truly cared about pain-qua-how-it-feels, it cannot ignore this difference.
I understand where you’re coming from though. You hold that Reason S cares about the quantity of base units of pain precisely because pain feels bad, and that this fact alone sufficiently shows that Reason S is in harmony with the fact that we take pain to matter because of how it feels (i.e. that Reason S cares about pain-qua-how-it-feels).
However, given what I just said, I think this fact alone is too weak to show that Reason S is in harmony with the fact that we take pain to matter because of how it feels. So I believe my objection stands.
Have we hit bedrock?
I was trying to keep the discussions of ‘which kind of pain is morally relevant’ and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as to make this unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.
I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn’t take any action, and that’s just absurd. Therefore, my way of determining total pain is problematic. Here “a resulting state of affairs” is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.
Well, if who suffered didn’t matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:
Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 2.
Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move onto Step 3.
And so forth…
According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morally better than others, and we should act so as to realize them.
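The stepwise procedure above can be sketched in code. This is a minimal illustration only; the function name and the use of numeric well-being levels (higher = better off) are my assumptions, not anything stated in the discussion.

```python
# Sketch of the leximin comparison described above (illustrative only).
# Each state of affairs is a list of numeric well-being levels, one per
# person, where a higher number means that person is better off.

def leximin_better(state_a, state_b):
    """Return the leximin-better state, or None if they are tied.

    Compare the worst-off person of each state; the state whose worst-off
    person is better off is morally better. On a tie, set those two aside
    and compare the next-worst pair, and so forth.
    """
    a = sorted(state_a)  # ascending: worst off first
    b = sorted(state_b)
    for x, y in zip(a, b):
        if x > y:
            return state_a
        if y > x:
            return state_b
    return None  # every compared pair was tied

# E.g. two states with the same total pain (-5 each) are still ranked:
# leximin_better([-5, 0], [-4, -1]) favours the second state, since its
# worst-off person (-4) is better off than the first's (-5).
```

This illustrates the point in the text: even with equal totals, leximin can still rank states, so not all resulting states of affairs come out morally on a par.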
My appeal to leximin is not ad hoc because it takes an individual’s suffering seriously, which is in line with my approach. Notice that leximin can be used to justify saving Susie and Amy over Bob. I don’t actually endorse leximin because leximin does not take an individual’s identity seriously (i.e. it doesn’t treat who suffers as morally relevant, whereas I do: I think who suffers matters).
So that is one response I have to your argument: it grants you that the total pain in each resulting state of affairs would be the same and then argues that this does not mean that all resulting states of affairs would be morally just as bad.
Another response I have is that, most probably, different states of affairs will involve different amounts of pain, and so some states of affairs will be morally worse than others just based on total pain involved. This becomes more plausible when we keep in mind what the maximum amount of pain is on my approach. It is not the most intense pain, e.g. a torture session. It is not the longest pain, e.g. a minor headache that lasts one’s entire life. Rather, it is the most intense pain over the longest period of time. The person who suffers maximum pain is the person who suffers the most intense pain for the longest period of time. Realizing this, it is unlikely that each possible action will lead to a state of affairs involving this. (Note that this is to deny A1.)
Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).
But this seems extremely far removed from any day to day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.
To give each possible action an equal chance is certainly not to flip a coin between murdering someone or not. At any given moment, I have thousands (or perhaps an infinite number) of possible actions I could take. Murdering the person in front of me is but one. (There are many complexities here that make the discussion hard like what counts as a distinct action.)
However, I understand that the point of your objection is that my approach can allow the murder of an innocent. In this way, your objection is like the classical argument against utilitarianism. Anyways, I guess, like effective altruism does, I can recognize rules that forbid murder, etc. I should clarify that my goal is not to come up with a complete moral theory as such. Rather, it is to show that we shouldn’t use the utilitarian way of determining “total pain”, which underlies effective altruism.
I have argued for this by
1) arguing that the utilitarian way of determining “total pain” goes against the spirit of why we take pain to matter in the first place. In response, you have suggested a different framing of utilitarianism on which they are determining a “total moral value” based on people’s pains, which is different from determining a total pain. I still need to address this point.
2) responding to your objection against my way of determining “total pain” (first half of this reply)
Thanks for the exposition. I see the argument now.
You’re saying that, if we determined “total pain” by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.
I’ve since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.
Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.
My response:
JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn’t seem to be based on unpredictability. If A1 and A2 are true, then each possible action more-or-less seems to inevitably result in a different person suffering maximal pain.
Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).
I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don’t find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).
So you’re suggesting that most people aggregate different people’s experiences as follows:
FYI, I have since reworded this as “So you’re suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:”
I think it is a more precise formulation. In any case, we’re on the same page.
Basically I think sentences like:
“I don’t think what we ought to do is to OUTRIGHT prevent the morally worse case”
are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using ‘morally worse’ in a nonstandard way (and possibly use a different term). I have the intuition that if you say “X is the morally relevant factor” then which actions you say are right will depend solely on how they affect X.
The way I phrased Objection 1 was as follows: “One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie.”
Notice that this objection in argument form is as follows:
P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.
P2) We ought to prevent the morally worst case.
C) Therefore, we should help Amy and Susie over Bob.
My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse (i.e. twice as morally bad) as one person suffering.
Given this premise, I’ve been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of “involves more pain than”. So I recently started arguing that my sense is the sense that really matters.
Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But for other moral theorists, they would not because not all things that they take to matter (i.e. to be morally relevant, to have moral value, etc) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience matters, and who suffers it matters. I think the latter morally relevant thing is best captured as a side constraint.
However, you are right that I should make this aspect of my work more clear.
Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)
I think thinking in terms of ‘total pain’ is not normally how this is approached; instead one thinks about converting each person’s experience into ‘utility’ (or ‘moral badness’ etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don’t know if you find this formulation more intuitively acceptable (in some sense it feels like it respects your reason for caring about pain more).
So you’re suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:
Assign a moral value to each person’s experiences based on its overall what-it’s-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it’s-like-of-going-through-5-headaches is. If going through 5 such headaches is about experientially as bad as going through 1 major headache, then we would assign the same moral value to someone’s 5 minor headaches as we would to someone else’s 1 major headache.
We then add up the moral value assigned to each person’s experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.
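If I have understood this two-step formulation correctly, it could be sketched as follows. The numeric badness values and the function names are illustrative assumptions on my part, not anything from the discussion.

```python
# Sketch of the two-step aggregation described above (illustrative values).
# Each person's experiences are given as numeric badness values (more
# negative = experientially worse).

def personal_moral_value(experiences):
    """Step 1: collapse one person's experiences into a single moral value.
    Summing is a stand-in for judging the overall what-it's-like: 5 minor
    headaches at -1 each come out the same as 1 major headache at -5."""
    return sum(experiences)

def global_moral_value(people):
    """Step 2: add every person's moral value into a global figure."""
    return sum(personal_moral_value(exps) for exps in people)

# One person with 5 minor headaches vs. one person with 1 major headache:
state_a = [[-1, -1, -1, -1, -1]]
state_b = [[-5]]
# Both come out at -5, so this approach ranks them as morally equivalent.
```

The key design feature is that the collapse to a single value happens per person first (respecting each person's overall what-it's-like), and only then are the per-person values summed across people.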
This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy’s and Susie’s lives or Bob’s life, but we cannot save all. Who do we save? Most people would reason that we should save Amy’s and Susie’s lives because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don’t think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn’t want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy’s and Susie’s deaths being a morally worse thing than Bob’s death, and so I would at least agree that what we ought to do in this case wouldn’t be to give everyone a 50% chance.
In any case, if we assign a moral value to each person’s experience in the same way that we might assign a moral value to each person’s life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1., I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I’ve attributed to kbog). (I added the “[...]” to make the sentence structure more clear.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don’t think what we ought to do is to OUTRIGHT prevent the morally worse case; rather we ought to give a higher chance to preventing the morally worse case proportional to how much morally worse it is than the other case. I will say more about this below.
Then I am really not sure at all what you are meaning by ‘morally worse’ (or ‘right’!). In light of this, I am now completely unsure of what you have been arguing the entire time.
Please don’t be alarmed (haha). I assume you’re aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognizes other side constraints such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint is true of some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.
You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here’s an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of “more pain” (i.e. my way of quantifying and comparing pains), the single person suffering the major headache is morally worse than the 100 people each suffering a minor headache because his major headache is experientially worse than any of the other people’s minor headaches. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is, experientially speaking, than any one of the others (i.e. how much morally worse his suffering is relative to the 100’s suffering).
Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person’s suffering.
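The proportional-chance policy in the last two paragraphs amounts to a weighted lottery, which can be sketched as below. The severity numbers and party names are my own illustrative assumptions (and on the view described above, a party's severity is set by the worst individual suffering in it, not by a sum).

```python
# Sketch of a weighted lottery implementing the 'proportional chance'
# policy described above. Severity numbers are illustrative.
import random

def choose_whom_to_help(parties, rng=random):
    """parties: dict mapping party -> severity of its suffering (> 0).
    Returns one party, picked with probability proportional to severity."""
    names = list(parties)
    weights = [parties[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# One person facing a major headache (severity 5) vs 100 people each facing
# a minor headache (severity 1, judged by the worst individual suffering):
parties = {"one person, major headache": 5,
           "100 people, minor headaches": 1}
# In the long run the single person is helped about 5 times out of 6.
```

Note the contrast with outright prevention: no party's chance ever drops to zero, which is exactly the feature the side constraint is meant to secure.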
I hope that helps.
Hey Alex,
Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:
(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as ‘two people experiencing the same pain is twice as bad as one person experiencing that pain’ (there is some change from discussing ‘total pain’ to ‘badness’ here, but I think it still fits with our usage).)
When you say that many people here would embrace the assumption that “two people experiencing the same pain is twice as bad as one person experiencing that pain”, are you using “bad” to mean “morally bad?”
I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.
Now, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse (i.e. twice as morally bad) as one person suffering.
However, based on my preferred sense of “more pain”, two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.
Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn’t I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say?)
I don’t think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy’s suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that “One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter.” Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn’t begrudge another person an improvement that costs us nothing. Amy shouldn’t begrudge Susie an improvement that costs her nothing.
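Otsuka's two conditions can be written down directly. This is only a sketch; the dictionary representation and the 0/-1 benefit levels are illustrative assumptions, not part of the quoted principle.

```python
# Sketch of strict Pareto superiority as quoted from Otsuka above.
# A distribution maps each person in a fixed population to a numeric
# benefit level (here, 0 = not suffering, -1 = suffering; illustrative).

def strictly_pareto_superior(dist_a, dist_b):
    """True iff (i) someone is better off under dist_a and
    (ii) nobody is worse off under dist_a than under dist_b."""
    assert dist_a.keys() == dist_b.keys(), "same population required"
    someone_better = any(dist_a[p] > dist_b[p] for p in dist_a)
    nobody_worse = all(dist_a[p] >= dist_b[p] for p in dist_a)
    return someone_better and nobody_worse

both_suffer = {"Amy": -1, "Susie": -1, "Bob": 0}
just_amy    = {"Amy": -1, "Susie": 0,  "Bob": 0}
bob_suffers = {"Amy": 0,  "Susie": 0,  "Bob": -1}

# Just Amy suffering is strictly Pareto superior to Amy and Susie suffering,
# but Bob suffering is not: condition (ii) fails because Bob is worse off.
```

This mirrors the point in the text: the subset case (just Amy) satisfies both conditions, while the Bob case fails condition (ii), so the Pareto principle ranks the former pair but not the latter.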
Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of “more pain”. And since I think my preferred sense of “more pain” is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.
A couple of brief points in favour of the classical approach: It in some sense ‘embeds naturally’ in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).
I’m not sure I see the advantage here, or what the alleged advantage is. I don’t see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.
It also has other pleasing properties, such as the veil of ignorance, as discussed in other comments.
The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone’s position). However, as I argued, this stipulation is not true OF the real world because each of us didn’t actually have an equal chance of being in any of our position, and what we should do should be based on the facts, and not on a stipulation. In kbog’s latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because “The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system.” I have yet to respond to this latest reply because I have been too busy arguing about our senses of “more pain”, but if I were to respond, I would say this: “I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world.” Anyways, I don’t want to say too much here, because kbog might not see it and it wouldn’t be fair if you only heard my side. I’ll respond to kbog’s reply eventually (haha) and you can follow the discussion there if you wish.
Let me just add one thing: Based on Singer’s intro to Utilitarianism, Harsanyi argued that the veil of ignorance also entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls’ claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn’t support classical utilitarianism which just says we ought to maximize utility and not average utility.
One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else.
Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.
To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).
While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers could matter in themselves.
Hi Risto,
You’ve done such a thorough job, well done!
One tip I would add under “How to read philosophy” is to read on when something in the book isn’t making sense, instead of spending a lot of time trying to make sense of it on the spot. The reason is that, oftentimes, later passages help to clarify what the writer meant by earlier passages, and those earlier passages can be hopelessly hard to understand or make precise without having read the later ones.
P.S. I’m new to this forum and would appreciate it if I could get some likes so that I could make a post! Thanks.