I used to think that a large benefit to a single person was always more important than a smaller benefit to multiple people (no matter how many people experienced the smaller benefit). That’s why I wrote this post asking others for counterarguments. After reading the comments on that post (one of which linked to this article), I became persuaded that I was wrong.
Here’s an additional counterargument. Let’s say that I have two choices:
A. I can save 1 person from a disease that decreases her quality of life by 95%; or
B. I can save 5 people from a disease that decreases their quality of life by 90%.
My intuition is that it is better to save the 5. Now let’s say I get presented with a second dilemma:
B. I can save 5 people from a disease that decreases their quality of life by 90%; or
C. I can save 25 people from a disease that decreases their quality of life by 85%.
My intuition is that it is better to save the 25. Now let’s say I get presented with a third dilemma:
C. I can save 25 people from a disease that decreases their quality of life by 85%; or
D. I can save 125 people from a disease that decreases their quality of life by 80%.
My intuition is that it is better to save the 125. This cycle continues until the seventeenth dilemma:
Q. I can save 152,587,890,625 people from a disease that decreases their quality of life by 15%; or
R. I can save 762,939,453,125 people from a disease that decreases their quality of life by 10%.
My intuition is that it is better to save the 762,939,453,125.
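The ladder of dilemmas above follows a simple pattern: each step multiplies the group size by 5 and lowers the quality-of-life loss by 5 percentage points. A quick sketch of the pattern (the labels A through R are just the option letters used above):

```python
# Each dilemma multiplies the number of people by 5 and reduces the
# quality-of-life loss by 5 percentage points, from A (1 person, 95%)
# to R (762,939,453,125 people, 10%).
for step in range(18):  # options A (step 0) through R (step 17)
    label = chr(ord("A") + step)
    people = 5 ** step
    severity = 95 - 5 * step  # % quality-of-life loss
    print(f"{label}: save {people:,} people from a {severity}% loss")
```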
Since I prefer R over Q and Q over P and P over O and so on and so forth all the way through preferring C over B and B over A, it follows that I should prefer R over A.
In other words, our intuition that providing a large benefit to one person is less important than providing a slightly smaller benefit to several people conflicts with our intuition that providing a very large benefit to one person is more important than providing a very small benefit to an extremely large number of people. Given scope insensitivity, I think the former intuition is probably more reliable.
One last point. I think that EA has a role even under your worldview. It can help identify the worst possible forms of suffering (such as being boiled alive at a slaughterhouse) and the most effective ways to prevent that suffering.
Hi RandomEA,
First of all, awesome name! And secondly, thanks for your response.
My view is that we should give each person a chance of being helped that is proportionate to what they each stand to suffer. It is irrelevant to me how many people stand to suffer the lesser pain. So, for example, in the first choice situation you described, my intuition is to give the single person slightly over a 50% chance of being saved and each of the others slightly under a 50% chance. This is because the single person would suffer slightly more than any one of the others, so she gets a slightly higher chance. It is irrelevant to me how many people have 90% to lose in quality of life, whether it be 5 or 5 billion.
So if 760 billion people each have 10% to lose while the single person has 90% to lose, my intuition is to give the single person roughly a 90% chance of being saved and each of the 760 billion roughly a 10% chance.
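The allocation rule described here, giving each party a chance proportionate to what it stands to suffer, can be sketched as follows (the function name is mine, not from the essay):

```python
def chances(stake_one, stake_other):
    """Split a 100% chance of being helped in proportion to the
    quality-of-life loss each party stands to suffer."""
    total = stake_one + stake_other
    return 100 * stake_one / total, 100 * stake_other / total

# One person stands to lose 90%; each of the others stands to lose 10%.
# Group size is irrelevant under this rule, only the stakes matter:
print(chances(90, 10))  # the single person gets a 90% chance

# In the first choice situation (95% vs. 90% loss), the split is nearly even:
print(chances(95, 90))  # roughly 51.35% vs. 48.65%
```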
In my essay, I in effect argued that everyone would share this intuition if they properly appreciated the following two facts:
1) Were the 760 billion people to suffer, none of them would suffer anywhere near the amount the single person would. Conversely, were the single person to suffer, he/she would suffer far more than any one of the 760 billion.
2) Which individual suffers matters, because it is that particular individual who bears all the suffering.
I assume that we should accept the intuitions that we have when we keep all the relevant facts at the forefront of our mind (i.e. when we properly appreciate them). I believe the intuitions I mentioned above (i.e. my intuitions) are the ones people would have when they do this.
Regarding your second point, I have to think a little more about it!
Let’s say that you have $100,000,000,000,000.
For every $1,000,000,000,000 you spend on buying medicine A, the person in scenario A (from my previous comment) will have an additional 1% chance of being cured of disease A.
For every $200,000,000,000 you spend on buying medicine B, a person in scenario B (from my previous comment) will have an additional 1% chance of being cured of disease B.
For every $40,000,000,000 you spend on buying medicine C, a person in scenario C (from my previous comment) will have an additional 1% chance of being cured of disease C.
...
For every $1.31 you spend on buying medicine R, a person in scenario R (from my previous comment) will have an additional 1% chance of being cured of disease R.
Now consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 5 people with disease B. Based on your response to my comment, it sounds like you would spend $51,355,000,000,000 on the person with disease A (giving her a 51.36% chance of survival) and $9,729,000,000,000 on each person with disease B (giving each of them a 48.64% chance of survival). Is that correct?
Next consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 762,939,453,125 people with disease R. Based on your response to my comment, it sounds like you would spend $90,476,000,000,000 on the person with disease A (giving her a 90.48% chance of surviving) and $12.48 on each person with disease R (giving each of them a 9.53% chance of surviving). Is that correct?
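The quoted dollar figures are consistent with splitting the $100 trillion so that the resulting chances stay in proportion to each party’s stake (95 : 90 in the first case, 95 : 10 in the second). A sketch of the arithmetic, my reconstruction rather than anything stated in the comment:

```python
BUDGET = 100_000_000_000_000  # $100 trillion

# Cost of one additional percentage point of cure-chance, per person:
COST_A = 1_000_000_000_000    # medicine A
COST_B = 200_000_000_000      # medicine B
COST_R = 1.31                 # medicine R (approximately)

# Case 1: one person with disease A vs. 5 people with disease B.
# Stakes are 95% vs. 90%, so the chances split 95 : 90.
chance_a = 100 * 95 / (95 + 90)            # ~51.35%
spend_a = chance_a * COST_A                # ~$51.36 trillion
spend_b_each = (100 - chance_a) * COST_B   # ~$9.73 trillion per person
print(spend_a + 5 * spend_b_each)          # exhausts the $100 trillion

# Case 2: one person with disease A vs. 762,939,453,125 people with disease R.
# Stakes are 95% vs. 10%, so the chances split 95 : 10.
chance_a2 = 100 * 95 / (95 + 10)           # ~90.48%
spend_a2 = chance_a2 * COST_A              # ~$90.48 trillion
spend_r_each = (100 - chance_a2) * COST_R  # ~$12.48 per person
```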
The situations I focus on in my essay are trade-off choice situations, meaning that I can only choose one party to help, and not all parties to various degrees. Thus, if you have an objection to my argument, it is important that we focus on such kinds of situations. Thanks!
Yes, but the situations that EAs face are much more analogous to my second set of hypotheticals. So if you want your argument to serve as an objection to EA, I think you have to explain how it applies to those sorts of cases.
Not true. Trade-off situations are everywhere. Whenever you donate to one charity, it is at the expense of another charity working in a different area, and thus at the expense of the people that other charity would have helped. Even within malaria, if you donate to one charity, you are helping the people it serves at the expense of the people another anti-malaria charity serves. That’s the reality.
And if you’re thinking, “Well, can’t I donate some to each malaria-fighting charity?”, the answer is yes; but whatever money you donate to the second charity comes at the expense of the people the first charity could have helped had it received your entire donation rather than just part of it. The trade-off choice is between helping some of the people in the second charity’s area and helping some additional people in the first charity’s area. You cannot help them all.
In principle, as long as one doesn’t have enough money to help everyone, one will always face a trade-off choice situation when deciding where to donate.
I think the second set of hypotheticals does involve trade-offs. When I say that a person has an additional 1% chance of being cured, I mean that they have an additional 1% chance of receiving a medicine that will definitely cure them. If you spend more money on medicines to distribute among people with disease Q (thus increasing the chance that any given person with disease Q will be cured), you will have less money to spend on medicines to distribute among people with disease R (thus decreasing the chance that any given person with disease R will be cured).
The reason I think that the second set of hypotheticals is more analogous to the situations EAs face is that there are typically already many funders in the space, meaning that potential beneficiaries often have some chance of being helped even absent your donation. It’s quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped.
My apologies. After re-reading your second set of hypotheticals, I think I can answer your questions.
In the original choice situation in my essay, the device used to determine each party’s chance of being helped was independent of the donation amount. For example, in the choice situation between Bob, Amy, and Susie, the donation was $10, and the device used to give each a 50% chance of being saved from a painful disease was a coin.
However, in your hypotheticals the donation itself is used as the device, which confused me at first. But yes, at the end of the day, I would give person A roughly a 90% chance of being saved from his/her suffering and each of the billions of others roughly a 10% chance, regardless of what the dollar breakdown would look like. So, if I understand your hypotheticals correctly, my answer is yes to both of your original questions.
I don’t, however, see the point of using the donation to also act as the device. It seems to needlessly overcomplicate the choice situations.
If your goal is to create a choice situation in which I have to spend a vast amount of money to give person A around a 90% chance of surviving, and the objection you’re thinking of raising is that it is absurd to spend that much to give a single person around a 90% chance of being helped, then my response is:
1) Who suffers matters.
2) What person A stands to suffer is far worse than what any one of the people from the competing group stands to suffer.
I think if we really appreciate those two facts, our intuition is to give person A a 90% chance and each of the others a 10% chance, regardless of the dollar breakdown that involves. Thanks.
Just noticed you expanded your comment. You write, “It’s quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped.” This is not true. There will always be a person in line who isn’t helped, but who would have been helped had you funded the charity working in his area. He may not be the first in line, but he is somewhere in the line waiting to be helped by that charity.
I was simply noting the difference between our two examples. In your example, Bob has no chance of receiving help if you choose the other person. In the real world, my choosing one charity over another will not cause a specific person to have no ex-ante chance of being helped. Instead, it means that each person in the potential beneficiary population has a lower chance of being helped. I wanted my situation to be more analogous to the real world because I want to see how your principle works in practice. It’s the same reason I introduced different prices into the example.
Also, my comment was expanded very shortly after it was originally posted. It’s possible that you saw the original one and while you were writing your response to it I posted my edit.
Hey RandomEA,
Sorry for the late reply. Well, say I’m choosing between the World Food Programme (WFP) and some other charity, and I have $30 to donate. According to WFP, $30 can feed a person for a month (if I remember correctly). If I donate to the other charity, then WFP in its next operation will have $30 less to spend on food, meaning someone who otherwise would have been helped won’t be receiving help. Who that person is, we don’t know. All we know is that he is the person who was next in line, the first to be turned away.
Now, you disagree with this. Specifically, you disagree that it could be said of any SPECIFIC person that, if I don’t donate to WFP, THAT person won’t end up receiving help he otherwise would have. And this is because:
1) HE—that specific person—still had a chance of being helped by WFP even if I didn’t donate the $30. For example, he might have gotten in line sooner than I’m supposing he has. And you will say that this holds true for ANY specific person. Therefore, the phrase “he won’t end up receiving help” is not guaranteed.
2) Moreover, even if I do donate the $30 to WFP, there isn’t any guarantee that he would be helped. For example, HE might have gotten in line way too late for an additional $30 to make a difference for him. And you will say that this holds true for ANY specific person. Therefore, the phrase “that he otherwise would have” is also not guaranteed.
In the end, you will say, all that can be true of any SPECIFIC person is that my donation of $30 would raise THAT person’s chance of being helped.
Therefore, in the real world, you will say, there’s rarely a trade-off choice situation between specific people.
I am tempted to agree with that, but two points:
1) There still seems to be a trade-off choice situation between specific groups of people: i.e. the group helped by WFP and the group helped by the other charity.
2) I think that, at least in refugee camps, there is already a list of all the refugees and a document specifying exactly who is next in line to receive a given service or aid. In these cases, we face a trade-off choice situation between a specific individual (whom we would be helping if we donated to the refugee camp) and whatever group of people would be helped by donating to another charity. I wonder what percentage of real-life situations are like this. Moreover, if you’re looking for real-life trade-off situations between some specific person(s) and some other specific person or group, they are clearly not hard to find. For example, you can help a specific homeless man vs. whoever else. Or you can help a specific person avoid torture by helping pay off a ransom vs. helping whoever else through a charity. Or you can fund a specific person’s cancer treatment vs. whoever else. Etc.
My overall point is that trade-off situations of the kind I describe in my paper are very real and everywhere, EVEN IF it is true that there are also trade-off situations of the nature you describe.
Thanks.