I think the reason summing the counterfactual impact of multiple people leads to weird results is not a problem with counterfactual impact itself but with how you are summing it. Each individual's counterfactual impact is the difference between world A, where both act, and the world where only the other person acts (world B or C). Summing the two impacts adds those two differences together, and your calculation then treats that sum as if it were the difference between world A and world D, where nobody acts.
The true issue with maximising counterfactual impact seems to arise when actors act cooperatively but evaluate their actions as individuals. When acting cooperatively you should compare your counterfactual to world D; when acting individually, to world B or C.
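To make this concrete, here is a minimal sketch with made-up numbers: a project worth 10 utility that requires both actors.

```python
# Made-up numbers: a project worth 10 utility that needs both actors.
# World A: both act; B: only actor 1 acts; C: only actor 2 acts;
# D: nobody acts.
value = {"A": 10, "B": 0, "C": 0, "D": 0}

impact_1 = value["A"] - value["C"]  # the world without actor 1 is C: 10
impact_2 = value["A"] - value["B"]  # the world without actor 2 is B: 10

print(impact_1 + impact_2)      # 20: the naive sum double-counts
print(value["A"] - value["D"])  # 10: the actual difference the pair makes
```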
The Shapley value is not immune to error either; I can see three ways it could lead to poor decision-making:
For the Vaccine Reminder example, it seems stranger to me to attribute impact to people who would otherwise have had no impact. We then get the same double-counting problem, or in this case endless division, which is worse because it can dissuade you from high-impact options. If I am not mistaken, the Shapley value here is divided between the NGO, the government, the doctor, the nurse, the people driving logistics, the person who built the roads, the person who trained the doctor, the person who made the phones, the person who set up the phone network, and the person who invented electricity. Everyone is then attributed a tiny fraction of the impact, when only the vaccine reminder intentionally caused it. Depending on the scope of actors we consider, this could massively reduce the apparent impact of the action.
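To sketch the arithmetic: if every actor in the chain is necessary, this is a unanimity game, and the Shapley value gives each of n actors an equal 1/n share. The five-actor list and the 100-utility figure below are my own assumptions, chosen only for illustration.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley values for a characteristic function v: frozenset -> payoff.
    Averages each player's marginal contribution over every join order."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    return {p: t / factorial(len(players)) for p, t in totals.items()}

# Unanimity game: all five actors are necessary for the 100-utility
# outcome, so each is attributed 100 / 5 = 20, and the share keeps
# shrinking as we widen the scope of actors we count.
actors = ["NGO", "government", "doctor", "nurse", "phone network"]
print(shapley(actors, lambda s: 100 if len(s) == len(actors) else 0))
```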
Example 6 reveals another flaw: attributing impact this way can lead you to make poor decisions. If you use the Shapley value, then when examining whether to leak the information as the 10th person, you see that the action costs −1 million utils. If I were offered 500,000 utils to share it, then under Shapley I should not do so, as 500,000 − 1M is negative. However, this thinking just prevents me from increasing overall utility by 500,000.
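Reusing the shapley function above, and assuming the simplest version of the leak game, where any single leaker is enough to cause the full −10M harm (my reconstruction; the exact numbers are assumptions):

```python
# Any single leaker triggers the full -10M harm (assumed numbers).
v_leak = lambda s: -10_000_000 if len(s) > 0 else 0

# By symmetry each of 10 leakers gets -10M / 10 = -1M; verified here on
# a smaller instance to keep the permutation count manageable:
print(shapley(list(range(4)), v_leak))  # each: -2,500,000 = -10M / 4

# Counterfactual impact of the 10th leaker, given 9 have already leaked:
print(v_leak(frozenset(range(10))) - v_leak(frozenset(range(9))))  # 0
```

So a Shapley optimizer declines the 500,000 offer (0.5M − 1M is negative) even though their marginal harm as the 10th leaker is zero.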
In Example 7, the counterfactual impact of the applicant who gets the job is not 0 but the impact of the job the lowest-impact applicant ends up taking. Imagine each applicant could earn to give 2 utility and only has time for one job application. Considering counterfactual impact, the first applicant chooses to apply to the EA org and is attributed 100 utility (as is the EA org). The other applicants now enter the space and decide to earn to give, as this has a higher counterfactual impact. They reduce the first applicant's counterfactual impact to 2 but increase overall utility. If we use Shapley values instead, then all the applicants apply to the EA org, as this gives each of them a value of 2.38 instead of 2.
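If I have reconstructed Example 7 correctly (one org and six otherwise-idle applicants, a position worth 100; the labels and characteristic function below are my assumptions), the shapley function above reproduces the 2.38 figure:

```python
# One org and six applicants; the coalition produces 100 only when the
# org and at least one applicant are both present.
players = ["org", "A1", "A2", "A3", "A4", "A5", "A6"]
v_job = lambda s: 100 if "org" in s and len(s - {"org"}) > 0 else 0
print(shapley(players, v_job))
# org: ~85.71; each applicant: 100 / 42, roughly 2.38, which beats the
# 2 utility from earning to give, so Shapley optimizers all apply.
```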
I may have misunderstood the Shapley value here, so feel free to correct me. Overall I enjoyed the post and think it is well worth reading. Criticism of the underlying assumptions behind many EAs' decision-making methods is very valuable.
1.
I have thought about this, and I'm actually biting the bullet. I think that a lot of people get impact for a lot of things, and that even smallish projects depend on a lot of other moving parts, in the direction of "You didn't build that".
I don't agree with some of your examples when taken literally, but I agree with the nuanced thing you're pointing at with them: e.g., building good roads seems very valuable precisely because it helps other projects, and if there is high nurse absenteeism then the nurses who do show up take some of the impact...
I think that if you divide every option's impact by the same factor, say 10x, the ordering of the options by impact remains the same, so this shouldn't dissuade people from doing high-impact things. The interesting thing is that some divisors will be greater than others, and thus the ordering can change. I claim that this change says something interesting.
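As a toy illustration, with made-up numbers:

```python
# Made-up numbers. A uniform divisor preserves the ordering of options:
impacts = {"project_x": 100, "project_y": 30}
print({k: v / 10 for k, v in impacts.items()})  # x still beats y

# Option-specific divisors (say, x depends on far more contributors
# than y) can flip the ordering, and that flip is the signal:
print({"project_x": 100 / 50, "project_y": 30 / 2})  # y now beats x
```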
2.
Not really. If 10 people have already done it, your Shapley value will be positive if you take that bargain. If the thing hasn’t been done yet, you can’t convince 10 Shapley-optimizing altruists to do the thing for 0.5m each, but you might convince 10 counterfactual impact optimizers. As @casebach mentioned, this may have problems when dealing with uncertainty (for example: what if you’re pretty sure that someone is going to do it?).
3.
You’re right. The example, however, specified that the EAs were to be “otherwise idle”, to simplify calculations.