Firstly, people who believe in the correct account of counterfactual impact would have incentives to coordinate in the case you outline. Alice would maximise her counterfactual impact (defined as I define it) by coordinating with Bob on project R. The counterfactual impact of her coordinating with Bob would be +5 utility compared to scenario 1. There is no puzzle here.
Secondly, dividing counterfactual impact by contribution does not solve all of these coordination problems. If everyone reasoned according to the Shapley value, then no rational altruist would ever vote, even when the true theory dictates that the expected value of doing so is very high.
Also consider the $1bn benefits case outlined above. Suppose the situation is as described, but my action costs $2 and I take one billionth of the credit for the success of the project. In that case, the Shapley-adjusted benefit of my action would be $1 and the cost $2, so my action would not be worthwhile. I would therefore leave $1bn of value on the table.
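To make the arithmetic concrete, here is the calculation as a quick sketch (the variable names are mine, and the figures are just the ones from the example):

```python
# Sketch of the credit-splitting arithmetic described above; nothing
# here is a standard formula, just the division being applied.
total_benefit = 1_000_000_000     # $1bn if the project succeeds
credit_share = 1 / 1_000_000_000  # I take one billionth of the credit
my_cost = 2                       # my action costs $2

shapley_adjusted_benefit = total_benefit * credit_share  # $1.0
print(shapley_adjusted_benefit >= my_cost)  # False: the rule says don't act
```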
For the first point, see my response to Carl above. I think you’re right in theory, but in practice it’s still a problem.
For the second point, I agree with Flodorner that you would use either the Shapley value or the probability of changing the outcome, not both. I don’t know much about Shapley values, but I suspect I would agree with you that they are suboptimal in many cases. I don’t think there is a good theoretical solution besides “consider every possible outcome and choose the best one”, which we obviously can’t do as humans. Shapley values are one tractable way of attacking the problem without having to think about all possible worlds, but I’m not surprised that there are cases where they fail. I’m advocating for “think about this scenario”, not “use Shapley values”.
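For readers who, like me, haven’t dug into the details, here is a minimal sketch of how a Shapley value is computed for a toy game (the helper function, the player names, and the payoff of 10 are all hypothetical):

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution to the coalition
    over every possible order in which players could join."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Hypothetical game: a project worth 10 succeeds only if both Alice
# and Bob work on it; neither produces anything alone.
def v(coalition):
    return 10 if coalition >= {"alice", "bob"} else 0

print(shapley_values(["alice", "bob"], v))  # {'alice': 5.0, 'bob': 5.0}
```

In this symmetric case the Shapley split matches intuition; the disagreement here is about cases where the split and the counterfactual come apart.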
I think the $1bn benefits case is a good example of a pathological case where Shapley values fail horribly (assuming they do what you say they do; again, I don’t know much about them).
My overall position is something like: “In the real world, when we can’t consider all possibilities, one common failure mode in impact calculations is failing to consider the scenario in which all of the participants who contributed to this outcome instead do other altruistic things with their money.”
At this point, I think that to analyze the $1bn case correctly, you’d have to subtract everyone’s opportunity cost in the calculation of the Shapley value (if you want to use it here). This way, the example should yield what we expect.
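To gesture at what I mean, here is a sketch in the same style as the one above (my own framing of the adjustment, not a standard definition, and the numbers are hypothetical):

```python
# Hypothetical numbers: the joint project is worth 10, and Alice's
# next-best altruistic option is worth 8. Subtract each member's
# opportunity cost from the coalition value before taking marginals.
def v_net(coalition):
    gross = 10 if coalition >= {"alice", "bob"} else 0
    opportunity = {"alice": 8, "bob": 0}
    return gross - sum(opportunity[p] for p in coalition)

# Two-player Shapley value for Alice: average her marginal
# contribution over the two possible join orders.
alice = 0.5 * ((v_net({"alice"}) - v_net(set()))
               + (v_net({"alice", "bob"}) - v_net({"bob"})))
print(alice)  # -3.0: the adjusted value sends Alice to her outside option
```

Whether this recovers the intuitive answer in the $1bn case depends on what the contributors’ outside options are actually worth.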
I might do a more general writeup about Shapley values (their advantages, disadvantages, and when it makes sense to use them) if I find the time to read a bit more about the topic first.