Thank you for this correction, I think you’re right! I had misunderstood how to apply Shapley values here, and I appreciate you taking the time to work through this in detail.
If I understand correctly now, the right way to apply Shapley values to this problem (with X=8, Y=2) is not to work with N (the number of players who end up contributing, which is unknown), but instead to work with N’, the number of ‘live’ players who could contribute (known with certainty here, not something you can select), and then:
N’=3, the number of ‘live’ players who are deciding whether to contribute.
With N’=3, the Shapley value of the coordination is 1⁄3 for each player (expected value of 1 split between 3 people), which is positive.
A positive Shapley value means that all players decide to contribute (if basing their decisions off Shapley values as advocated in this post), and you then end up with N=3.
Have I understood the Shapley value approach correctly? If so, I think my final conclusion still stands (even if for the wrong reasons): a Shapley value analysis will lead to a sub-optimal N (number of players deciding to participate), since the optimal N here is 2 (or 1, which has the same value).
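The arithmetic above can be sanity-checked with a short sketch. The excerpt doesn't state the game's parameters explicitly, so I'm assuming total value V = X = 8, per-contributor cost c = Y = 2, and per-contributor success probability p = 1/2, which reproduce the 1⁄3 figure and the claim that N = 2 (or 1) is optimal:

```python
# Assumed parameters (not stated in this excerpt): V = X = 8 total value,
# c = Y = 2 cost per contributor, p = 1/2 success chance per contributor.
V, c, p = 8, 2, 0.5

def net_value(n):
    """Expected total value minus total cost when n players contribute."""
    return V * (1 - (1 - p) ** n) - n * c

# With N' = 3 symmetric live players, the net surplus is split evenly.
n_live = 3
shapley_per_player = net_value(n_live) / n_live

print(shapley_per_player)                  # 1/3 for each player: positive
print([net_value(n) for n in (1, 2, 3)])   # N = 1 and N = 2 tie; N = 3 is worse
```

Under these assumed numbers, all three players contributing (N = 3) yields a total net value of 1, while N = 1 or N = 2 would yield 2, matching the sub-optimality claim.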
As for whether the framing of the problem makes sense, with N as something we can select: the point I was making was that in a lot of real-world situations, N might well be something we can select. If a group of people have the same goals, they can coordinate to choose N, and then you’re not really in a game-theory situation at all. (This wasn’t a central point of my original comment, but it was the point I was defending in the comment you’re responding to.)
Even if you don’t all have exactly the same goals, or if there are many actors, it seems like you’ll often be able to benefit by communicating and coordinating, and then you’ll be able to improve over the approach of everyone deciding independently according to a Shapley value estimate: e.g. GiveWell recommending a funding allocation split between their top charities.
> A positive Shapley value means that all players decide to contribute (if basing their decisions off Shapley values as advocated in this post), and you then end up with N=3.
Since I was calculating the Shapley value relative to doing nothing, it being positive only means taking the action is better than doing nothing. In reality, there will be other options available, so I think agents will want to maximise their Shapley cost-effectiveness. For the previous situation, it would be:
SCE(N) = \frac{1-(1-p)^N}{N} \cdot \frac{V}{c}.
For the previous values, this would be 7⁄6. That is apparently not very high, considering that donating 1 $ to GWWC leads to at least 6 $ of counterfactual effective donations (see here). However, the Shapley cost-effectiveness of GWWC would be lower than their counterfactual cost-effectiveness… In general, since there are barely any impact assessments using Shapley values, it is a little hard to tell whether a given value is good or bad.
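As a check on the 7⁄6 figure, here is a minimal sketch of the cost-effectiveness formula above, again assuming the parameters V = 8, c = 2 and p = 1/2 (which this excerpt doesn't state explicitly but which reproduce the quoted value):

```python
# Assumed parameters (not stated in this excerpt): V = 8, c = 2, p = 1/2.
V, c, p = 8, 2, 0.5

def sce(n):
    """Shapley cost-effectiveness: per-player gross Shapley value over per-player cost."""
    return (1 - (1 - p) ** n) * V / (n * c)

print(sce(3))  # 7/6, as in the text
```

Note that, under these assumed numbers, SCE(N) is decreasing in N (the success probability has diminishing returns while costs scale linearly), so an agent maximising Shapley cost-effectiveness would prefer smaller N here.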