Shapley values are not always the right way to approach these problems. For example, the two thought experiments at the beginning of Parfit’s paper I linked above are specific cases where Shapley values would leave you predictably worse off.
You are most likely aware of this, but I just wanted to clarify that the Shapley value is equal to the counterfactual value when there is only 1 (“live”) agent. So the Shapley value would not leave us predictably worse off if we get the number of agents right.
Point taken, although I think this is analogous to saying: counterfactual analysis will not leave us predictably worse off if we get the probabilities of others deciding to contribute right.
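For readers who want to see the claim concretely, here is a minimal sketch (the value functions `v1` and `v2` are illustrative assumptions, not from the discussion above): a brute-force Shapley computation showing that with one live agent the Shapley value just is the counterfactual value, while with two agents who would each fund the same project it diverges from the (zero) counterfactual value of each.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Brute-force Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: total / factorial(n) for p, total in phi.items()}

# One "live" agent: Shapley value equals counterfactual value,
# v({a}) - v({}) = 10.
v1 = lambda s: 10.0 if "a" in s else 0.0
print(shapley(["a"], v1))        # {'a': 10.0}

# Two agents, either of whom alone suffices to produce the outcome:
# each agent's counterfactual value is 0 (the other would have done
# it anyway), but the Shapley value splits the credit.
v2 = lambda s: 10.0 if s else 0.0
print(shapley(["a", "b"], v2))   # {'a': 5.0, 'b': 5.0}
```

The two-agent case is exactly the kind of situation where counterfactual and Shapley reasoning come apart, which is why getting the number of live agents right matters for the equivalence claimed above.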
Hi Toby,
Regarding:
You are most likely aware of this, but I just wanted to clarify that the Shapley value is equal to the counterfactual value when there is only 1 (“live”) agent. So the Shapley value would not leave us predictably worse off if we get the number of agents right.
Point taken, although I think this is analogous to saying: counterfactual analysis will not leave us predictably worse off if we get the probabilities of others deciding to contribute right.
Agreed. Great point, Toby!