Seconding this question, and wanted to ask more broadly:
A big component/assumption of the example given is that we can “re-run” simulations of the world in which different combinations of actors were present to contribute, but this seems hard in practice. Do you know of any examples where Shapley values have been used in the “real world” and how they’ve tackled this question of how to evaluate counterfactual worlds?
(Also, great post! I’ve been meaning to learn about Shapley values for a while, and this intuitive example has proven very helpful!)
The first real-world example that comes to mind… isn’t about agents bargaining. It’s statistical models. The idea is that a model has subparts (features) that each contribute to its prediction, and you want to know which are the most important, so you calculate Shapley values (“how well does this model do if it only uses age and sex to predict life expectancy, but not race?”, and so on for the other coalitions).
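To make that concrete, here’s a minimal sketch of the idea. The feature names and the value function `v` below are made up for illustration: `v(S)` stands in for “how well the model predicts when only the features in `S` are available” (in practice you’d retrain or marginalize the model for each coalition). Each feature’s Shapley value is its marginal contribution averaged over all orderings:

```python
from itertools import combinations
from math import factorial

# Hypothetical coalition values: v(S) = model accuracy using only features S.
# These numbers are invented for illustration.
players = ["age", "sex", "race"]
v = {
    frozenset(): 0.50,                        # baseline: predict the mean
    frozenset({"age"}): 0.65,
    frozenset({"sex"}): 0.55,
    frozenset({"race"}): 0.52,
    frozenset({"age", "sex"}): 0.70,
    frozenset({"age", "race"}): 0.68,
    frozenset({"sex", "race"}): 0.57,
    frozenset({"age", "sex", "race"}): 0.72,
}

def shapley(player, players, v):
    """Weighted average of `player`'s marginal contribution to every coalition."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(n):
        for coalition in combinations(others, k):
            s = frozenset(coalition)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v[s | {player}] - v[s])
    return total

phi = {p: shapley(p, players, v) for p in players}

# Efficiency property: the attributions sum to v(everyone) - v(no one).
assert abs(sum(phi.values()) - (v[frozenset(players)] - v[frozenset()])) < 1e-9
```

Note that this exact computation needs the value of every one of the 2^n coalitions, which is exactly the “re-run the world for each subset” problem from the question — it’s just cheap here because evaluating a model on a feature subset is cheap.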
Here’s a microecon Stack Exchange question that asks something similar to yours. The only non-stats answer says that a bank used Shapley values to determine capital allocation across investments. It sounds like they didn’t need a ‘time machine’: they had the realized performance of the investments, so they could simply evaluate what returns they would’ve gotten had they invested differently. But I haven’t read it thoroughly, so for all I know they stopped using it soon after, or had some other way to evaluate counterfactuals, etc.