Thanks for posting this.

Do we have an intuition for how to apply Shapley values in typical EA scenarios? For example:
• How much credit goes to donors, vs charity evaluators, vs object-level charities? (a toy sketch of this case follows the list)
• How much credit goes to charity founders/executives, vs other employees/contractors?
• How much credit goes to meta vs object-level organizations?
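To make the first bullet concrete, here is a minimal sketch (in Python) of the brute-force Shapley computation for a donor, a charity evaluator, and an object-level charity. The coalition values below are entirely made-up numbers chosen for illustration, not anything from the post; the point is just that once you've written down a value for every subset of actors, the credit split falls out mechanically.

```python
# Toy Shapley split among a donor, a charity evaluator, and an object-level
# charity. All coalition values are hypothetical, purely for illustration.
from itertools import permutations

players = ["donor", "evaluator", "charity"]

# Hypothetical impact (arbitrary units) if only the listed actors participate.
value = {
    frozenset(): 0,
    frozenset({"donor"}): 0,                    # money with nowhere good to go
    frozenset({"evaluator"}): 0,                # research nobody acts on
    frozenset({"charity"}): 4,                  # charity scrapes together some funding
    frozenset({"donor", "evaluator"}): 0,
    frozenset({"donor", "charity"}): 8,         # unguided giving
    frozenset({"evaluator", "charity"}): 4,
    frozenset({"donor", "evaluator", "charity"}): 10,  # well-targeted giving
}

# Shapley value = average marginal contribution over all orderings of players.
shapley = {p: 0.0 for p in players}
orderings = list(permutations(players))
for order in orderings:
    coalition = frozenset()
    for p in order:
        marginal = value[coalition | {p}] - value[coalition]
        shapley[p] += marginal / len(orderings)
        coalition = coalition | {p}

print(shapley)
```

With these made-up numbers the split comes out to roughly 2.67 for the donor, 0.67 for the evaluator, and 6.67 for the charity, which sums to the grand-coalition value of 10.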
Seconding this question, and I wanted to ask more broadly:
A big component/assumption of the example given is that we can “re-run” simulations of the world in which different combinations of actors were present to contribute, but this seems hard in practice. Do you know of any examples where Shapley values have been used in the “real world”, and how they’ve tackled the problem of evaluating these counterfactual worlds?
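To make concrete what “evaluating counterfactual worlds” cashes out to, here is a rough sketch of the usual workaround: rather than literally re-running the world, you write down a model v(coalition) → value and then approximate the Shapley average by sampling random orderings instead of enumerating every coalition. Everything here is a placeholder; in particular, simulate_impact and its diminishing-returns assumption are hypothetical stand-ins, and the whole difficulty the question points at lives inside that function.

```python
# Monte Carlo approximation of Shapley values. The value function is a
# hypothetical model of "the world with only these actors present".
import random

actors = [f"actor_{i}" for i in range(20)]

def simulate_impact(coalition):
    # Placeholder counterfactual model: impact grows with coalition size,
    # with diminishing returns. In practice this model (explicit simulation,
    # cost-effectiveness estimate, etc.) is the hard part.
    return len(coalition) ** 0.5

def monte_carlo_shapley(actors, value_fn, n_samples=2000, seed=0):
    """Estimate Shapley values by averaging marginal contributions over
    randomly sampled orderings instead of all n! of them."""
    rng = random.Random(seed)
    estimates = {a: 0.0 for a in actors}
    for _ in range(n_samples):
        order = actors[:]
        rng.shuffle(order)
        coalition = frozenset()
        prev_value = value_fn(coalition)
        for a in order:
            coalition = coalition | {a}
            new_value = value_fn(coalition)
            estimates[a] += (new_value - prev_value) / n_samples
            prev_value = new_value
    return estimates

print(monte_carlo_shapley(actors, simulate_impact))
```

With 20 interchangeable actors and v(S) = √|S|, each actor's estimate should land near √20 / 20 ≈ 0.22, i.e. an equal share of the grand-coalition value, as the symmetry of the setup predicts.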
(Also, great post! I’ve been meaning to learn about Shapley values for a while, and this intuitive example has proven very helpful!)