Do we have an intuition for how to apply Shapley values in typical EA scenarios, for example:
• How much credit goes to donors, vs charity evaluators, vs object level charities?
• How much credit goes to charity founders/executives, vs other employees/contractors?
• How much credit goes to meta vs object organizations?
Seconding this question, and wanted to ask more broadly:
A big component/assumption of the example given is that we can “re-run” simulations of the world in which different combinations of actors were present to contribute, but this seems hard in practice. Do you know of any examples where Shapley values have been used in the “real world” and how they’ve tackled this question of how to evaluate counterfactual worlds?
(Also, great post! I’ve been meaning to learn about Shapley values for a while, and this intuitive example has proven very helpful!)
The first real-world example that comes to mind… isn’t about agents bargaining, but about statistical models. The idea is that you have some subparts that each contribute to the prediction and want to know which are the most important, so you can calculate Shapley values (“how well does this model do if it only uses age and sex to predict life expectancy, but not race”, and so on for the other coalitions).
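To make the model-attribution use concrete: the Shapley value of a feature is its marginal contribution to accuracy, averaged over all coalitions of the other features. Here's a minimal brute-force sketch; the accuracy numbers for the hypothetical life-expectancy model are made up for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Brute-force Shapley values, given a characteristic function
    v(frozenset of players) -> value of that coalition."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # Sum weighted marginal contributions of i over every coalition S of the others.
        for k in range(len(others) + 1):
            for combo in combinations(others, k):
                S = frozenset(combo)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Made-up accuracies (e.g. R^2) of a life-expectancy model on feature subsets.
accuracy = {
    frozenset(): 0.0,
    frozenset({"age"}): 0.30,
    frozenset({"sex"}): 0.10,
    frozenset({"age", "sex"}): 0.35,
}
vals = shapley_values(["age", "sex"], lambda S: accuracy[S])
```

By the efficiency property, the two attributions sum to the full model's 0.35; this brute force is exponential in the number of features, which is why practical tools approximate it.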
Here’s a microeconomics Stack Exchange question that asks something similar to yours. The only non-stats answer states that a bank used Shapley values to determine capital allocation across investments. It sounds like they didn’t have a problem using a ‘time machine’, because they had the historical performance of the investments and so could simply evaluate what returns they would’ve gotten had they invested differently. But I haven’t read it thoroughly, so for all I know they stopped using it soon after, or had some other way to evaluate counterfactuals, etc.
If you could guesstimate the counterfactual, you could try giving rewards according to the Shapley value. It incentivizes contributing to the task (where the strength of the incentive is relative to your BATNA because you had the option of not participating—e.g. it’s pointless to pay you a $100 reward if you could’ve spent the time earning $200 at your day job). As for actually evaluating how good the counterfactuals would be in each case… well, let’s say I’m glad I’m not the one that has to do that work.
A central intuition of the Shapley value is that players with better BATNAs should be paid more. Of course, there are reasons why you might fundamentally disagree that this is “fair” in a different sense (perhaps you think it is immoral to give a rich guy more money for contributing the same amount just because he could’ve done more by himself), but I do claim that at least when it comes to incentives this is a sensible thing to do.
The Lightcone fundraiser posts mentioned that when setting Lighthaven prices, they shoot for charging half of the surplus produced by having the event run at Lighthaven. This is quite literally shooting for the Shapley value. You can try asking the people on the Lightcone team about details? It looks like their strategy is to just nicely ask the other party how much better they think Lighthaven is than their BATNA (this is all the other party info you need, as you can estimate your own costs for running the event). Of course, this breaks down when you can’t trust the other party to tell the truth, and becomes intractable when you have more than two parties.
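The two-party case makes the "half the surplus" claim easy to check by hand. A minimal sketch with made-up numbers, where both parties' standalone values (BATNAs) are normalized to zero:

```python
# Hypothetical two-party game: a venue ("venue") and an event organizer ("org").
# The $10,000 joint surplus is a made-up number for illustration.
v = {
    frozenset(): 0.0,
    frozenset({"venue"}): 0.0,  # venue alone: BATNA normalized to zero
    frozenset({"org"}): 0.0,    # organizer alone: likewise zero
    frozenset({"venue", "org"}): 10_000.0,  # surplus from running the event at the venue
}

# Shapley value of the venue: its marginal contribution averaged over the
# two possible orderings (venue joins first, venue joins second).
phi_venue = 0.5 * (v[frozenset({"venue"})] - v[frozenset()]) \
          + 0.5 * (v[frozenset({"venue", "org"})] - v[frozenset({"org"})])
```

With zero standalone values this gives each side exactly half the joint surplus; more generally, each party gets its standalone value plus half of the cooperative surplus, which is why a better BATNA translates into a larger share.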
Thanks for posting this.