Shapley values: Better than counterfactuals
[Epistemic status: Pretty confident. But also, enthusiasm on the verge of partisanship]
One intuitive function which assigns impact to agents is the counterfactual, which has the form:
CounterfactualImpact(Agent) = Value(World) - Value(World/Agent)
which reads “The impact of an agent is the difference between the value of the world with the agent and the value of the world without the agent”.
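In code, this is just a difference of two evaluations of a value function. A minimal sketch (the names here are mine; `v` stands for a hypothetical function from sets of agents to the value of the resulting world):

```python
def counterfactual_impact(v, all_agents, agent):
    """Value of the world with everyone acting, minus the value of the world
    in which everyone except `agent` acts."""
    return v(frozenset(all_agents)) - v(frozenset(all_agents) - {agent})
```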
It has been discussed in the effective altruism community that this function leads to pitfalls, paradoxes, or to unintuitive results when considering scenarios with multiple stakeholders. See:
In this post I’ll present some new and old examples in which the counterfactual function seems to fail, and show how, in each of them, I think a lesser-known function does better: the Shapley value, a concept from cooperative game theory which has also been brought up before in such discussions. In the first three examples, I’ll just present what the Shapley value outputs, and halfway through this post, I’ll use these examples to arrive at a definition.
I think that one of the main hindrances in the adoption of Shapley values is the difficulty in its calculation. To solve this, I have written a Shapley value calculator and made it available online: shapleyvalue.com. I encourage you to play around with it.
Example 1 & recap: Sometimes, the sum of the counterfactual impacts exceeds the total value.
Suppose there are three possible outcomes:
P has cost $2000 and gives 15 utility to the world.
Q has cost $1000 and gives 10 utility to the world.
R has cost $1000 and gives 10 utility to the world.
Suppose Alice and Bob each have $1000 to donate. Consider two scenarios:
Scenario 1: Both Alice and Bob give $1000 to P. The world gets 15 more utility. Both Alice and Bob are counterfactually responsible for giving 15 utility to the world.
Scenario 2: Alice gives $1000 to Q and Bob gives $1000 to R. The world gets 20 more utility. Both Alice and Bob are counterfactually responsible for giving 10 utility to the world.
From the world’s perspective, scenario 2 is better. However, from Alice and Bob’s individual perspective (if they are maximizing their own counterfactual impact), scenario 1 is better. This seems wrong, we’d want to somehow coordinate so that we achieve scenario 2 instead of scenario 1.
Source
Attribution: rohinmshah
In Scenario 1:
Counterfactual impact of Alice: 15 utility.
Counterfactual impact of Bob: 15 utility.
Sum of the counterfactual impacts: 30 utility. Total impact: 15 utility.
The Shapley value of Alice would be: 7.5 utility.
The Shapley value of Bob would be: 7.5 utility.
The sum of the Shapley values always adds up to the total impact, which is 15 utility.
In Scenario 2:
Counterfactual impact of Alice: 10 utility.
Counterfactual impact of Bob: 10 utility.
Sum of the counterfactual impacts: 20 utility. Total impact: 20 utility.
The Shapley value of Alice would be: 10 utility.
The Shapley value of Bob would be: 10 utility.
The sum of the Shapley values always adds up to the total impact, which is 10+10 utility = 20 utility.
In this case, if Alice and Bob were each individually optimizing for counterfactual impact, they’d end up with a total impact of 15. If each of them were individually optimizing for their Shapley value, they’d end up with a total impact of 20, which is higher.
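As an illustration, here is a minimal Python sketch (my own code, not the linked calculator’s) that reproduces these numbers by averaging each agent’s marginal contribution over the possible orderings:

```python
from itertools import permutations

def shapley(v, agents):
    """Shapley value of each agent: the marginal contribution v(S ∪ {i}) − v(S),
    averaged over all orderings in which the agents could be added."""
    totals = {a: 0.0 for a in agents}
    orderings = list(permutations(agents))
    for order in orderings:
        coalition = frozenset()
        for agent in order:
            totals[agent] += v(coalition | {agent}) - v(coalition)
            coalition = coalition | {agent}
    return {a: t / len(orderings) for a, t in totals.items()}

agents = ["Alice", "Bob"]
# Scenario 1: P needs both $1000 donations and yields 15 utility.
v1 = lambda s: 15 if s == frozenset(agents) else 0
# Scenario 2: Q and R each need one $1000 donation and yield 10 utility.
v2 = lambda s: 10 * len(s)

for name, v in [("Scenario 1", v1), ("Scenario 2", v2)]:
    counterfactual = {a: v(frozenset(agents)) - v(frozenset(agents) - {a}) for a in agents}
    print(name, counterfactual, shapley(v, agents))
# Scenario 1: counterfactuals of 15 each (summing to 30 > 15); Shapley values of 7.5 each (summing to 15).
# Scenario 2: counterfactuals and Shapley values of 10 each (summing to 20).
```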
It would seem that we could use a function such as
CounterfactualImpactModified = CounterfactualImpact / NumberOfStakeholders
to solve this particular problem. However, as the next example shows, that sometimes doesn’t work. The Shapley value, on the other hand, has the property that it always adds up to total value.
Property 1: The Shapley value always adds up to the total value.
Example 2: Sometimes, the sum of the counterfactuals is less than total value. Sometimes it’s 0.
Consider the invention of calculus by Newton and Leibniz at roughly the same time. If Newton hadn’t existed, Leibniz would still have invented it, and vice versa, so the counterfactual impact of each of them is 0. Thus, you can’t normalize like above.
The Shapley value doesn’t have that problem. It has the property that equal people have equal impact, which together with the requirement that it adds up to total value is enough to assign 1⁄2 of the total impact to each of Newton and Leibniz.
Interestingly, GiveWell has Iodine Global Network as a standout charity, but not as a recommended charity, because of considerations related to the above. If it were the case that, had IGN not existed, another organization would have taken its place, its counterfactual value would be 0, but its Shapley value would be 1⁄2 (of the impact of iodizing salt in developing countries).
Property 2: The Shapley value assigns equal value to equivalent agents.
Example 3: Order indifference.
Consider Scenario 1 from Example 1 again.
P has cost $2000 and gives 15 utility to the world.
Suppose Alice and Bob each have $1000 to donate. Both Alice and Bob give $1000 to P. The world gets 15 more utility. Both Alice and Bob are counterfactually responsible for giving 15 utility to the world.
Alice is now a pure counterfactual-impact maximizer, but something has gone wrong. She now views Bob adversarially. She thinks he’s a sucker, and she waits until Bob has donated to make her own donation. Because there are no worlds she considers in which Bob doesn’t donate before her, Alice assigns all 15 utility to herself, and 0 to Bob. Note that she isn’t exactly calculating the counterfactual impact, but something slightly different.
The Shapley value doesn’t consider any agent to be a sucker, doesn’t consider any variables to be in the background, and doesn’t care whether people try to donate strategically before or after someone else. Here is a perhaps more familiar example:
Scenario 1:
Suppose that the Indian government creates some big and expensive infrastructure to vaccinate people, but people don’t use it. Suppose an NGO then comes in and sends reminders to people to vaccinate their children, and some end up going.
Scenario 2:
Suppose that an NGO could be sending reminders to people to vaccinate their children, but it doesn’t, because the vaccination infrastructure is nonexistent, so there would be no point. Then, the government steps in, and creates the needed infrastructure, and vaccination reminders are sent.
Again, it’s tempting to say that in the first scenario, the NGO gets all the impact, and in the second scenario the government gets all the impact, perhaps because we take either the NGO or the Indian government to be in the background. To repeat, the Shapley value doesn’t differentiate between the two scenarios, and doesn’t leave variables in the background. For how this works numerically, see the examples below.
Property 3: The Shapley value doesn’t care about who comes first.
The Shapley value is uniquely determined by simple properties.
These properties:
Property 1: Sum of the values adds up to the total value (Efficiency)
Property 2: Equal agents have equal value (Symmetry)
Property 3: Order indifference: it doesn’t matter which order you go in (Linearity). Or, in other words, if there are two steps, Value(Step1 + Step2) = Value(Step1) + Value(Step2).
And an extra property:
Property 4: Null-player (if in every world, adding a person to the world has no impact, the person has no impact). You can either take this as an axiom, or derive it from the first three properties.
are enough to force the Shapley value function to take the form it takes:
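In standard notation, that function is

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \,(|N| - |S| - 1)!}{|N|!} \,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$

where N is the set of all agents and v(S) is the value produced by the coalition S. Equivalently, it is agent i’s marginal contribution, v(S ∪ {i}) − v(S), averaged over all orderings in which the agents could have been added.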
At this point, the reader may want to consult Wikipedia to familiarize themselves with the mathematical formalism, or, for a book-length treatment, The Shapley value: Essays in honor of Lloyd S. Shapley. Ultimately, a quick way to understand it is as “the function uniquely determined by the properties above”.
I suspect that order indifference will be the most controversial of these properties. Intuitively, it prevents stakeholders from adversarially choosing to collaborate earlier or later in order to assign themselves more impact.
Note that in the case of only one agent the Shapley value reduces to the counterfactual function, and that the Shapley value uses many counterfactual comparisons in its formula. It sometimes just reduces to CounterfactualValue/ NumberOfStakeholders (though it sometimes doesn’t). Thus, the Shapley value might be best understood as an extension of counterfactuals, rather than as something completely alien.
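To make the connection concrete: with a single agent A, the formula above reduces to the plain counterfactual, v({A}) − v(∅). With two agents A and B it is an average of two counterfactual comparisons, which is the form used in several of the worked examples below:

$$\phi_A = \tfrac{1}{2}\bigl(v(\{A\}) - v(\varnothing)\bigr) + \tfrac{1}{2}\bigl(v(\{A,B\}) - v(\{B\})\bigr)$$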
Example 4: The Shapley value can also deal with leveraging
Organisations can leverage funds from other actors into a particular project. Suppose that AMF will spend $1m on a net distribution. As a result of AMF’s commitment, the Gates Foundation contributes $400,000. If AMF had not acted, Gates would have spent the $400,000 on something else. Therefore, the counterfactual impact of AMF’s work is:
AMF’s own $1m on bednets plus Gates’ $400,000 on bednets minus the benefits of what Gates would otherwise have spent their $400,000 on.
If Gates would otherwise have spent the money on something worse than bednets, then the leveraging is beneficial; if they would otherwise have spent it on something better than bednets, the leveraging reduces the benefit produced by AMF.
Source: The counterfactual impact of agents acting in concert.
Let’s consider the case in which the Gates Foundation would otherwise have spent their $400,000 on something half as valuable.
Then the counterfactual impact of the AMF is $1,000,000 + $400,000 - ($400,000)*0.5 = $1.2m.
The counterfactual impact of the Gates Foundation is $400,000.
And the sum of the counterfactual impacts is $1.6m, which exceeds the total impact, which is $1.4m.
The Shapley value of the AMF is $1.1m.
The Shapley value of the Gates Foundation is $300,000.
Thus, the Shapley value assigns to the AMF part, but not all, of the impact of the Gates Foundation’s donation. It takes their outside options into account when doing so: if the Gates Foundation would have invested in something equally valuable, the AMF wouldn’t get anything from that.
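A minimal sketch of that calculation, with the coalition values written down from my reading of the example (in millions of dollars’ worth of impact):

```python
# Coalition values in $m of impact, as I read the leveraging example.
v = {
    frozenset(): 0.0,
    frozenset({"AMF"}): 1.0,            # AMF's $1m on bednets; Gates' alternative spending not counted here
    frozenset({"Gates"}): 0.2,          # Gates' $400k on something half as valuable
    frozenset({"AMF", "Gates"}): 1.4,   # $1m + $400k, all on bednets
}
N = frozenset({"AMF", "Gates"})

counterfactual = {i: v[N] - v[N - {i}] for i in N}  # AMF ≈ 1.2, Gates ≈ 0.4
shapley = {
    "AMF": 0.5 * v[frozenset({"AMF"})] + 0.5 * (v[N] - v[frozenset({"Gates"})]),    # ≈ 1.1
    "Gates": 0.5 * v[frozenset({"Gates"})] + 0.5 * (v[N] - v[frozenset({"AMF"})]),  # ≈ 0.3
}
print(counterfactual, shapley)
```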
Example 5: The Shapley value can also deal with funging
Suppose again that AMF commits $1m to a net distribution. But if AMF had put nothing in, DFID would instead have committed $500,000 to the net distribution. In this case, AMF funges with DFID. AMF’s counterfactual impact is therefore:
AMF’s own $1m on bednets minus the $500,000 that DFID would have put in plus the benefits of what DFID in fact spent their $500,000 on.
Source
Suppose that the DFID spends their money on something half as valuable.
The counterfactual impact of the AMF is $1m - $500,000 + ($500,000)*0.5 = $750,000.
The counterfactual impact of DFID is $250,000.
The sum of their counterfactual impacts is $1m; lower than the total impact, which is $1,250,000.
The Shapley value of the AMF is, in this case, $875,000.
The Shapley value of the DFID is $375,000.
The AMF is penalized: even though it paid $1,000,000, its Shapley value is less than that. The DFID’s Shapley-impact is increased, because it could have invested its money in something more valuable, if the AMF hadn’t intervened.
For a perhaps cleaner example, consider the case in which the DFID’s counterfactual impact is $0: It can’t use the money except to distribute nets, and the AMF got there first. In that scenario:
The counterfactual impact of the AMF is $500,000.
The counterfactual impact of DFID is $0.
The sum of their counterfactual impacts is $500,000. This is lower than the total impact, which is $1,000,000.
The Shapley value of the AMF is $750,000.
The Shapley value of the DFID is $250,000.
The AMF is penalized: even though it paid $1,000,000, its Shapley value is less than that. The DFID shares some of the impact, because it could have distributed the nets itself.
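A sketch of this cleaner case, with the coalition values as I read them (in $m):

```python
# Coalition values in $m of impact: DFID's $500k is only useful for nets.
v = {
    frozenset(): 0.0,
    frozenset({"AMF"}): 1.0,           # AMF distributes $1m of nets; DFID's money goes unused
    frozenset({"DFID"}): 0.5,          # DFID distributes $500k of nets
    frozenset({"AMF", "DFID"}): 1.0,   # AMF got there first; DFID adds nothing
}
N = frozenset({"AMF", "DFID"})

counterfactual = {i: v[N] - v[N - {i}] for i in N}  # AMF: 0.5, DFID: 0.0
shapley = {
    "AMF": 0.5 * v[frozenset({"AMF"})] + 0.5 * (v[N] - v[frozenset({"DFID"})]),   # 0.75
    "DFID": 0.5 * v[frozenset({"DFID"})] + 0.5 * (v[N] - v[frozenset({"AMF"})]),  # 0.25
}
print(counterfactual, shapley)
```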
Example 6: The counterfactual value doesn’t deal correctly with tragedy-of-the-commons scenarios.
Imagine a scenario in which many people could replicate the GPT-2 model and make it freely available, but the damage is already done once the first person does it. Imagine that 10 people end up doing it, and that the damage done is something big, like −10 million utility.
Then the counterfactual damage done by each person would be 0, because the other nine would have done it regardless.
The Shapley value deals with this by assigning an impact of −1 million utility to each person: the ten are interchangeable, so by symmetry and efficiency each gets a tenth of the −10 million.
Example 7: Hiring in EA
Suppose that there was a position in an EA org, for which there were 6 qualified applicants who were otherwise “idle”. In arbitrary units, the person in that position in that organization can produce an impact of 100 utility.
The counterfactual impact of the organization is 100.
The counterfactual impact of any one applicant is 0.
The Shapley value of the organization is 85.71.
The Shapley value of any one applicant is 2.38.
As there are more applicants, the value skews more in favor of the organization, and the opposite happens with fewer applicants. If there were instead only 3 applicants, the values would be 75 and 8.33, respectively. If there were only 2 applicants, the Shapley value of the organization would be 66.66, and that of each applicant 16.66. With one applicant and one organization, the impact is split 50⁄50.
In general I suspect, but haven’t proved, that if there are n otherwise idle applicants, the fraction of the value assigned to the organization is n/(n+1). This suggests that a lot of the impact of the position goes to whoever created the position.
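A sketch of the calculation behind these numbers, assuming the game described above: 100 utility is produced exactly when the organization and at least one applicant are present, and idle applicants produce nothing on their own.

```python
from itertools import permutations

def hiring_shapley(n_applicants):
    """Shapley values for one organization plus n interchangeable, otherwise idle applicants."""
    players = ["org"] + [f"applicant_{k}" for k in range(n_applicants)]
    v = lambda s: 100 if "org" in s and len(s) >= 2 else 0
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: t / len(orders) for p, t in totals.items()}

for n in [1, 2, 3, 6]:
    values = hiring_shapley(n)
    print(n, round(values["org"], 2), round(values["applicant_0"], 2))
# With 6 applicants the organization gets 85.71 and each applicant 2.38; in general
# the organization gets a fraction n/(n+1) of the 100 utility.
```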
Example 8: The Shapley value makes the price of a life rise with the number of stakeholders.
Key:
Shapley value—counterfactual value / counterfactual impact
Shapley price—counterfactual price. The amount of money needed to be counterfactually responsible for 1 unit of X / The amount of money needed for your Shapley value to be 1 unit of X.
Shapley cost-effectiveness—counterfactual cost-effectiveness.
Suppose that, in order to save a life, 4 agents have to be there: AMF to save a life, GiveWell to research them, Peter Singer to popularize them, and a person to donate $5000. Then the counterfactual impact of the donation would be 1 life, but its Shapley value would be 1/4th. Or, in other words, the Shapley cost of saving a life through a donation is four times higher than the counterfactual cost.
Why is this? Well, suppose that, to save a life, each of the organizations spent $5000. Because all of them are necessary, the counterfactual cost of a life is $5000 for any of the stakeholders. But if you wanted to save an additional life, the amount of money that would have to be spent is $5000*4 = $20,000, because someone would have to go through the four necessary steps.
If, instead of 4 agents there were 100 agents involved, then the counterfactual price stays the same, but the Shapley price rises to 100x the counterfactual price. In general, I’ve said “AMF”, or “GiveWell”, as if they each were only one agent, but that isn’t necessarily the case, so the Shapley price (of saving a life) might potentially be even higher.
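A small sketch of that relationship (the function name is mine; it assumes n symmetric stakeholders who are all necessary, so each gets 1/n of the credit):

```python
def shapley_price(counterfactual_price, n_necessary_stakeholders):
    """Money needed for your Shapley value to equal one unit (e.g. one life saved),
    when n symmetric stakeholders are all necessary for that unit."""
    return counterfactual_price * n_necessary_stakeholders

print(shapley_price(5_000, 4))    # $20,000 per life
print(shapley_price(5_000, 100))  # $500,000 per life
```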
This is a problem because if agents are reporting their cost-effectiveness in terms of counterfactuals, and one agent switches to consider their cost-effectiveness in terms of Shapley values, their cost effectiveness will look worse.
This is also a problem if organizations are reporting their cost-effectiveness in terms of counterfactuals, but in some areas there are 100 necessary stakeholders, and in other areas there are four.
Shapley value and cost effectiveness.
We care not only about impact, but also about cost-effectiveness. Let us continue with the example in which an NGO sends vaccination reminders, and attach some numbers.
Let’s say that a small Indian state with 10 million inhabitants spends $60 million to vaccinate 30% of its population. An NGO which would otherwise be doing something really ineffective (we’ll come back to this) comes in and, by sending reminders, increases the vaccination rate to 35%. It does this very cheaply, for $100,000.
The Shapley value of the Indian government would be 32.5%, or 3.25 million people vaccinated.
The Shapley value of the small NGO would be 2.5%, or 0.25 million people vaccinated.
Dividing this by the amount of money spent:
The cost-effectiveness in terms of the Shapley value of the Indian government would be $60 million / 3.25 million vaccinations = $18.46/vaccination.
The cost-effectiveness in terms of the Shapley value of the NGO would be $100,000 / 250,000 vaccinations = $0.4/vaccination.
So even though the NGO’s Shapley value is smaller, its cost-effectiveness is higher, as one might expect.
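For the two-party version of this example, a sketch of the calculation (vaccinations in millions of people, with the coalition values taken from the numbers above):

```python
# Vaccinations, in millions of people.
v = {
    frozenset(): 0.0,
    frozenset({"gov"}): 3.0,          # government alone: 30% of 10 million
    frozenset({"ngo"}): 0.0,          # reminders without infrastructure achieve nothing
    frozenset({"gov", "ngo"}): 3.5,   # together: 35% of 10 million
}
N = frozenset({"gov", "ngo"})

shapley = {
    "gov": 0.5 * v[frozenset({"gov"})] + 0.5 * (v[N] - v[frozenset({"ngo"})]),  # 3.25
    "ngo": 0.5 * v[frozenset({"ngo"})] + 0.5 * (v[N] - v[frozenset({"gov"})]),  # 0.25
}
spending = {"gov": 60_000_000, "ngo": 100_000}
cost_per_vaccination = {a: spending[a] / (shapley[a] * 1_000_000) for a in N}
print(shapley, cost_per_vaccination)  # gov ≈ $18.46/vaccination, ngo ≈ $0.40/vaccination
```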
If the outside option of the NGO were something which has a similar impact to vaccinating 250,000 people, we’re back at the funging/leveraging scenario: because the NGO’s outside option is better, its Shapley value rises.
Cost effectiveness in terms of Shapley value changes when considering different groupings of agents.
Continuing with the same example, consider that, instead of the abstract “Indian government” as a homogeneous whole, there are different subagents which are all necessary to vaccinate people. Consider: The Central Indian Government, the Ministry of Finance, the Ministry of Health and Family Welfare, and within any one particular state: the State’s Council of Ministers, the Finance Department, the Department of Medical Health and Family Welfare, etc. And within each of them there are sub-agencies, and sub-subagencies.
In the end, suppose that there are 10 organizations which are needed for the vaccine to be delivered, for a nurse to be there, for a hospital or a similar building to be available, and for there to be money to pay for all of it. For simplicity, suppose that the budget of each of those organizations is the same: $60 million / 10 = $6 million. Then the Shapley cost-effectiveness is different:
The Shapley value of each governmental organization would be 1⁄10 * (3 million + 10⁄11 * 0.5 million) = 345,454 people vaccinated.
The Shapley value of the NGO would be 1⁄11 * 500,000 = 45,454 people vaccinated.
The cost effectiveness of each governmental organization would be ($6 million)/(345,454 vaccinations) = $17 / vaccination.
The cost effectiveness of the NGO would be $100,000 / 45,454 vaccinations = $2.2 / vaccination.
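A sketch of the calculation under this grouping, using the subset form of the Shapley formula rather than enumerating all 11! orderings (the model being that 3 million vaccinations happen when all 10 government organizations are present, and the NGO’s extra 0.5 million only materialize on top of that):

```python
from itertools import combinations
from math import factorial

players = [f"gov_{k}" for k in range(10)] + ["ngo"]
n = len(players)

def v(coalition):
    """Vaccinations: 3 million if all 10 government organizations are present,
    plus 0.5 million more if the NGO is present as well."""
    govs = sum(1 for p in coalition if p.startswith("gov"))
    if govs < 10:
        return 0
    return 3_000_000 + (500_000 if "ngo" in coalition else 0)

def shapley(i):
    others = [p for p in players if p != i]
    total = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            s = set(subset)
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (v(s | {i}) - v(s))
    return total

print(shapley("ngo"), shapley("gov_0"))  # ≈ 45,454 and ≈ 345,454 vaccinations
# Cost-effectiveness: $100,000 / 45,454 ≈ $2.2 and $6,000,000 / 345,454 ≈ $17.4 per vaccination.
```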
That’s interesting. These concrete numbers are all made up, but they’re inspired by reality and “plausible”, and I was expecting the result to be that the NGO would be less cost-effective than a government agency. It’s curious to see that, in this concrete example, the NGO seems to be robustly more cost-efficient than the government under different groupings. I suspect that something similar is going on with 80,000 Hours.
Better optimize Shapley.
If each agent individually maximizes their counterfactual impact per dollar, we get suboptimal results, as we have seen above. In particular, consider a toy world in which twenty people can either:
Each be an indispensable part of a project which has a value of 100 utility, for a total impact of 100 utility
Each undertake, by themselves, a project which has 10 utility, for a total impact of 200 utility.
Then if each person were optimizing for counterfactual impact, they would all choose the first option, for a lower total impact. If they were optimizing for their Shapley value, they’d choose the second option.
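Working that toy example out (under the assumption that the joint project pays 100 only when all twenty participate):

```python
n = 20

# Joint project: worth 100 only if all twenty take part.
counterfactual_joint = 100   # if any single person drops out, the whole project fails
shapley_joint = 100 / n      # symmetry + efficiency: 5 per person

# Solo projects: each person's own project is worth 10.
counterfactual_solo = shapley_solo = 10

# Counterfactual maximizers all join the joint project (100 > 10): total value 100.
# Shapley maximizers all do their solo projects (10 > 5): total value 200.
print(counterfactual_joint, shapley_joint, counterfactual_solo, shapley_solo)
```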
Can we make a more general statement? Yes. Agents individually optimizing for cost-effectiveness in terms of Shapley value globally optimize for total cost-effectiveness.
Informal proof: Consider the case in which agents have constant budgets and can divide them between different projects as they like. Then, consider the case in which each $1 is an agent: projects with higher Shapley value per dollar get funded first, then those with less impact per dollar, etc. Total cost-effectiveness is maximized. Because of order indifference, both cases produce the same distribution of resources. Thus, agents individually optimizing for cost effectiveness in terms of Shapley-value globally optimize for total cost-effectiveness.
Note: Thinking in terms of marginal cost-effectiveness doesn’t change this conclusion. Thinking in terms of time/units other than money probably doesn’t change the conclusion.
Am I bean counting?
I don’t have a good answer to that question.
Conclusion
The counterfactual impact function is well defined, but it fails to meet my expectations of what an impact function ought to do when considering scenarios with multiple stakeholders.
On the other hand, the Shapley value function flows from some very general and simple properties, and can deal with the examples in which the counterfactual function fails. Thus, instead of optimizing for counterfactual impact, it seems to me that optimizing for Shapley value is less wrong.
Finally, because the Shapley value is not pretty to calculate by hand, here is a calculator.
Question: Is there a scenario in which the Shapley value assigns impacts which are clearly nonsensical, but with which the counterfactual value, or a third function, deals correctly?
Addendum: The Shapley value is not easily computable.
For large numbers of players, the Shapley value will not be computationally tractable (though approximations might be pretty good); work on the topic has been done in the area of interpreting machine learning results. See, for example:
This was a very simple example that we’ve been able to compute analytically, but these won’t be possible in real applications, in which we will need the approximated solution by the algorithm. Source: https://towardsdatascience.com/understanding-how-ime-shapley-values-explains-predictions-d75c0fceca5a
Or
The Shapley value requires a lot of computing time. In 99.9% of real-world problems, only the approximate solution is feasible. An exact computation of the Shapley value is computationally expensive because there are 2^k possible coalitions of the feature values and the “absence” of a feature has to be simulated by drawing random instances, which increases the variance for the estimate of the Shapley values estimation. The exponential number of the coalitions is dealt with by sampling coalitions and limiting the number of iterations M. Decreasing M reduces computation time, but increases the variance of the Shapley value. There is no good rule of thumb for the number of iterations M. M should be large enough to accurately estimate the Shapley values, but small enough to complete the computation in a reasonable time. It should be possible to choose M based on Chernoff bounds, but I have not seen any paper on doing this for Shapley values for machine learning predictions. Source: https://christophm.github.io/interpretable-ml-book/shapley.html#disadvantages-13
That being said, here is a nontrivial example:
Foundations and projects.
Suppose that within the EA community, Open Philanthropy, a foundation whose existence I appreciate, has the opportunity to fund 250 out of 500 projects every year. Say that you also have 10 smaller foundations: Foundation1, ..., Foundation10, each of which can afford to fund 20 projects, that there aren’t any more sources of funding, and that each project costs the same.
On the other hand, we will also consider the situation in which OpenPhil is a monopoly; in the end, perhaps all these other foundations and centers might be founded by Open Philanthropy itself. In that case, assume that OpenPhil has the opportunity to fund 450 projects out of 500, and that there are no other funding sources in the EA community.
Additionally, we could model the distribution of projects with respect to how much good they do in the world by ordering all projects from 1 to 500, and saying that:
Impact1 of the k-th project = I1(k) = 0.99^k.
Impact2 of the k-th project = I2(k) = 2/k^2 (a power law).
With that in mind, here are our results for the different assumptions. Power index = Shapley(OP) / Total impact.

| Monopoly? | Impact measure | Total impact | Shapley(OP) | Power index |
|---|---|---|---|---|
| No | I(k) = 0.99^k | 97.92 | 7.72 | 7.89% |
| No | I(k) = 2/k^2 | 3.29 | 0.028 | 0.86% |
| Yes | I(k) = 0.99^k | 97.92 | 48.96 | 50% |
| Yes | I(k) = 2/k^2 | 3.29 | 1.64 | 50% |
For a version of this table which has counterfactual impact as well, see here.
The above took some time, and required me to beat the formula for the Shapley value into being computationally tractable for this particular case (see here for some maths, which as far as I’m aware, are original, and here for some code).
While I think the Shapley value can be useful, there are clearly cases where the counterfactual value is superior for an agent deciding what to do. Derek Parfit clearly explains this in Five Mistakes in Moral Mathematics. He is arguing against the ‘share of the total’ view, but at least some of the arguments apply to the Shapley value too (which is basically an improved version of ‘share of the total’). In particular, the best things you have listed in favour of the Shapley value, applied to making a moral decision, correctly apply when you and others are all making the decision ‘together’. If the others have already committed to their part in a decision, the counterfactual value approach looks better.
e.g. on your first example, if the other party has already paid their $1000 to P, you face a choice between creating 15 units of value by funding P or 10 units by funding the alternative. Simple application of Shapley value says you should do the action that creates 10 units, predictably making the world worse.
One might be able to get the best of both methods here if you treat cases like this where another agent has already committed to a known choice as part of the environment when calculating Shapley values. But you need to be clear about this. I consider this kind of approach to be a hybrid of the Shapley and counterfactual value approaches, with Shapley only being applied when the other agents’ decisions are still ‘live’. As another example, consider your first example and add the assumption that the other party hasn’t yet decided, but that you know they love charity P and will donate to it for family reasons. In that case, the other party’s decision, while not yet made, is not ‘live’ in the relevant sense and you should support P as well.
If you are going to pursue what the community could gain from considering Shapley values, then look into cases like this and subtleties of applying the Shapley value further — and do read that Parfit piece.
Sorry to revive a dead comment just to argue, but I’m going to disagree about the claims made here for most of what EA as a movement does, even if it’s completely right in many narrowly defined cases.
In most cases where we see that other funders have committed their funds before we arrive, you say that we should view it counterfactually. I think this is probably myopic. EA is a large funder, and this is an iterated dilemma—other actors are ‘live’ in the relevant sense, and will change their strategies based on knowing our decisions. The cooperative and overall better solution, if we can get other actors to participate in this pareto-improving change in strategy, is to explicitly cooperate, or at least embrace a decision theory that lets us do so.
(See the discussion here that pointed me back to this comment, where I make a similar argument. And in the post, I point to where GiveWell is actively using counterfactual reasoning when the other decisions are most certainly ‘live’, because again, it’s an iterated game, and the other funders have already said they are adjusting their funding levels to account for the funding that EA provides.)
I think the reason summing the counterfactual impact of multiple people leads to weird results is not a problem with counterfactual impact but with how you are summing it. Adding together each individual’s counterfactual impact means adding up the differences between world A, where they both act, and worlds B and C, where only one of them acts. In your calculus, you then assume this is the same as the difference between world A and world D, where nobody acts.
The true issue in maximising counterfactual impact seems to arise when actors act cooperatively but think of their actions as individuals. When acting cooperatively you should compare your counterfactuals to world D; when acting individually, to world B or C.
The Shapley value is not immune to error either. I can see three ways it could lead to poor decision-making:
For the vaccine reminder example, it seems stranger to me to attribute impact to people who would otherwise have no impact. We then get the same double-counting problem, or in this case infinite dividing, which is worse, as it can dissuade you from high-impact options. If I am not mistaken, then in this case the Shapley value is divided between the NGO, the government, the doctor, the nurse, the people driving logistics, the person who built the roads, the person who trained the doctor, the person who made the phones, the person who set up the phone network and the person who invented electricity. In that case, everyone is attributed a tiny fraction of the impact, when only the vaccine reminder intentionally caused it. Depending on the scope of other actors we consider, this could massively reduce the impact attributed to the action.
Example 6 reveals another flaw, as attributing impact this way can lead you to make poor decisions. If you use the Shapley value, then when examining whether to leak the information as the 10th person, you see that the action costs −1 million utility. If I were offered 500,000 utils to share it, then under Shapley I should not do so, as 500,000 - 1M is negative. However, this thinking will just prevent me from increasing overall utility by 500,000.
In Example 7, the counterfactual impact of the applicant who gets the job is not 0 but the impact of the job the lowest-impact person gets. Imagine each applicant could earn to give 2 utility and only has time for one job application. When considering counterfactual impact, the first applicant chooses to apply to the EA org and gets attributed 100 utility (as does the EA org). The other applicants now enter the space and decide to earn to give, as this has a higher counterfactual impact. They decrease the first applicant’s counterfactual utility to 2 but increase overall utility. If we use Shapley instead, then all applicants would apply for the EA org, as this gives them a value of 2.38 instead of 2.
I may have misunderstood Shapley here, so feel free to correct me. Overall I enjoyed the post and think it is well worth reading. Criticism of the underlying assumptions of many EAs’ decision-making methods is very valuable.
1.
I have thought about this, and I’m actually biting the bullet. I think that a lot of people get impact for a lot of things, and that even smallish projects depend on a lot of other moving parts, in the direction of You didn’t build that.
I don’t agree with some of your examples when taken literally, but I agree with the nuanced thing you’re pointing at with them, e.g., building good roads seems very valuable precisely because it helps other projects, if there is high nurse absenteeism then the nurses who show up take some of the impact...
I think that if you divide the thing’s impact by, say, 10, the ordering of the things according to impact remains the same, so this shouldn’t dissuade people from doing high impact things. The interesting thing is that some divisors will be greater than others, and thus the ordering will be changed. I claim that this says something interesting.
2.
Not really. If 10 people have already done it, your Shapley value will be positive if you take that bargain. If the thing hasn’t been done yet, you can’t convince 10 Shapley-optimizing altruists to do the thing for 0.5m each, but you might convince 10 counterfactual impact optimizers. As @casebach mentioned, this may have problems when dealing with uncertainty (for example: what if you’re pretty sure that someone is going to do it?).
3.
You’re right. The example, however, specified that the EAs were to be “otherwise idle”, to simplify calculations.
The order indifference of Shapley values only makes sense from a perspective where there is perfect knowledge of what other players will do. If you don’t have that, then a party that spent a huge amount of money on a project that was almost certainly going to be wasteful, and that was only saved because by sheer happenstance another party appeared to rescue it, was not making good spending decisions. Similarly, many agents won’t be optimising for Shapley value (say, a government which spends money on infrastructure just to win political points, not caring whether it’ll be used or not), so they don’t properly deserve a share of the gains when someone else intervenes with notifications to make the project actually effective.
I feel that this article presents the Shapley value as just plain superior, when instead a combination of both Shapley value and counterfactual value will likely be a better metric. Beyond this, what you really want to use is something more like FDT, where you take into account the fact that the decisions of some agents are subjunctively linked to you and that the decisions of some other agents aren’t. Even though my current theory is that very, very few agents are actually subjunctively linked to you, I suspect that thinking about problems in this fashion is likely to work reasonably well in practice (I would need to dedicate a solid couple of hours in order to be able to write out my reasons for believing this more concretely).
Hey Chris! It was nice seeing you at the EA Hotel, and I’m glad we could talk about this. I’m writing down some of my notes from our conversations. Is there anything I’ve forgotten, or which you’d like to add?
a. What are you using Shapley values / counterfactual values for?
You might want to use different tools depending on what your goal is; three different goals might be: Coordination / Analysis / Reward / Award.
For example, you might want a function which is easier to understand when announcing an award. If you’re rewarding a behavior, you might want to make sure you’re incentivizing the right thing.
b. The problem of choosing who to count is more complicated than I originally thought, and you should in fact exclude some agents from your calculations.
The example of: “If a bus driver falls off a cliff and Superman rescues them and brings them safely to their destination, earlier, the bus driver gets half the credit” is silly, but made the thing really crisp for me.
Hearing that, we then thought that:
Yes, the driver gets half the credit under Shapley values, but the same value as Superman under counterfactual value.
(also, if the driver distracts Superman from saving a different bus, then the driver gets 0 or negative value in both cases)
(if the driver was intelligent enough to know that Superman wasn’t doing anything important, he might actually get half the credit, but only for getting there earlier. In this scenario, had there been no Superman, the driver wouldn’t have fallen off the cliff.)
(if the driver was a paperclip maximizer who didn’t know that Superman was going to be around, then Superman should take all the credit).
So the answer would seem to be something like: counting only over people who are broadly similar to you?
Who are optimizing over the same thing, or whose decisions can be changed because of yours? It seems like this is more of a case of causal, rather than subjunctive dependence.
c. Shapley values and uncertainty
How do SVs deal with uncertainty? Can you do expected value over SVs? [Yes, you can]. For example, if you have a 1% chance of a SV of 100, you can say that the E[SV] = 1. Even though the SV formalism is more complicated than the counterfactual, it still works elegantly / is well-defined, etc.
Fair point re: uncertainty. The situation seems pretty symmetric, though: if a politician builds roads just to get votes, and an NGO steps in and does something valuable with that, the politician’s counterfactual impact is still the same as the NGO’s, so both the Shapley value and counterfactuals have that problem (?). Maybe one can exclude agents according to how close their goals are to yours, e.g., totally exclude a paperclip maximizer from both counterfactual and Shapley value calculations, and apply order indifference to allies only (?). This is something I haven’t thought about; thanks for pointing it out.
Fair point re: epistemic status. Changed my epistemic status.
“The situation seems pretty symmetric, though: if a politician builds roads just to get votes, and an NGO steps in and does something valuable with that, the politician’s counterfactual impact is still the same as the NGO’s”—true, but the NGO’s counterfactual impact is reduced when I feel it’s fairer for the NGO to be able to claim the full amount (though of course you’d never know the government’s true motivations in real life)
I like this angle! It seems useful to compare the Shapley value in this domain to the Banzhaf value. (Brief, dense description: If Shapley value attributes value to pivotal actors during the sequential process of coalition formation (averaged across all permutations of coalition formation orderings), Banzhaf value attributes value to critical actors without which any given coalition would fail. See Shapley-Shubik power index and Banzhaf power index for similar concepts in a slightly different context.)
This paper has a nice table of properties:
(“Additivity” is the same as “linearity” here.)
Focusing on just the properties where they differ:
Efficiency: I’ve sometimes seen this called “full allocation” which is suggestive. It’s basically just whether the full value of the coalition is apportioned to actors of the coalition or if some of it is leftover.
2-Efficiency: “The 2-Efficiency property states that the allocation rule that satisfies it is immune against artificial merging or splitting of players.”
Total power: “The Total power property establishes that the total payoff obtained for the players is the sum of all marginal contributions of every player normalized by 2^(n−1).”
I’d have to think about this more carefully, but it’s not immediately obvious to me which set of properties is better for the purpose at hand.
Is it possible to use Banzhaf values for generic attribution questions outside of voting? If so, can you link to some posts/papers that describe how to use it in such cases? The first set of things that came up are all voting-related.
Unless I’m very confused, yes. Unfortunately, it does seem that almost all of the discussion of it is pretty theoretical and about various axiomatic characterizations. Here’s an interesting application paper I found though: The Shapley and Banzhaf values in microarray games. They have a short description of their use of the Banzhaf value (equation 2)---not sure how helpful it is.
Example 7 seems wild to me. If the applicants who don’t get the job also get some of the value, does that mean people are constantly collecting Shapley value from the world, just because they “could” have done a thing (even if they do absolutely nothing)? If there are an infinite number of cooperative games going on in the world and someone can plausibly contribute at least a unit of value to any one of them, then it seems like their total Shapley value across all games is infinite, and at that point it seems like they are as good as one can be, all without having done anything. I can’t tell if I’m making some sort of error here or if this is just how the Shapley value works.
Presumably everything adds up to normality? Like you have a high numerator but also a high denominator.
(But this is mostly a drive-by comment, I don’t really understand Shapleys)
What numerator and denominator? I am imagining that a single person could be a player in multiple cooperative games. The Shapley value for the person would be finite in each game, but if there are infinitely many games, the sum of all the Shapley values (adding across all games, not adding across all players in a single game) could be infinite.
Hmm, I would guess that the number of realistic cooperative games in the world grows ~linearly (or some approximation[1]) with the number of people in the world, hence the denominator.
[1] I suppose if you think the growth is highly superlinear and there are ~infinity people, then Shapley values can grow to be ~infinite? But this feels like a general problem with infinities and not specific to Shapleys.
I asked my question because the problem with infinities seems unique to Shapley values (e.g. I don’t have this same confusion about the concept of “marginal value added”). Even with a small population, the number of cooperative games seems infinite: for example, there are an infinite number of mathematical theorems that could be proven, an infinite number of Wikipedia articles that could be written, an infinite number of films that could be made, etc. If we just use “marginal value added”, the total value any single person adds is finite across all such cooperative games because in the actual world, they can only do finitely many things. But the Shapley value doesn’t look at just the “actual world”, it seems to look at all possible sequences of ways of adding people to the grand coalition and then averages the value, so people get non-zero Shapley value assigned to them even if they didn’t do anything in the “actual world”.
(There’s maybe some sort of “compactness” argument one could make that even if there are infinitely many games, in the real world only finitely many of them get played to completion and so this should restrict the total Shapley value any single person can get, but I’m just trying to go by the official definition for now.)
I agree that this is unintuitive. Personally, the part of it that I like less is that it feels like people could cheat it by standing in line.
But they can’t cheat it! See this example: <http://shapleyvalue.com/?example=4>. You can’t even cheat by noticing that something is impactful, and then self-modifying so that in the worlds where you were needed you would do it, because in the worlds where you would be needed, you wouldn’t have done that modification (though there are some nuances here, like if you self-modify and there is some chance that you are needed in the future).
Not sure if that addresses part of what you were asking about.
I agree that SV’s don’t play nice with infinities, though I’m not sure whether there could be an extension which could (for instance, looking at the limit of the Shapley value).
I don’t think the example you give addresses my point. I am supposing that Leibniz could have also invented calculus, so v({2})=100. But Leibniz could have also invented lots of different things (infinitely many things!), and his claim to each invention would be valid (although in the real world he only invents finitely many things). If each invention is worth at least a unit of value, his Shapley value across all inventions would be infinite, even if Leibniz was “maximally unlucky” and in the actual world got scooped every single time and so did not invent anything at all.
I don’t understand the part about self-modifications—can you spell it out in more words/maybe give an example?
That’s assuming an infinite number of players. If there are only a finite number of players, there are only finitely many terms in the Shapley value calculation, and if each invention has finite value, the sum is finite.
The Wikipedia page says:
This means that there must be gains to distribute for anyone to get nonzero credit from that game, and that they in fact “collaborated” (although this could be in name only) to get any credit at all. Ignoring multiverses, infinitely many things have not been invented yet, but maybe infinitely many things will be invented in the future. In general, I don’t think that Leibniz cooperated in infinitely many games, or even that infinitely many games have been played so far, unless you define games with lots of overlap and double counting (or you invoke multiverses, or consider infinitely long futures, or some exotic possibilities, and then infinite credit doesn’t seem unreasonable).
Furthermore, in all but a small number of games, he might make no difference to each coalition even when he cooperates, so get no credit at all. Or the credit could decrease fast enough to have a finite sum, even if he got nonzero credit in infinitely many games, as it becomes vanishingly unlikely that he would have made any difference even in worlds where he cooperates.
In general, I don’t think you should sum an individual’s Shapley values across possible and maybe even actual games, because some actions the individual could take could be partially valuable in the same way in multiple games simultaneously, and you would double count value by summing. The sum wouldn’t represent anything natural or useful in such cases. However, there may be specific sets of games where it works out, maybe when the value across games is in fact additive for the value to the world. This doesn’t mean the games can’t interact or compete in principle, but the value function for each game can’t depend on the specific coalition set of any other game, but it can average over them.
I think a general and theoretically sound approach would be to build a single composite game to represent all of the games together, but the details could be tricky or unnatural, because you need to represent in which games an individual cooperates, given that they can only do so much in a bounded time interval.
1. Maybe you use the set of all players across all games as the set of players in the composite game, and cooperating in any game counts as cooperating in the composite game. To define the value function, you could model the distribution of games the players cooperate in conditional on the set of players cooperating in any game (taking an expected value). Then you get Shapley values the usual way. But now you’re putting a lot of work into the value function.
2. Maybe you can define the set of players to be the product of the set of all players across all of the games and the set of games. That is, with a set I of individuals (across all games) and a set X of games, (i,x)∈I×X cooperates if and only if i cooperates in game x. Then you can define i’s Shapley value as the sum of Shapley values over the “players” (i,x), ranging over the x. If you have infinitely many games in X, you get an infinite number of “players”. There is work on games with infinitely many players (e.g. Diubin). Maybe you don’t need to actually compute the Shapley value for each (i,x), and you can directly compute the aggregate values over each x for each i.
Unless you’re double counting, I think there are only finitely many games actually being played at a time, so this is one way to avoid infinities. In counterfactuals where an individual “cooperates” in infinitely many games locally (ignoring multiverses and many worlds) and in a finite time interval, their marginal contribution to value to a coalition (i.e. v(S∪{i})−v(S)=E[U|S∪{i}]−E[U|S],i∉S) is realistically going to be 0 in all but finitely many of those games, unless you double count value, which you shouldn’t.[1] The more games an individual is playing, the less they can usually contribute to each.
I don’t know off-hand if you can guarantee that the sum of an individual’s Shapley values across separately defined games matches the individual’s Shapley value for the composite game (defined based on 1 or 2) in interesting/general enough types of sets of games.
For an infinite set of games an individual “cooperates” in, they could randomly pick finitely many games to actually contribute to according to a probability distribution with positive probability on infinitely many subsets of games, and so contribute nonzero value in expectation to infinitely many games. I suspect this isn’t physically possible in a finite time interval. Imagine the games are numbered, and the player chooses which games to actually cooperate to by generating random subsets of numbers (or even just one at a time). To have infinite support in a finite time interval, they’d need a procedure that can represent arbitrarily large numbers in that time interval. In general, they’d need to be sensitive to arbitrarily large amounts of information to decide which games to actually contribute to in order to distinguish infinitely many subsets of games.
There could also just be butterfly effects on infinitely many games, but if those don’t average out in expectation, I’d guess you’re double counting.
Yeah, I did actually have this thought but I guess I turned it around and thought: shouldn’t an adequate notion of value be invariant to how I decide to split up my games? The linearity property on Wikipedia even seems to be inviting us to just split games up in however manner we want.
And yeah, I agree that in the real world games will overlap and so there will be double counting going on by splitting games up. But if that’s all that’s saving us from reaching absurd conclusions then I feel like there ought to be some refinement of the Shapley value concept...
This seems confused to me. Shapley values are additive, so one’s Shapley value should be the sum of one’s Shapley values for all games.
In particular, if you do an action that is valuable for many games, e.g., writing a Wikipedia article that is valuable for many projects, you could conceive of each project as its own game, and the Shapley value would be the sum of the contributions to each project. There is no double-counting.
<https://en.wikipedia.org/wiki/Shapley_value#Linearity>
I had to double-check, though, because you seemed so sure.
I think the linearity property holds if the two value/payoff functions themselves can be added (because Shapley values are linear combinations of the value/payoff functions’ values with fixed coefficients for fixed sets of players), but usually not otherwise. Also, I think this would generally assume a common set of players, and that a player cooperates in one game iff they cooperate in the other, so that we can use (v+w)(S)=v(S)+w(S).
I think there’s the same problem that motivated the use of Shapley values in the first place. Just imagine multiple decisions one individual makes as part of 3 separate corresponding games:
1. Doing the basics to avoid dying, like eating, not walking into traffic (and then working, earning money and donating some of it)
2. Working and earning money (to donate, where and how much to work)
3. Donating (how much to donate, and optionally also where)
Let’s assume earning-to-give only with low impact directly from each job option.
1 and 2 get their value from eventually donating, which is the decision made in 3, but you’d already fully count the value of your donations in 3, so you shouldn’t also count it in 1 or 2. These can also be broken down into further separate games. It doesn’t matter for your donations if you avoid dying now if you die soon after before getting to donate. You won’t get to donate more if you do 1 more minute of work in your job before quitting instead of quitting immediately.
I think people wouldn’t generally make the mistake of treating these as separate games to sum value across, because the decisions are too fine-grained and because the dependence is obvious. Even if they were earning money to donate from impactful direct work, they still wouldn’t accidentally double count their earnings/donations, because they wouldn’t represent that with multiple games.
A similar example that I think could catch someone would be someone who is both a grant advisor and doing separate fundraising work that isn’t specific to their grants but raises more money for them to grant, anyway. For example, they’re both a grant advisor for an EA Fund, and do outreach for GWWC. If they treat these as separate coalition games they’re playing, there’s a risk that they’ll double count additional money that’s been raised through GWWC and was granted on their recommendation (or otherwise affected by their grantmaking counterfactually). Maybe assume that if they don’t make grant recommendations soon, there’s a greater risk the extra funds aren’t useful at all (or are much much less useful), e.g. the extra funding is granted prioritizing other things over potential impact, the funds are misappropriated, or we go extinct. So, they’re directly or indirectly counting extra funding in both games. This seems harder to catch, because the relationship between the two games isn’t as obvious, and they’re both big natural decisions to consider.
Another example: calculus was useful to a huge number of later developments. Leibniz “cooperated” in the calculus-inventing game, but we might say he also cooperated in many later games that depended on calculus, but any value we’d credit him with generated in those later games should already be fully counted in the credit he gets in the calculus-inventing game.
There are also more degenerate cases, like two identical instances of the same game, or artificial modifications, e.g. adding and excluding different players (but counting their contributions anyway, just not giving them credit in all games).
Disagree-voting a question seems super aggressive and also nonsensical to me. (Yes, my comment did include some statements as well, but they were all scaffolding to present my confusion. I wasn’t presenting my question as an opinion, as my final sentence makes clear.) I’ve been unhappy with the way the EA Forum has been going for a long time now, but I am noting this as a new kind of low.
Thanks for this post. I’m also pretty enthusiastic about Shapley values, and a clear presentation like this was overdue.
The main worry I have is related to the first one GeorgeBridgwater notes: the values seem very sensitive to who one includes as a co-operative counterparty (and how finely we individuate them). As your example with vaccine reminders shows, different (but fairly plausible) accounts of this can change the ‘raw’ CE estimate by a factor of five.
We may preserve ordering among contributors if we twiddle this dial, but the more typical ‘EA problem’ is considering different interventions (and thus disjoint sets of counter-parties). Although typical ‘EA style’ CE estimates likely have expected errors in their exponent rather than their leading digit, a factor of 5 (or maybe more) which can hinge on relatively arbitrary decisions on how finely to individuate who we are working with looks pretty challenging to me.
The Banzhaf value should avoid this problem since it has the property of 2-Efficiency: “The 2-Efficiency property states that the allocation rule that satisfies it is immune against artificial merging or splitting of players.”
I’d like to hear more about this if you have the time. It seems to me that it’s hard to find a non-arbitrary way of splitting players.
Say a professor and a student work together on a paper. Each of them spends 30 hours on it and the paper would counterfactually not have been written if either of them had not contributed this time. The Shapley values should not be equivalent, because the ‘relative size’ of the players’ contributions shouldn’t be measured by time input.
Similarly, in the India vaccination example, players’ contribution size is determined by the money they spent. But this is sensitive to efficiency: one should not be able to get a higher Shapley value just from spending money inefficiently, right? Or should one, because this worry is addressed by Shapley cost-effectiveness?
(This issue seems structurally similar to how we should allocate credence between competing hypotheses in the absence of evidence. Just because the two logical possibilities are A and ~A does not mean a 50/50 credence is non-arbitrary. Cf. the Principle of Indifference.)
This is the best explanation I could find: Notes on a comment on 2-efficiency and the Banzhaf value.
It describes two different kinds of 2-efficiency, one for merging two players into one and one for splitting a player into two, and these lead to two corresponding properties.
So basically they’re just saying that players can’t artificially boost or reduce their assigned values by merging or splitting: the resulting reward is always just the sum of the individual rewards.
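To make the merging direction concrete, here is a minimal numerical sketch (my own, assuming a three-player unanimity game; the player names are arbitrary). Merging two players leaves their combined Banzhaf credit unchanged, while their combined Shapley credit changes:

```python
from itertools import permutations, combinations

def shapley(players, value):
    """Exact Shapley values: average marginal contribution over all orderings."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        s = set()
        for p in order:
            before = value(s)
            s.add(p)
            totals[p] += value(s) - before
    return {p: t / len(orders) for p, t in totals.items()}

def banzhaf(players, value):
    """Banzhaf value: average marginal contribution over all coalitions of the other players."""
    out = {}
    for p in players:
        others = [q for q in players if q != p]
        contributions = [
            value(set(c) | {p}) - value(set(c))
            for r in range(len(others) + 1)
            for c in combinations(others, r)
        ]
        out[p] = sum(contributions) / len(contributions)
    return out

# Unanimity game: 1 unit of value only if p, q and r all cooperate.
def v(s):
    return 1 if {"p", "q", "r"} <= s else 0

# Merged game: the single player "pq" stands in for both p and q.
def v_merged(s):
    expanded = {"p", "q"} if "pq" in s else set()
    return v(expanded | (s - {"pq"}))

bz, bz_m = banzhaf(["p", "q", "r"], v), banzhaf(["pq", "r"], v_merged)
print(bz["p"] + bz["q"], bz_m["pq"])  # 0.5 0.5   -> merging doesn't change the pair's Banzhaf credit

sh, sh_m = shapley(["p", "q", "r"], v), shapley(["pq", "r"], v_merged)
print(sh["p"] + sh["q"], sh_m["pq"])  # ~0.667 0.5 -> the pair's Shapley credit does change
```

At least in this toy game, the Banzhaf pair’s credit is invariant to the merge while the Shapley value reallocates it, which is the contrast the 2-efficiency property is pointing at.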
I don’t think it directly applies to your professor and student case. The closest analogue would be if the professor and student were working as part of a larger group. Then 2-efficiency would say that the student and professor collectively get X credit whether they submit their work under two names or one.
Sorry for the delayed reply. Does that help at all?
Thanks! Late replies are better than no replies ;)
I don’t think this type of efficiency deals with the practical problem of impact credit allocation, though! Because there the problem appears to be that it’s difficult to find a common denominator for people’s contributions. You can’t just use man-hours, and I don’t think the market value of man-hours would do that much better (although it goes in the right direction).
I really like this post!
I think you meant the fraction of the total Shapley value assigned to the organisation is S_o/S = n/(n + 1), which equals 1⁄2 for 1 applicant (n = 1). I have confirmed this is the case:
The number of players is n + 1.
The only coalition to which the marginal contribution of an applicant is non-zero is the one containing solely the organisation. In this case, the marginal contribution of the applicant is 100, and the coalition size is 1.
From the perspective of a given applicant, there are n coalitions of size 1 among the other players: n − 1 containing only one other applicant, and 1 containing only the organisation.
So the Shapley value of each applicant is S_a = 100/(n(n + 1)).
The total Shapley value of S = 100 should be equal to n*S_a + S_o, therefore S_o = 100 - n*S_a = 100*(1 − 1/(n + 1)) = 100*n/(n + 1).
In other words, the fraction of the total Shapley value assigned to the organisation is S_o/S = S_o/100 = n/(n + 1).
It is interesting to note that the relative contribution of each applicant, S_a/S = 1/(n(n + 1)), is roughly proportional to n^(-2). This suggests the contribution of each applicant would become roughly 0.25 times as large if the number of applicants doubled.
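For what it’s worth, a brute-force sketch of this calculation (my own check, assuming the value is 100 whenever the organisation plus at least one applicant are present):

```python
from itertools import permutations
from fractions import Fraction

def shapley(players, value):
    """Exact Shapley values: average marginal contribution over all orderings."""
    totals = {p: Fraction(0) for p in players}
    orders = list(permutations(players))
    for order in orders:
        s = set()
        for p in order:
            before = value(s)
            s.add(p)
            totals[p] += value(s) - before
    return {p: t / len(orders) for p, t in totals.items()}

def hiring_game(n):
    """Organisation plus n interchangeable applicants; 100 utility once it can hire someone."""
    players = ["org"] + [f"applicant_{i}" for i in range(n)]
    def value(s):
        return 100 if "org" in s and any(p != "org" for p in s) else 0
    return players, value

for n in range(1, 6):
    players, value = hiring_game(n)
    sv = shapley(players, value)
    assert sv["org"] == Fraction(100 * n, n + 1)            # S_o = 100 * n / (n + 1)
    assert sv["applicant_0"] == Fraction(100, n * (n + 1))  # S_a = 100 / (n * (n + 1))
    print(n, sv["org"], sv["applicant_0"])
```

(For n = 1 this gives a 50/50 split, matching the 1/2 above.)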
However, all of this assumes applicants would be equally good at their job (or that the selected applicant would be chosen randomly). If the factual impact of the selected applicant (i.e. the total Shapley value) equals S = n^alpha, where alpha >= 0, the relative contribution of each applicant would be:
S_a = 1/(n + 1) * Σ_{i=1}^{n} (i^alpha − (i − 1)^alpha)/C(n, i), where C(n, i) is the binomial coefficient.
I have demonstrated this is equal to n^(alpha − 1)/(n + 1), but the proof is too long to fit in this comment… Just kidding, the formula is probably not that simple.
> I have confirmed this is the case
Oh, nice!
I’m skating on thin ice, but I think
1) The discussion is basically correct.
2) Similar problems have been discussed in evolutionary game theory, chemical reaction/economic/ecological networks, cooking, and category theory.
3) I find it difficult to wade through examples (i.e. stories about AMF and the Gates Foundation, or EA hiring); these remind me of many ‘self-help’ psychology books which explain how to resolve conflicts by going through numerous vignettes involving couples, families, etc. I can’t remember the names and roles of all the ‘actors’.
4) I think a classic theorem in game theory (probably by John von Neumann, but maybe by John Nash) shows you can convert Shapley value to counterfactual value very easily. The same issue applies in physics, which can often be thought of as a ‘continuous game’.
5) Time-ordering invariance is not really a problem (except technically); you can include a time variable, as is done in evolutionary game theory. (Mathematically it’s a much more difficult problem, but not conceptually.)
That would surprise me; can you think of a source?
As I said, I’m skating on thin ice, but the theorem says you can convert any positive- or negative-sum game into a zero-sum game. (It’s due to von Neumann or Nash, but I think I saw it in books on evolutionary game theory. I think there are analogues in physics, and even ecology, etc.)
Again, I think that may be related to the counterfactual/Shapley conversion I ‘see’ or think exists, but I can’t prove it; I’d have to look at the definitions again.
To possibly fall through more holes in the ice, I think the prisoner’s dilemma might be the simplest example.
(I’m just not fluent in the definitions, since I didn’t learn them when I was studying some game theory; but I looked at many game theory texts where they did occur, mostly for more complex situations than the ones I was dealing with.)
Also, I only learned the term ‘counterfactual’ from a history book by Niall Ferguson (not a big hero of mine, but he had what seemed like worthwhile ideas; he wrote ‘counterfactual history’, e.g. ‘what would the state of the world be if Germany had won WW2?’).
As noted, I also find that examples which use ‘vignettes’ or ‘scenarios’, fractions, whole numbers like ‘7 EA candidates’ or ‘$60 million’, along with the names of countries (India) and organizations, are difficult (or time-consuming) for me to process. But this is just a stylistic or personal issue.
I wonder if you think an exercise trying to compare the Shapley vs counterfactual value of the two WW2 cases is meaningful, i.e. would the money spent by the UK/USA/etc. fighting the war have been better spent another way?
I may even put this question to myself to see if it’s meaningful in your framework. I spend a bit of time on questionable math/logic problems (some of which have solutions, but I try to find different proofs because I don’t understand the existing ones, and occasionally do; many theorems have many correct proofs which look very different and use different methods, and have often been discovered by many people on different continents at the same time, e.g. the renormalization group in physics was discovered by Feynman and Nambu (Japan) at about the same time. I wish I had a study group who shared my interest in problems like this one; the few acquaintances I have who work on math/logic basically work on problems that interest them, and don’t find mine interesting or relevant.)
P.S. I just re-skimmed your article and see that in Scenario 6 you dealt with the ‘tragedy of the commons’, which I view as an n-person variant of the 2-person prisoner’s dilemma.
Also, your Example 2 (Newton and Leibniz) is sort of what I was thinking of. The theorem I had in mind would add to the picture something like a ‘god’ who would create either Newton, Leibniz, or both of them. The Shapley value would be the same in all cases (unless 2 calculus discoveries are better than 1; in the sciences this is sometimes seen as true (‘replication’), as is having ‘multiple witnesses’ in law as opposed to just an account by one person, who is the victim and may not be believed).
(It’s also the case, for example, that the 3 or 4 or even 5 early versions of quantum mechanics (Schrödinger, Heisenberg, Dirac, Feynman, Bohm; though some say de Broglie anticipated Bohm, and Feynman acknowledged that he found his idea in a footnote of a book by Dirac), although redundant in many ways, each have unique perspectives. The golden rule also has many formulations, I’ve heard.)
(In my scenario with ‘god’, I think the counterfactual value of either Newton or Leibniz would be 1, because without either or both of them there would be no calculus, which has Shapley value 1. God could have just created nothing: 0 rather than 1.)
In a way, what you seem to be describing is how to avoid the ‘neglectedness’ problem of EA theory. This overlaps with questions in politics: some people vote for candidates from a major party who may win anyway, rather than vote for a ‘minor party’ they may actually agree with more. This might be called the ‘glow effect’; similarly, some people will support a rock or sports star partly just to be in the ‘in crowd’. So they get ‘counterfactual value’ even if the world is no better off (voting for someone who will win anyway is no better than voting for someone who will lose). Or rather, they actually get additional Shapley value because they are ‘happier’ being in the ‘in crowd’ than in a less favored minority, but this involves a different calculation of the Shapley value, one which includes ‘happiness’ and not just ‘who won’. Then again, some people are happier being in ‘minorities’, so that’s another complication in the calculations.
(E.g. the Beck song ‘Loser’ (‘I’m a loser, baby’) comes to mind. It pays to be a loser sometimes, or to support an unpopular cause, because it’s actually a neglected one; people just didn’t know its actual or Shapley value.)
Thanks for this interesting post. As I argued in the post that you cite and as George Bridgwater notes below, I don’t think you have identified a problem in the idea of counterfactual impact here, but have instead shown that you sometimes cannot aggregate counterfactual impact across agents. As you say, CounterfactualImpact(Agent) = Value(World with agent) - Value(World without agent).
Suppose Karen and Andrew have a one night stand which leads to Karen having a baby George (and Karen and Andrew otherwise have no effect on anything). In this case, Andrew’s counterfactual impact is:
Value (world with one night stand) - Value (world without one night stand)
The same is true for Karen. Thus, the counterfactual impact of each of them taken individually is an additional baby George. This doesn’t mean that the counterfactual impact of Andrew and Karen combined is two additional baby Georges. In fact, the counterfactual impact of Karen and Andrew combined is also given by:
Value (world with one night stand) - Value (world without one night stand)
Thus, the counterfactual impact of Karen and Andrew combined is an additional baby George. There is nothing in the definition of counterfactual impact which implies it can always be aggregated across agents.
This is the difference between “if me and Karen hadn’t existed, neither would George” and “If I hadn’t existed, neither would George, and if Karen hadn’t existed neither would George, therefore if me and Karen hadn’t existed, neither would two Georges.” This last statement is confused, because the babies referred to in the antecedent are the same.
I discuss other examples in the comments to Joey’s post.
**
The counterfactual understanding of impact is how almost all voting theorists analyse the expected value of voting. EAs tend to think that voting is sometimes altruistically rational because of the small chance of being the one pivotal voter and making a large counterfactual difference. On the Shapley value approach, the large counterfactual difference would be divided by the number of winning voters. Firstly, to my knowledge almost no one in voting theory assesses the impact of voting in this way. Secondly, this would, I think, imply that voting is never rational, since in any large election the prospective pay-off of voting would be divided across the potential set of winning voters and so would be >100,000x smaller than on the counterfactual approach.
I don’t exactly claim to have identified a problem with the counterfactual function, in itself. The counterfactual is perfectly well defined, and I like it, and it has done nothing wrong. I understand this. It is clear to me that it can’t be added just like that. The function, per se, is fine.
What I’m claiming is that, because it can’t be aggregated, it is not the right function to think about in terms of assigning impact to people in the context of groups. I am arguing about the area of applicability of the function, not about the function. I am claiming that, if you are optimizing for counterfactual impact in terms of groups, pitfalls may arise.
It’s like when you first see −1 = sqrt(-1)*sqrt(-1) = sqrt((-1)*(-1)) = sqrt(1) = 1, therefore −1 = 1, and you can’t spot the mistake. It’s not that the sqrt function is wrong, it’s that you’re using it outside its limited fiefdom, so something breaks. I hope the example proved amusing.
I’m not only making statements about the counterfactual function; I’m also making statements about the concept people have in their heads which is called “impact”, and how that concept doesn’t map to counterfactual impact some of the time, and about how, if you had to map that concept to a mathematical function, the Shapley value is a better candidate.
This post was awarded an EA Forum Prize; see the prize announcement for more details.
My notes on what I liked about the post, from the announcement:
Instead of “counterfactually” should we say “Shapily” now?
Roses are redily
counterfactuals sloppily
but I don’t thinkily
that we should use Shapily
Nice post!
Quick thought on example 2:
I just wanted to point out that what is described with Newton and Leibniz is a very, very simplified example.
I imagine that really, Newton and Leibniz wouldn’t be the only ones counted. With Shapley values, all of the other many people responsible for them doing that work and for propagating it would also have shared responsibility. Plus, all of the people who would have invented calculus had the two of them not invented it also would have had some part of the Shapley value.
The phrase “The Shapley assigns equal value to equivalent agents.” is quite tricky here, as there’s a very specific meaning to “equivalent agents” that probably won’t be obvious to most readers at first.
Of course, much of this complexity also takes place with counterfactual value. (As in, Newton and Leibniz aren’t counterfactually responsible for all of calculus, but rather some speedup and quality difference, in all likelihood).
Hi Nuno, great post.
I am thinking about how to calculate Shapley values for policy work. So far I am just getting confused, so I would love your input.
1.
How to think about the case where the government is persuaded to take some action. In general, how would you recommend calculating the Shapley value of persuading someone to do something?
If I persuade you to donate to a charity out of the goodness of your heart, then I assume you would say the impact is split between me as the persuader and you as the giver (and the charity and other actors). But what if I persuade you to do something for a non-altruistic reason, e.g. I tell you that donating would be good for your company’s image and that sales would go up; would you say it is the same? My naive reading of your post is that in the second case I get 100% of the value (minus the split with the charity and other actors).
2.
How to think about crowd actions. If I organise a ballot initiative on good thing x, 1.5 million people vote for it, and it happens and makes the world better, I assume I claim something like 50% of the value (50% responsibility for each vote)? What about the case where there is no actual vote, but I use the fact (gathered from survey data) that 1.5 million people say they would vote for good thing x to persuade policymakers to adopt x? I assume in this case I get 100% of the value of x, as the population did not take action. Is that how you would see it?
Hey,
I.
Initially I thought I’d calculate the SV by looking at:
Value of coalition {}:
Value of coalition {Gov}:
Value of coalition {Lobbying group}:
Value of coalition {Gov, Lobbying group}:
But this feels a bit awkward, because you’d have to calculate the value of the whole government. So I’d be inclined to do something like:
Value of coalition {}: 0 // State of the world in which neither the government department nor the lobbying group exists.
Value of coalition {Part of the government department}: What would the government department otherwise do.
Value of coalition {Lobbying group}: What would the lobbying group otherwise do.
Value of coalition {Part of the government department, Lobbying group}: What will they both do.
Assuming that Value of coalition {Part of the government department} = 0 and Value of coalition {Lobbying group} = 0, they split the gains, so this is just the counterfactual value / 2. If that is not the case, this becomes more complicated, because you’d have to incorporate into the calculation whatever other group the lobbying group would otherwise lobby.
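To make the two-player arithmetic explicit, here is a minimal sketch (the 10-unit joint payoff and the 4-unit outside option are made-up numbers):

```python
def shapley_two_player(v_empty, v_dept, v_lobby, v_both):
    """Closed-form Shapley values for two players: average over the two possible orderings."""
    phi_dept = ((v_dept - v_empty) + (v_both - v_lobby)) / 2
    phi_lobby = ((v_lobby - v_empty) + (v_both - v_dept)) / 2
    return phi_dept, phi_lobby

# Neither actor achieves anything alone; together they produce 10 units of value.
print(shapley_two_player(0, 0, 0, 10))   # (5.0, 5.0): each gets counterfactual value / 2

# If the lobbying group could produce 4 units elsewhere on its own, the split shifts.
print(shapley_two_player(0, 0, 4, 10))   # (3.0, 7.0)
```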
But what’s the point of just dividing all counterfactual values by 2? There is no point. The fun begins when there are projects with more stakeholders and projects with fewer stakeholders, in which case the SV divides total impact by a larger number for the former; i.e., optimizing the SV would recommend projects with fewer stakeholders over projects with more, given the same impact.
Also, note that you have to include raised taxes in your calculation, e.g., if you lobby for some large amount of spending, that spending doesn’t come from the void but corresponds to some mix of larger taxes, more debt, prioritization amongst programs, etc.
II.
Well, this depends on whether:
1. All the votes were necessary
2. There were more than enough votes
If 1., then each participant would get 1/n of the impact. If 2., then it depends, but as there are more and more extra votes, more of the credit goes to the organizer.
I’ve added an example to the shapleyvalue.com website here: <http://shapleyvalue.com/?example=11>, where “An organization organizes a costless online vote to pass a measure of value 1, and it passes if it gets more than 3 votes. 7 people vote in favour.”
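For anyone who wants to reproduce that example offline, here is a brute-force sketch using the standard coalition formula for the Shapley value (the player names are mine; the game is value 1 if the organizer runs the vote and more than 3 people vote in favour):

```python
from itertools import combinations
from math import factorial
from fractions import Fraction

def shapley(players, value):
    """Shapley values via the coalition formula: sum over S of |S|!(n-|S|-1)!/n! * marginal contribution."""
    n = len(players)
    out = {}
    for p in players:
        others = [q for q in players if q != p]
        phi = Fraction(0)
        for r in range(n):
            weight = Fraction(factorial(r) * factorial(n - r - 1), factorial(n))
            for coal in combinations(others, r):
                s = set(coal)
                phi += weight * (value(s | {p}) - value(s))
        out[p] = phi
    return out

players = ["organizer"] + [f"voter_{i}" for i in range(7)]

def value(s):
    # The measure (worth 1) passes only if the organizer runs the vote
    # and more than 3 of the 7 people vote in favour.
    in_favour = sum(1 for q in s if q != "organizer")
    return 1 if "organizer" in s and in_favour > 3 else 0

sv = shapley(players, value)
print(sv["organizer"])   # 1/2:  the organizer gets half of the credit
print(sv["voter_0"])     # 1/14: the seven voters split the other half equally
```

If I have set this up correctly, it also illustrates the point above about extra votes: with only the 4 necessary voters the organizer’s share would be 1/5, whereas with 7 voters it is already 1/2.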
Hope this was of help
I think that if this is true, they aren’t modelling the counterfactual correctly. If it were the case that all the others were definitely going for the 100 joint utility project no matter what you do, then yes, you should also do that, since the difference in utility is 100 > 20. That’s the correct solution in this particular case. If none of the others were pursuing the 100 utility project, then you should pursue the 20 utility one, since 20 > 0. Reality is in-between, since you should treat the counterfactual as a (subjective) probability distribution.
EDIT: “Reality is in-between” was inaccurate. Rather, the situation I presented had all decisions independent. In reality, they are not independent, and you should consider your impact on the decisions of others. See my reply below.
What you say seems similar to a stag hunt. Consider, though, that if the group is optimizing for their individual counterfactual impact, they’ll want to coordinate to all do the 100 utility project. If they were optimizing their Shapley value, they’d instead want to coordinate to do 10 different projects, each worth 20 utility: 20*10 = 200 > 100.
Consider this case: you choose the 20 utility project and single-handedly convince the others to each choose the 20 utility project, or else you convince everyone to do the joint 100 utility project. Now, your own individual counterfactual impact would be 20*10 = 200 > 100.
If you all coordinate and all agree to the 20 utility projects, with the alternative being everyone choosing the joint 100 utility project, then each actor has an impact of 20*10 = 200 > 100. Each of them can claim they convinced all the others.
So, when you’re coordinating, you should consider your impact on others’ decisions; some of the impact they attribute to themselves is also your own, and this is why you would end up double-counting if you just add up individual impacts to get the group’s impact. Shapley values may be useful, but maximizing expected utility still, by definition, leads to the maximum expected utility (ex ante).
Good point!
In my mind, that gets a complexity penalty. Imagine that instead of ten people, there were 10^10 people. Then, for that hack to work, and for everyone to be able to say that they convinced all the others, there has to be some overhead, which I think the Shapley value doesn’t require.
FWIW, it’s as complex as you want it to be, since you can use subjective probability distributions, but there are tradeoffs. With a very large number of people, you probably wouldn’t rely much on individual information anymore, and would instead lean on aggregate statistics. You might assume the individuals are sampled from some (joint) distribution which is identical under permutations.
If you were calculating Shapley values in practice, I think you would likely do something similar, too. However, if you do have a lot of individual data, then Shapley values might be more useful there (this is not an informed opinion on my part, though).
Perhaps Shapley values could also be useful to guide more accurate estimation, if directly using counterfactuals is error-prone. But it’s also a more complex concept for people to understand, which may cause difficulties in their use and verification.
Yes, now I see what is wrong with Scenario 1. Both Alice’s and Bob’s contributions of $1000 are necessary conditions, but neither alone is sufficient to elicit the 15 utility. Hence neither contribution of $1000 alone elicits the 15 utility; they are only sufficient in conjunction. Counterfactuals are still conditionals, and you have to get the logic right.
The counterfactual value of Alice is typically calculated as the value if Alice didn’t exist or didn’t participate. If both Alice and Bob are necessary for a project, the counterfactual value of each is the total value of the project.
I agree that you can calculate conditionals in other ways (like with Shapley values), and that in that case you get more meaningful answers.
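A tiny sketch of that difference, for a project worth 15 utility that needs both donors (my own illustration):

```python
def value(coalition):
    # The joint project only happens if both Alice and Bob contribute their $1000.
    return 15 if {"Alice", "Bob"} <= coalition else 0

everyone = {"Alice", "Bob"}

# Counterfactual impact: value of the world with the agent minus value without the agent.
counterfactual = {p: value(everyone) - value(everyone - {p}) for p in everyone}
print(counterfactual)   # {'Alice': 15, 'Bob': 15} -> the individual impacts sum to 30, not 15

# Shapley value for two players: average marginal contribution over both orderings.
shapley = {
    p: (value({p}) - value(set()) + value(everyone) - value(everyone - {p})) / 2
    for p in everyone
}
print(shapley)          # {'Alice': 7.5, 'Bob': 7.5} -> sums to the total of 15
```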
I don’t understand how both Alice’s and Bob’s utility contributions in the counterfactual could be 15 in Scenario 1. Counterfactuals are still based on logic and maths, and that does not add up.
Sorry, there’s something really basic I don’t get about Example 1. There is a good chance I am mistaken, but I think this confusion will be shared by many, so I think it’s worth commenting.
Your point is that Scenario 1 is what they would come to if they each try to maximise their personal counterfactual impact. In your example, Alice and Bob are calculating their counterfactual impact in each scenario on the basis that the other person’s decision is not affected by theirs. But if both are trying to maximise their personal counterfactual impact, then the other person’s decision IS affected by theirs! So their calculations of counterfactual impact were wrong!
Each person can say to the other “I’ll donate to P if you donate to P, otherwise I’ll donate to Q/R” (because that is what would maximise their counterfactual impact). Each will then see that donating to P provides a counterfactual impact of 5 (utility of 15 is created rather than 10), while donating to Q/R gives a counterfactual impact of 10 (20 is created rather than 10). They will both donate to Q/R, not P like you suggest.
(You CAN make it so that they can’t communicate and they each THINK that their decision doesn’t affect the other’s [even though it does], and this WOULD make the counterfactual impact in each scenario the same as the ones you give. BUT you would have to multiply those impacts by the probability that the other person would make the given choice, and then combine it with the situation where they don’t… so it doesn’t work the same and even then it would be an information issue rather than a problem with maximising counterfactual impact...)