I don’t see Shapley values mentioned anywhere in your post. I think you’ve made a mistake in how you attribute the value of things that multiple people have worked on, and Shapley values would help you fix that mistake.
Wouldn’t estimating Shapley values still miss a core insight of the post—that ‘do-gooding’ efforts are ultimately co-dependent, not simply additive?
EXAMPLE: We can estimate the Shapley values for the relative contributions of different pieces of wood, matches, and newspaper to a fire. These estimated Shapley values might indicate that the biggest piece of wood contributed the most to the fire, but miss several critical details:
The contribution of matches and newspaper was ‘small’ but essential. This didn’t show up in our estimated Shapley values because our dataset didn’t include instances where there were no matches or no newspaper
Kindling was also an essential contributor but was not included in our calculations
The accessibility of the fire inputs depended on its own set of interacting inputs, e.g. a trusting social and economic system that enabled us to access them
We also made the high-risk assumption that the fire would be used and experienced beneficially
INTERPRETED IMPLICATION: estimated Shapley values still miss, at least in part, that the outcomes of our efforts are co-dependent. Do we therefore still mislead ourselves by attempting to frame EA as an independent exercise?
(I’m not confident about this and would be keen to hear critiques.)
Unless I’m misunderstanding, isn’t this “just” an issue of computing Shapley values incorrectly? If kindling is important to the fire, it should be included in the calculation; if your modeling neglects to consider it, then the problem is with the modeling and not with the Shapley algorithm per se.
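To make this concrete, here’s a minimal sketch of an exact Shapley computation for a made-up version of the fire example (the player names and warmth numbers are my own inventions, purely for illustration). Once kindling is included as a player and coalitions without matches or newspaper are enumerated, the ‘small but essential’ inputs do get credited:

```python
from itertools import combinations
from math import factorial

# Hypothetical inputs and warmth numbers, invented for illustration.
players = ["big_log", "small_log", "matches", "newspaper", "kindling"]

def warmth(coalition):
    """Warmth produced by a set of inputs: no starters, no fire at all."""
    if not {"matches", "newspaper", "kindling"} <= set(coalition):
        return 0.0
    value = 2.0                      # small fire from the starters alone
    if "big_log" in coalition:
        value += 6.0
    if "small_log" in coalition:
        value += 3.0
    return value

def shapley(players, v):
    """Exact Shapley values: weighted average of each player's marginal
    contribution over every coalition of the other players."""
    n = len(players)
    values = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[p] += weight * (v(coalition + (p,)) - v(coalition))
    return values

for player, value in shapley(players, warmth).items():
    print(f"{player}: {value:.2f}")
```

Of course, this only works because the toy value function can be evaluated on every coalition, including the ones missing matches or kindling, which is exactly the modeling burden at issue here.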
Of course, I say “just” in quotes because actually computing real Shapley values that take everything into account is completely intractable. (I think this is your main point here, in which case I mostly agree. Shapley values will almost always be pretty made-up in the best of circumstances, so they should be taken lightly.)
I still find the concept of Shapley values useful in addressing this part of the OP:
Impact does not seem to be a property that can sensibly be assigned to an individual. If an individual (or organisation) takes an action, there are a number of reasons why I think that the subsequent consequences/impact can’t solely be attributed to that one individual.
I read this as sort of conflating the claims that “impact can’t be solely attributed to one person” and “impact can’t be sensibly assigned to one person.” Shapley values help with assigning values to individuals even when they’re not solely responsible for outcomes, so they help pull these two claims apart conceptually.
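For what it’s worth, the formal definition makes that separation explicit. With a player set N and a value function v defined on coalitions, player i’s Shapley value is a weighted average of their marginal contributions, and the values always sum to the total value produced:

$$\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr), \qquad \sum_{i \in N} \phi_i(v) = v(N)$$

So assigning a share of v(N) to each individual never requires any one of them to be solely responsible for it; it only requires some way of valuing coalitions.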
Much more fuzzily, my experience of learning about Shapley values took me from thinking “impact attribution is basically impossible” (as in the quote above) to “huh, if you add a bit more complexity you can get something decent out.” My takeaway is to be less easily convinced that problems of this type are fundamentally intractable.