I’m annoyed at vague “value” questions. If you ask a specific question the puzzle dissolves. What should you do to make the world go better? Maximize world-EV, or equivalently maximize your counterfactual value (not in the maximally-naive way — take into account how “your actions” affect “others’ actions”). How should we distribute a fixed amount of credit or a prize between contributors? Something more Shapley-flavored, although this isn’t really the question that Shapley answers (and that question is almost never relevant, in my possibly controversial opinion).
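To make the two questions concrete, here’s a toy sketch in Python (the players, the payoff function, and the 100-unit prize are all numbers I’m inventing for illustration): a project that produces 100 units of value, but only if both contributors show up. The Shapley split of the credit is 50/50, while each contributor’s naive counterfactual value is the full 100.

```python
from itertools import permutations

# Toy two-player coalition game, numbers invented for illustration:
# the project produces 100 only if both A and B contribute.
def v(coalition):
    return 100.0 if set(coalition) == {"A", "B"} else 0.0

players = ["A", "B"]

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = v(coalition)
            coalition.append(p)
            totals[p] += v(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Credit question: Shapley splits the prize 50/50.
print(shapley_values(players, v))  # {'A': 50.0, 'B': 50.0}

# Action question: each player's naive counterfactual impact is the full 100,
# because the project fails if either one drops out.
for p in players:
    print(p, v(players) - v([q for q in players if q != p]))  # 100.0 each
```

Different questions, different numbers: one is about dividing a fixed pot, the other is about what would have happened without you.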
Happy to talk about well-specified questions. Annoyed at questions like “should I use counterfactuals here” that don’t answer the obvious reply, “use them FOR WHAT?”
I don’t feel 100% bought in to the Shapley value approach, and think there’s value in paying attention to the counterfactuals. My unprincipled compromise would be to take some weighted geometric mean of the two and call it a day.
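For concreteness, a minimal sketch of that compromise, assuming “weighted geometric mean” means blending a contributor’s Shapley value and naive counterfactual value as shapley^w * counterfactual^(1 - w); the weight w and the reuse of the toy figures above are my own choices, not anything specified here:

```python
# Toy blend of the two credit numbers; w = 0.5 is an arbitrary weight.
def blended_credit(shapley, counterfactual, w=0.5):
    return (shapley ** w) * (counterfactual ** (1 - w))

# With the toy figures from the sketch above (Shapley 50, counterfactual 100):
print(blended_credit(50.0, 100.0))  # ~70.7
```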
FOR WHAT?
Let’s assume in all of these scenarios that you are only one of the players in the situation, and you can only control your own actions.
If this is your specification (with the implicit further specification that you’re an altruist trying to maximize total value, deciding how to trade off between increasing X and doing good in other ways), then there is a correct answer: maximize counterfactual value (which is equivalent to maximizing total value, i.e. argmaxing total value over your possible actions), not your personal Shapley value or anything else. (Just like in all other scenarios. Multiplicativeness is irrelevant. Maximizing counterfactual value is always the answer to questions about what action to take.)
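A toy illustration of that point, with all numbers invented for the sketch: you split 10 units of effort between a joint project X whose value is multiplicative in both contributions (and which needs your partner), and a solo option that returns r per unit. Argmaxing counterfactual value picks the same action as argmaxing total value, because the two differ by a constant; argmaxing your personal Shapley value picks a different action and leaves the world worse off.

```python
# Invented numbers: split a budget of 10 effort units between joint project X
# (value = x * y, and it needs both of you) and a solo option returning r per unit.
BUDGET = 10
y = 10.0   # partner's fixed contribution to X
r = 7.0    # per-unit return of your solo option

def total_value(x):
    return x * y + (BUDGET - x) * r

def counterfactual_value(x):
    # Value of your choice relative to the world where you do nothing at all;
    # X needs you and the solo work is yours, so that baseline is 0 here.
    return total_value(x) - 0.0

def your_shapley_value(x):
    # Two-player game: v({you}) = your solo returns (X fails without the partner),
    # v({partner}) = 0, v(both) = total_value(x).
    you_join_first = (BUDGET - x) * r        # marginal value joining an empty coalition
    you_join_second = total_value(x) - 0.0   # marginal value joining after the partner
    return (you_join_first + you_join_second) / 2

xs = range(BUDGET + 1)
print(max(xs, key=total_value))           # 10 -> put everything into X (total 100)
print(max(xs, key=counterfactual_value))  # 10 -> same argmax, since they differ by a constant
print(max(xs, key=your_shapley_value))    # 0  -> Shapley-chasing leaves the total at 70
```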