As I said in the post, “I’m unsure if there is a simple solution to this, since Shapley values require understanding not just your own strategy, but the strategy of others, which is information we don’t have. I do think that it needs more explicit consideration...”
You’re saying that “if you’re maximizing this only over your own actions and their consequences, including on others’ responses (and possibly acausal influence), it’s just maximizing expected utility.”
I think we agree, modulo the fact that we’re operating in conditions where much of the information we need to “just” maximize utility is unavailable.
It seems like you’re overselling Shapley values here, then, unless I’ve misunderstood. They won’t help to decide which interventions to fund, except for indirect reasons (e.g. assigning credit and funding ex post, judging track record).
You wrote “Then we walk away saying (hyperopically,) we saved a life for $5,000, ignoring every other part of the complex system enabling our donation to be effective. And that is not to say it’s not an effective use of money! In fact, it’s incredibly effective, even in Shapley-value terms. But we’re over-allocating credit to ourselves.”
But if $5000 per life saved is the wrong number to use to compare interventions, Shapley values won’t help (for the right reasons, anyway). The solution here is to just model counterfactuals better. If you’re maximizing the sum of Shapley values, you’re acknowledging we have to model counterfactuals better anyway, and the sum is just expected utility, so you don’t need the Shapley values in the first place. Either Shapley value cost-effectiveness is the same as the usual cost-effectiveness (my interpretation 1) and redundant, or it’s a predictably suboptimal theoretical target (e.g. maximizing your own Shapley value only, as in Nuno’s proposal, or as another option, my interpretation 2, which requires unrealistic counterfactual assumptions).
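To make the “the sum is just expected utility” point concrete, here is a minimal sketch of the Shapley efficiency property: the Shapley values of all players always sum to the value of the grand coalition, so maximizing that sum is just maximizing total value. The two-player donation model (donor plus delivery infrastructure, each worthless alone) and its payoffs are hypothetical, chosen only to illustrate the credit-splitting point.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Compute each player's Shapley value, given a characteristic
    function v mapping a frozenset of players to that coalition's value."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # Average player i's marginal contribution over all coalitions S not containing i,
        # weighted by |S|! * (n - |S| - 1)! / n!.
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Hypothetical model: one life is saved only if BOTH the donor's $5,000
# and the delivery infrastructure are present; either alone achieves nothing.
players = ["donor", "infrastructure"]
def v(coalition):
    return 1.0 if coalition == frozenset(players) else 0.0

phi = shapley_values(players, v)
print(phi)  # credit is split equally: 0.5 each, not 1.0 to the donor
print(sum(phi.values()))  # 1.0, equal to v(grand coalition) by efficiency
```

The efficiency axiom is what makes the redundancy argument go through: whatever assignment of credit the Shapley decomposition gives, the quantity being maximized when you sum it back up is the same total value an ordinary expected-utility calculation targets.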
The solution to the non-EA money problem is also to just model counterfactuals better. For example, Charity Entrepreneurship has used estimates of the counterfactual cost-effectiveness of non-EA money raised by their incubated charities if the incubated charity doesn’t raise it.
You’re right that Shapley values are the wrong tool—thank you for engaging with me on that, and I have gone back and edited the post to reflect that!
I’m realizing as I research this that the underlying problem is that act-utilitarianism fundamentally fails for cooperation, and there’s a large literature on that fact[1]; I need to do much more research.
But “just model counterfactuals better” isn’t a useful response. It’s just saying “get the correct answer,” which completely avoids the problem of how to cooperate and how to avoid the errors I was pointing at.
Kuflik, A. (1982). “Utilitarianism and Large-Scale Cooperation.” Australasian Journal of Philosophy 60(3): 224–237.
Regan, Donald H. (1980). “Co-operative Utilitarianism Introduced.” In Utilitarianism and Co-operation. Oxford.
Williams, Evan G. (2017). “Introducing Recursive Consequentialism: A Modified Version of Cooperative Utilitarianism.” The Philosophical Quarterly 67(269): 794–812.