Sorry, there’s something really basic I don’t get about Example 1. There’s a good chance I’m mistaken, but I suspect this confusion will be shared by many, so it seems worth commenting.
Your point is that Scenario 1 is the outcome they would reach if each tries to maximise their personal counterfactual impact. In your example, Alice and Bob calculate their counterfactual impact in each scenario on the basis that the other person’s decision is not affected by theirs. But if both are trying to maximise their personal counterfactual impact, then the other person’s decision IS affected by theirs! So their calculations of counterfactual impact were wrong!
Each person can say to the other, “I’ll donate to P if you donate to P; otherwise I’ll donate to Q/R” (because that is what would maximise their counterfactual impact). Each will then see that donating to P provides a counterfactual impact of 5 (utility of 15 is created rather than 10), while donating to Q/R gives a counterfactual impact of 10 (20 is created rather than 10). So they will both donate to Q/R, not P as you suggest.
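To make that arithmetic concrete, here’s a minimal sketch in Python. The payoff structure is my reconstruction from the numbers in the example, so treat it as an assumption: P yields 15 utility only if both Alice and Bob fund it (0 otherwise), while Q and R each yield 10 if funded.

```python
# Assumed payoffs, reconstructed from the example's numbers: P yields 15
# only if BOTH donors fund it (0 otherwise); Q and R each yield 10 if funded.

def total_utility(alice, bob):
    """Total utility created given each donor's chosen project ('none' = no donation)."""
    utility = 15 if alice == "P" and bob == "P" else 0   # P needs both donations
    for project in ("Q", "R"):
        if project in (alice, bob):
            utility += 10                                # Q and R pay off individually
    return utility

# Naive counterfactual impact: hold Bob's choice fixed at P.
print(total_utility("P", "P") - total_utility("none", "P"))   # 15 - 0 = 15, so P looks best
print(total_utility("Q", "P") - total_utility("none", "P"))   # 10 - 0 = 10

# Conditional strategy from the comment: Bob matches a donation to P, otherwise funds R.
def bob_response(alice):
    return "P" if alice == "P" else "R"

def impact(alice):
    baseline = total_utility("none", bob_response("none"))   # 10: Bob funds R alone
    return total_utility(alice, bob_response(alice)) - baseline

print(impact("P"))   # 15 - 10 = 5
print(impact("Q"))   # 20 - 10 = 10, so Q/R wins and total utility is 20
```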
(You CAN make it so that they can’t communicate and each THINKS that their decision doesn’t affect the other’s [even though it does], and this WOULD make the counterfactual impact in each scenario the same as the ones you give. BUT you would then have to multiply those impacts by the probability that the other person makes the given choice, and combine that with the case where they don’t… so it doesn’t work out the same, and even then it would be an information problem rather than a problem with maximising counterfactual impact.)
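Spelling out that probability-weighting with the same assumed payoffs as the sketch above: if each thinks there is probability p that the other donates to P, then donating to P has an expected counterfactual impact of p·15 + (1−p)·0 = 15p (P only pays off when the other also funds it), while donating to Q/R has an expected impact of 10 regardless. So P only wins when p > 2/3, and that threshold is a fact about their beliefs about each other, not a flaw in counterfactual reasoning itself.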