I’ve thought a bit about this in the past. It’s a complicated issue because it mixes an already philosophically awkward point with significant uncertainty. I’ll see if I can get somewhere with untangling it:
First, it may be helpful to remember that the real question is “what actions should I take?”, not “how good was this thing I did?”. Expectations of how good the different actions would be are helpful in choosing what to do, of course.
If you knew precisely the counterfactual that would apply absent your action (and it’s that she would never have made that donation and would have lived an otherwise similar life), it would be correct to say that you’d done $100 worth of good. Likewise, if she knew the precise counterfactuals attaching to her donation, it would be correct from her perspective to say she’d done $100 of good. These numbers don’t need to add up to $100; Parfit has a lengthier explanation in “Five Mistakes in Moral Mathematics”.
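A toy case makes the non-additivity vivid (the notation here is mine, introduced just for illustration). Write v(S) for the good done if exactly the people in S act, and suppose the donation happens only if you both act, so v({you}) = v({her}) = $0 and v({you, her}) = $100. Then:

$$
\underbrace{v(\{\text{you, her}\}) - v(\{\text{her}\})}_{\text{your counterfactual impact}} = \$100,
\qquad
\underbrace{v(\{\text{you, her}\}) - v(\{\text{you}\})}_{\text{her counterfactual impact}} = \$100,
$$

so the two counterfactual impacts sum to $200 even though only $100 of good was done.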
However, in practical terms we aren’t anywhere near precise knowledge of the counterfactuals. Even in theory it’s not clear that we all could be, when other agents are involved. If you model everyone as agents trying to be credited with good for their deeds, then cooperative game theory can give you some tools for assigning credit, and the credit will add up to $100. But this doesn’t seem quite right as a model either, since it isn’t clear your friend was even playing this game (it may be a better model for splitting credit among EAs).
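For concreteness, one standard such tool is the Shapley value (my pick for illustration; nothing in the question commits to it). It averages each person’s marginal contribution over all possible orderings of the players, and the resulting shares always sum to the total value produced. A minimal sketch in Python, using the hypothetical two-player donation story above:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Shapley value of each player, where `value` maps a frozenset
    of players to the total good (in $) that coalition produces alone."""
    n = len(players)
    shapley = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # Weight = probability this coalition precedes p
                # in a uniformly random ordering of all players.
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                # p's marginal contribution when joining this coalition.
                marginal = value(s | {p}) - value(s)
                shapley[p] += weight * marginal
    return shapley

# Toy model (an assumption, not from the question): the $100 donation
# happens only if both you and your friend act; either alone yields $0.
def value(coalition):
    return 100.0 if coalition == frozenset({"you", "friend"}) else 0.0

print(shapley_values(["you", "friend"], value))
# {'you': 50.0, 'friend': 50.0} -- the credit sums to exactly $100.
```

Each of you gets $50 here, rather than $100 each, precisely because this scheme forces the shares to sum to the $100 actually produced.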
There are also some advantages to adopting, as a heuristic, the rule that the credit has to add up to $100. It’s relatively easy to apply, and it’s fairly robust: it’s harder for a group of people to get confused and collectively make a big mistake. Because there are so many uncertainties when we try to guess counterfactuals, we have to judge on expectations, and the cap is a way of keeping those expectations anchored to reality.