Does anyone have any thoughts on how much we should value leading other people to donate? I mean this in a very narrow sense, and my thoughts on this topic are quite muddled, so I’ll try to illustrate what I mean with a simplified example. I apologize if my confusion ends up making my writing unclear.
If I talk with a close friend of mine about EA for a bit, and she donates $100 to, say, GiveWell, and then she disengages from EA for the rest of her life, how much should I value her donation to GiveWell? In this scenario, I put some time and effort into getting my friend to donate, and she presumably wouldn’t have donated $100 if I hadn’t chatted with her, so it feels like maybe I did a few dollars’ worth of good by chatting with her. At the same time, she’s the one who donated the money, so it feels like she should get credit for all of the good that her donation did. But wait—if I did a few dollars’ worth of good, does that mean she did less than $100 worth of good?
At this point, my moral intuitions on this issue are all over the place. Treating the story above as genuinely problematic seems to presuppose that the good done by my friend and me must sum to $100, but the only reason I’ve tacitly assumed that is that it intuitively feels true. I previously wrote a comment on LessWrong about this that wasn’t any clearer than this one; the response I got was quite clear, but I’m still confused.
I’ve thought a bit about this in the past. It’s a complicated issue because it mixes an already philosophically awkward point with significant uncertainty. I’ll see if I can get somewhere with untangling it:
First, it may be helpful to remember that the real question is “what actions should I take?”, not “how good was this thing I did?”. An expectation of how good each action would be is, of course, helpful in choosing what to do.
If you knew precisely the counterfactual that would apply absent your action (namely, that she would never have made that donation and would have lived an otherwise similar life), it would be correct to say that you’d done $100 worth of good. Likewise, from her perspective, if she knew the precise counterfactuals attaching to her donation, it would be correct to say she’d done $100 of good. These numbers don’t need to add up to $100; Parfit gives a lengthier explanation in “Five Mistakes in Moral Mathematics” (in Reasons and Persons).
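To make the non-additivity concrete, here is a minimal sketch. The world model, the actor names, and the rule that the donation happens only if both the chat and the decision to donate occur are illustrative assumptions, not anything from Parfit:

```python
def counterfactual_impact(outcome, world, actor):
    """'World with my action' minus 'world without it', holding all else fixed."""
    without = dict(world)
    without[actor] = False  # remove only this actor's contribution
    return outcome(world) - outcome(without)

# Assumed toy model: the $100 donation occurs only if BOTH the chat
# and the friend's decision to donate occur.
def outcome(world):
    return 100.0 if world["you_chat"] and world["friend_donates"] else 0.0

world = {"you_chat": True, "friend_donates": True}
yours = counterfactual_impact(outcome, world, "you_chat")
hers = counterfactual_impact(outcome, world, "friend_donates")
```

Each actor's counterfactual impact comes out at $100, so the two impacts sum to $200 even though only $100 was donated—which is exactly why the figures need not add up.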
However, in practical terms we aren’t that close to precise knowledge of the counterfactuals. Even in theory it’s not clear that we all could be, when other agents are involved. If you model everyone as agents trying to be credited with good for their deeds, then cooperative game theory can give you some tools for assigning credit (such as the Shapley value)—and it will add up to $100. But this doesn’t seem quite right as a model either, since it isn’t clear your friend was even playing this game (it may be a better model for splitting credit among EAs).
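As an illustrative sketch of that game-theoretic approach, the Shapley value averages each player's marginal contribution over all orderings in which the players could be added. The player names and the assumption that the donation happens only when both parties act are hypothetical:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Assumed toy game: neither the advocate nor the donor produces the
# $100 donation alone; together they do.
def v(coalition):
    return 100.0 if coalition == frozenset({"advocate", "donor"}) else 0.0

credit = shapley_values(["advocate", "donor"], v)
```

Under these assumptions each party is credited with $50, and by construction the shares always sum to the $100 actually donated—illustrating how this model enforces the add-up-to-$100 intuition.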
There are some other advantages of assuming as a heuristic that the credit has to add up to $100. It’s relatively easy to apply, and it’s fairly robust—it’s harder for a group of people to get confused and collectively do something that’s a big mistake. Particularly because there are so many uncertainties when we try to guess counterfactuals, we want to judge on expectations, and the cap is a method of keeping our expectations more anchored to reality.