This feels like an “if a tree falls in the forest and no one is nearby, does it make a sound?” type of debate. “Blame” and “credit” are social constructions, not objective features of reality discoverable through experiment, and in principle we could define them however we wanted.
I think the right perspective here is a behavioral psychology one. Blame & credit are useful constructions insofar as they reinforce (and, counterfactually, motivate) particular behaviors. For example, if Mary receives credit for donating $100, she will feel better about the donation and more motivated to donate in the future—to society’s benefit. If Joe makes a good bet in a poker game, but ends up losing the round anyway, and his poker teammates blame him for the loss, he will feel punished for making what was fundamentally a good bet and not make bets like that one in the future—to his team’s harm.
So ultimately the question of where to assign credit or blame is highly situation-dependent, and the most important input might be how others will see & learn from how the behavior is regarded. I might blame John’s three comrades equally for his death, because they all made an effort to kill him and I want to discourage efforts to kill people equally, regardless of whether they happen to work or not. I may even assign all three comrades the “full blame” for John’s death, because blame, being a social construct, is not a conserved quantity.
Let’s take the $100 donation example again. Let’s say I can cause an additional $100 worth of donation to GiveWell by donating $x to Giving What We Can. Say the EA community assigns me $100 worth of credit for achieving this. If I receive $100 worth of credit for either making or encouraging a donation of $100, then I will be motivated to encourage donation whenever x < 100, and to make donations directly whenever x > 100.
This scheme would produce efficient outcomes for EA. Suppose x = $80; that is, donating $80 to Giving What We Can results in an additional $100 for GiveWell. The net effect of my $80 donation is thus that $100 gets donated to GiveWell. But if x = $120, the movement would be better off had I donated the $120 to GiveWell directly instead of using it to purchase $100 worth of donation.
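To make the incentive concrete, here’s a minimal sketch of that decision rule in Python. It assumes the stylized setup above (a fixed $x buys exactly $100 of extra donation, and credit is $100 either way); the function name and constant are mine, not anything GWWC or GiveWell actually publishes.

```python
# Minimal sketch of the credit-motivated decision rule described above.
# Assumption (from the example, not a real figure): donating x dollars to
# Giving What We Can reliably produces an extra $100 for GiveWell, and the
# EA community credits either action at $100.

LEVERAGED_AMOUNT = 100.0  # dollars that reach GiveWell per act of encouragement


def preferred_action(x: float) -> str:
    """Which action a credit-motivated donor prefers, given the cost x
    of purchasing $100 worth of leveraged donation."""
    if x < LEVERAGED_AMOUNT:
        return "donate to Giving What We Can (cheaper per $100 moved)"
    if x > LEVERAGED_AMOUNT:
        return "donate directly to GiveWell (encouragement costs more than it moves)"
    return "indifferent"


print(preferred_action(80))   # x = $80  -> encourage via GWWC; $80 buys $100 for GiveWell
print(preferred_action(120))  # x = $120 -> donate directly; $120 direct beats $100 leveraged
```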
But there are complicated second-order effects. Suppose the person who donates $100 as a result of my $x donation to Giving What We Can notices that, since x < 50, they are best off donating their $100 to Giving What We Can too. Done on a wide scale, this has the potential to change the value of x in complicated ways; you could probably figure out the new value of x with some calculus, but it’s getting late.

There’s also the effect of increasing the speed of movement growth, which might be a bad thing. Or maybe the person I encourage to donate $100 later learns that I was purchasing credit more efficiently than they were and feels like a sucker. Or maybe people outside the movement notice this “credit inflation” aspect of EA and discount the movement because of it. (Similar to how we discount trophies from sports competitions if every player gets their own “participation trophy”.)

Finally, there’s the time value of money: if my $80 donation to GWWC takes 20 years to manifest as $100 more for GiveWell, then depending on the rate of return I’d get by investing the $80, I might be better off investing it and donating the resulting capital in 20 years. To decide between this option and direct donation I’d also need to know GiveWell’s discount rate (a quick break-even sketch below). Etc.
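On the time-value point, here’s a back-of-the-envelope sketch of the break-even comparison. It uses only the numbers from the example above (an $80 donation, a 20-year lag, a $100 eventual result) and ignores GiveWell’s own discount rate, which, as noted, a real answer would also need.

```python
# Back-of-the-envelope sketch for the time-value-of-money point above.
# Assumed numbers come from the example: $80 to GWWC shows up as an extra
# $100 for GiveWell only after 20 years.

principal = 80.0         # what I could invest today instead
future_donation = 100.0  # what my GWWC donation eventually produces
years = 20

# Solve principal * (1 + r) ** years = future_donation for the break-even return r.
breakeven_r = (future_donation / principal) ** (1 / years) - 1
print(f"break-even annual return: {breakeven_r:.2%}")  # roughly 1.12%
```

So on this naive comparison, if I expect my investments to beat roughly 1.1% per year, investing the $80 and donating the proceeds in 20 years looks better; a proper comparison would still need GiveWell’s discount rate, since a dollar is presumably worth more to them now than in 20 years.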
Some interesting points, John, and I agree that blame can be manipulated to mean whatever we want it to mean for a given purpose. But this was more directed at the measurement of impact in EA meta-orgs and individuals. If some EA org claims to have directed $200,000 of donations to effective charities for a spend of $100,000, the cost-benefit ratio would be 1:2. But I’m not convinced that this is the whole picture, and if we’re not measuring this sort of thing correctly, we could be spending $100,000 to raise only $99,999 counterfactually and not realising it.
One example is that I rarely see the cost-benefit analysis account for where this money might have gone otherwise, even when the figure is presented as counterfactual. Maybe it would have gone to a pretty good charity instead of a great one, in which case we shouldn’t claim the full value of that donation. And maybe that $1,000 donation made to AMF would have happened anyway. And there are all sorts of other complicating effects.
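To show the kind of adjustment I mean, here’s a toy calculation. Every number in it (the 30% that would have been donated anyway, the assumption that displaced donations were worth half as much at their next-best destination) is invented purely for illustration, not an estimate about any real org.

```python
# Toy sketch of a counterfactually adjusted cost-benefit ratio for a meta-org.
# All figures below are invented for illustration.

spend = 100_000.0            # what the meta-org spent
raised = 200_000.0           # donations it claims to have directed
would_have_happened = 0.30   # fraction that would have been donated anyway
displaced_value = 0.50       # relative value of where the remainder would have
                             # gone otherwise (a pretty good charity vs a great one)

# Only the genuinely counterfactual portion counts in full...
counterfactual = raised * (1 - would_have_happened)
# ...and even that should be discounted by the value the money had in its
# next-best use, since only the difference is added value.
added_value = counterfactual * (1 - displaced_value)

print(f"naive ratio:    1:{raised / spend:.2f}")       # 1:2.00
print(f"adjusted ratio: 1:{added_value / spend:.2f}")  # 1:0.70
```

With these made-up numbers the headline 1:2 ratio shrinks to roughly 1:0.7, i.e. below break-even, which is exactly the “spending $100,000 to raise only $99,999” worry.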
I’m just making the point that things are, I believe, more complicated than we generally make them out to be.