The first criticism feels pretty odd to me. Clearly what Singer, MacAskill, GiveWell, etc. are talking about is the counterfactual impact of your donation, since that is the thing that should be guiding your decision-making. And that seems totally fine and in accordance with ordinary English: it is fine to say that I saved the life of the choking person by performing the Heimlich maneuver, that Henry Heimlich saved X number of lives by inventing the Heimlich maneuver, that my instructor saved Y number of lives by teaching people the Heimlich maneuver, and that a government program to promote knowledge of the Heimlich maneuver saved Z lives, even if X + Y + Z + 1 is greater than the overall number of lives saved by the Heimlich maneuver. And if I were, say, arguing for increased funding of the government program by saying it would save a certain number of lives, it would be completely beside the point to start arguing that I should actually divide the marginal impact of increased funding to account for the contribution of Henry Heimlich, etc.
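To make the arithmetic concrete, here is a toy sketch (all numbers below are invented for illustration) of how several true counterfactual claims can sum to more than the total number of lives saved:

```python
# Toy illustration with invented numbers. Counterfactual impact asks:
# "how many lives would NOT have been saved without this actor?"
# Several actors can each be necessary for the same rescues, so these
# attributions can legitimately sum to more than the total.

total_lives_saved = 1000  # lives actually saved via the Heimlich maneuver

counterfactual_impact = {
    "Henry Heimlich (invented it)": 1000,  # no invention, no rescues at all
    "instructor (taught rescuers)": 400,   # rescuers who'd otherwise not know it
    "government awareness program": 300,   # rescues traceable to the program
    "me (performed it once)": 1,           # my one rescue
}

attributed = sum(counterfactual_impact.values())
print(f"Sum of counterfactual attributions: {attributed}")         # 1701
print(f"Lives actually saved:               {total_lives_saved}")  # 1000
# 1701 > 1000, yet each claim is true on its own: remove any one actor
# and the rescues attributed to them would not have happened.
```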
This is a key point, and EJT helpfully shared on Twitter an excerpt from Reasons and Persons in which Parfit clearly explains the fallacy behind the “share of the total view” that Wenar seems to be uncritically assuming here. (ETA: this is the first of Parfit’s famous “Five mistakes in moral mathematics”, one of the most important parts of arguably the most important work of 20th-century moral philosophy.)
This is foundational stuff for the philosophical tradition upon which EA draws, and casts an ironic light on Wenar’s later criticism: “The crucial-but-absent Socratic meta-question is, ‘Do I know enough about what I’m talking about to make recommendations that will be high stakes for other people’s lives?’”
I agree that when it comes to decision-making, Leif’s objection doesn’t work very well.
However, when it comes to communication, I think there is a point here (although I’m not sure it was the one Leif was making). If GiveWell communicates about the donation and how many lives you saved, without mentioning the aid workers and mothers who put up the nets, aren’t they selling those people short and dismissing their importance?
In Parfit’s thought experiment, obviously you should go on the four-person mission and help save the hundred lives. But if you then went on a book tour and touted what a hero you are for saving the hundred lives, without mentioning the other three people, you would be a jerk.
I could imagine an aid worker in Uganda being kind of annoyed that they spent weeks working full-time in sweltering heat handing out malaria nets for low pay, only to watch some tech guy in America take all the credit for the lifesaving work. That could hurt EA’s ability to connect with the third world.
I think there is a valuable concern about triple-counting impact in EA, and I agree that there is a case for Shapley values being better than counterfactuals[1].
What I really don’t agree with is the suggestion that we should let someone choke and die just because Henry Heimlich would get the credit anyway. The goal is not to get the most credit or the highest Shapley value, but to help others, and I don’t see what Prof. Wenar proposes as a better alternative to GiveWell.
I disagree that Shapley values are better than counterfactuals in most cases, but I think it’s a reasonable stance to have.
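For anyone unfamiliar with the contrast being drawn here: in Parfit’s four-person rescue case every rescuer is necessary, so each one’s counterfactual impact is the full hundred lives, while Shapley values split the credit so the attributions sum to exactly one hundred. A minimal sketch (the characteristic function below is just a toy encoding of Parfit’s case, not anyone’s actual impact model):

```python
from itertools import combinations
from math import factorial

players = ["A", "B", "C", "D"]  # Parfit's four rescuers

def v(coalition):
    """Toy characteristic function: the mission saves 100 lives
    only if all four rescuers join; any smaller group saves none."""
    return 100 if len(coalition) == len(players) else 0

def shapley(player):
    """Player's marginal contribution to v, averaged over all
    orders in which the players could be added."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(n):  # size of the coalition the player joins
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (v(set(S) | {player}) - v(set(S)))
    return total

for p in players:
    counterfactual = v(set(players)) - v(set(players) - {p})
    print(f"{p}: counterfactual = {counterfactual}, Shapley = {shapley(p)}")
# Each rescuer: counterfactual = 100 (all four are necessary),
# Shapley = 25.0 (the four values sum to exactly the 100 lives saved).
```

Which figure is the “right” one is exactly what is disputed above; for the decision of whether to join the mission, the counterfactual hundred is the decision-relevant number.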
Different ways of calculating impact make sense in different contexts. What I want to say is that the way Singer, MacAskill, and GiveWell are doing it (i) is the one you should be using when deciding whether/where to donate (at least assuming you aren’t in some special collective-action problem, etc.), and (ii) is totally fine by ordinary standards of speech: it isn’t deceptive, misleading, excessively imprecise, etc. Maybe we agree.
Yes, I think we agree, but I also think that it’s not a crux of the argument.
As Neel Nanda noted, whatever vaguely reasonable method you use to calculate impact will result in attributing a lot of impact to life-saving interventions.