“If I accepted every claim in his piece, I’d come away with the belief that some EA charities are bad in a bunch of random ways, but believe nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques.”
I agree, but I think Wenar does a very good job of pointing out specific weaknesses. If he had instead framed this piece as “how EA should improve” (which is how I mentally steelman every EA hit-piece that I read), it would be an excellent piece. Under his actual framing of “EA bad”, I think it is a very unsuccessful piece.
I think these are his very good and perceptive criticisms:
Global health and development EA does not adequately account for side-effects, unintended consequences and perverse incentives caused by different interventions in its expected-value calculations, and does not adequately advertise these risks to potential donors. Weirdly, I don’t think I’ve come across this criticism of EA before despite it seeming very obvious. I think this might be because people are polarised between “aid bad” and “aid good”, leaving very few people saying “aid good overall but you should be transparent about downsides of interventions you are supporting”.
The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of quantitative empirical evidence supporting these estimates.
Expected-value calculations rooted in probabilities derived from belief (as opposed to probabilities derived from empirical evidence) are prone to motivated reasoning and self-serving biases.
I’ve previously discussed weaknesses of expected-value calculations on the forum and have suggested some actionable tools to improve them.
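To make the third criticism concrete, here is a toy Monte Carlo sketch (all numbers are made up for illustration) of how an expected-value estimate built on a belief-derived probability range, rather than empirical evidence, swings with the assumptions the modeller chooses:

```python
import random

def ev_samples(p_low, p_high, value_if_success, n_samples=10_000, seed=0):
    """Sample a subjective success probability uniformly from [p_low, p_high]
    and return the resulting distribution of expected values."""
    rng = random.Random(seed)
    return [rng.uniform(p_low, p_high) * value_if_success
            for _ in range(n_samples)]

def mean(xs):
    return sum(xs) / len(xs)

# Two modellers who disagree only about the subjective probability range
# reach very different headline estimates for the same intervention.
optimist = ev_samples(0.05, 0.20, value_if_success=1000)   # assumed range
pessimist = ev_samples(0.001, 0.02, value_if_success=1000)  # assumed range
```

Here the optimist's mean estimate is roughly an order of magnitude larger than the pessimist's, even though nothing empirical distinguishes the two probability ranges. That gap is exactly where motivated reasoning can hide.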
I think GiveWell should definitely clarify what they think the most likely negative side-effects and risks of the programs they recommend are, and how severe they think those side-effects are.
Re 1, as Richard says: “Wenar scathingly criticized GiveWell—the most reliable and sophisticated charity evaluators around—for not sufficiently highlighting the rare downsides of their top charities on their front page. This is insane: like complaining that vaccine syringes don’t come with skull-and-crossbones stickers vividly representing each person who has previously died from complications. He is effectively complaining that GiveWell refrains from engaging in moral misdirection. It’s extraordinary, and really brings out why this concept matters.”
Re 2: I just don’t think this is true. EAs often note the uncertainty.
Re 3: This is true, but it is constantly talked about by EAs. Furthermore, I don’t know what the alternative is supposed to be: just ignore all non-quantifiable harms?
“The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of quantitative empirical evidence supporting these estimates.”
In my experience, this is not a winnable battle. Regardless of how many times you repeat that your quantitative estimates are based on limited evidence / embed a lot of assumptions / have high margins of error / etc., people will say you’re taking your estimates too seriously.
On #1, how would you define “adequately account” and “adequately advertise”? I wasn’t convinced that Wenar’s specific GiveWell examples rose to a level of materiality that would justify these conclusions.
Even agreeing that EA GHD should be held to a higher standard because its effectiveness claims are much more explicit and specific, I also think “industry standards” are relevant to this point. If a criticism is no more valid of EA GHD than the charitable sector as a whole, critics need to say that.