Sure. To be clear, I think most of what I’m concerned about applies to prioritization decisions made in highly-uncertain scenarios. So far, I think the EA community has had very few opportunities to look back and conclusively assess whether highly-uncertain things it prioritized turned out to be worthwhile. (Ben makes a similar point at https://www.lesswrong.com/posts/Kb9HeG2jHy2GehHDY/effective-altruism-is-self-recommending.)
That said, there are cases where I believe mistakes are being made. For example, I think mass deworming in areas where almost all worm infections are light cases of trichuriasis or ascariasis is almost certainly not among the most cost-effective global health interventions.
Neither trichuriasis nor ascariasis appears to have common/significant/easily-measured symptoms when infections are light (i.e., when there are not many worms in an infected person’s body). To reach the conclusion that treating these infections has a high expected value, extrapolations are made from the results of a study that had some weird features and occurred in a very different environment (an environment with far heavier infections and additional types of worm infections). When GiveWell makes its extrapolations, lots of discounts, assumptions, probabilities, etc. are used. I don’t think people can make this kind of extrapolation reliably (even if they’re skeptical, smart, and thinking carefully). When unreliable estimates are combined with an optimization procedure, I worry about the optimizer’s curse.
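To make the optimizer’s curse concrete, here’s a minimal simulation (my own illustration, not anything from GiveWell’s models): twenty interventions all have the same true value, but we only see noisy cost-effectiveness estimates, and we always fund the one with the highest estimate. The winner’s estimate is systematically inflated even though nothing is actually better than anything else.

```python
import random

random.seed(0)

def optimizers_curse_demo(n_options=20, noise_sd=1.0, trials=10_000):
    """Average estimated value of the option chosen by maximizing
    over noisy estimates. Every option's TRUE value is 0.0, so any
    apparent edge in the chosen option's estimate is pure noise.
    """
    total = 0.0
    for _ in range(trials):
        # One noisy cost-effectiveness estimate per option.
        estimates = [random.gauss(0.0, noise_sd) for _ in range(n_options)]
        # The optimizer funds whichever option *looks* best.
        total += max(estimates)
    return total / trials

avg = optimizers_curse_demo()
print(f"Average estimate of the chosen option: {avg:.2f}")
# True value of every option is 0.0, yet the chosen option's estimate
# averages well above 0 -- selection on noisy estimates inflates them.
```

The inflation grows with both the number of options compared and the noisiness of the estimates, which is why I worry most when many highly-uncertain estimates are fed into a single prioritization exercise.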
Someone who is generally skeptical of people’s ability to productively use models in highly-uncertain situations might instead survey experts about the value of treating light trichuriasis & ascariasis infections. Faced with the decision of funding either this kind of deworming or a different health program that looked highly effective, I think the person who ran surveys would choose the latter.
(I used to work for GiveWell)
Hey Ben,
I’m sympathetic to a lot of the points you make in this post, but I think your conclusions are far more negative than is reasonable.
Here’s the stuff I largely agree with you on:
-The opportunities to save lives w/ global health interventions probably aren’t nearly as easy as Singer’s thought experiment suggests
-Entities other than GiveWell use GiveWell’s estimates without the appropriate level of nuance and detail about where the estimates come from and how uncertain they are
-There’s nothing close to a $50,000,000,000 funding gap for ultra cost-effective interventions to save lives
-GiveWell’s cost-effectiveness estimates are probably overly optimistic
That said, I find a few of the things you say in this post frustrating:
I don’t think anyone at GiveWell believes millions of lives could be saved today at an ultra-low cost. GiveWell regularly publishes its room-for-more-funding analyses, which indicate it thinks the funding gaps for its recommended interventions amount to way, way less than $50 billion/year.
As far as I can tell, people at Good Ventures & Open Phil sincerely believe that funding in cause areas other than global health may be incredibly cost-effective. I think Good Ventures funds other stuff because they think each $5,000 of funding given to those causes may do more good than an additional $5,000 given to GiveWell’s recommended charities. They might be dead wrong, but I don’t think they rationalize their choices with, “Well, GiveWell’s estimates are just BS so let’s not take them seriously.”
I find this way of describing GW’s motivations awfully uncharitable.
GiveWell puts a ton of effort into coming up with these numbers and drawing on them as they make decisions. None of that would happen if the numbers were just created for the purposes of marketing and manipulation. I have significant reservations about how GiveWell’s estimates are created and used. I don’t have significant reservations about GiveWell’s sincerity when sharing the estimates.