Stories such as Peter Singer’s “drowning child” hypothetical frequently imply that there is a major funding gap for health interventions in poor countries, such that people in rich countries have a moral imperative to give a large portion of their income to charity. There are simply not enough excess deaths for these claims to be plausible.
Much of this is a restatement of part of my series on GiveWell and the problem of partial funding, so if you read that carefully and in detail, this may not be new to you, but it’s important enough to have its own concise post. This post has been edited after its initial publication for clarity and tone.
People still make the funding gap claim
In his 1997 essay The Drowning Child and the Expanding Circle, Peter Singer laid out the basic argument for a moral obligation to give much more than most people do, for the good of poor foreigners:
To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance.
Singer no longer consistently endorses cost-effectiveness estimates that are so low, but still endorses the basic argument. Nor is this limited to him. As of 2019, GiveWell claims that its top charities can avert a death for a few thousand dollars, and the Center for Effective Altruism claims that someone with a typical American income can save dozens of lives over their lifetime by donating 10% of their income to the Against Malaria Foundation, which points to GiveWell’s analysis for support. (This despite GiveWell’s long-standing disclaimer that you shouldn’t take its expected value calculations literally). The 2014 Slate Star Codex post Infinite Debt describes the Giving What We Can pledge as effectively a negotiated compromise between the perceived moral imperative to give literally everything you can to alleviate Bottomless Pits of Suffering, and the understandable desire to still have some nice things.
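For concreteness, here is a back-of-envelope sketch of how a “dozens of lives” figure can be generated. The income, career length, and cost-per-life inputs below are illustrative assumptions of mine, not CEA’s published figures:

```python
# Rough reconstruction of a "dozens of lives" claim.
# All inputs are illustrative assumptions, not CEA's actual figures.
income = 50_000          # assumed typical American annual income, USD
donation_rate = 0.10     # the 10% pledge
years = 40               # assumed length of a working life
cost_per_life = 5_000    # assumed cost per life saved, GiveWell-style

lifetime_donations = income * donation_rate * years    # $200,000
lives_saved = lifetime_donations / cost_per_life       # 40 lives, i.e. "dozens"
print(f"${lifetime_donations:,.0f} donated over a lifetime "
      f"= {lives_saved:.0f} lives at ${cost_per_life:,} each")
```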
How many excess deaths can developing-world interventions plausibly avert?
According to the 2017 Global Burden of Disease report, around 10 million people die per year, globally, of “Communicable, maternal, neonatal, and nutritional diseases.”* This is roughly the category that the low cost-per-life-saved interventions target. If we assume that all of this is treatable at current cost-per-life-saved numbers—the most generous possible assumption for the claim that there’s a funding gap—then at $5,000 per life saved (substantially higher than GiveWell’s current estimates), averting all of them would cost about $50 billion per year.
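As a sketch, the arithmetic behind that upper bound, using only the figures already cited above:

```python
# Upper bound on the annual cost of averting all deaths in this category,
# using the GBD death count and a deliberately generous cost per life.
annual_deaths = 10_000_000   # GBD 2017: communicable, maternal, neonatal, nutritional
cost_per_life = 5_000        # USD, substantially above GiveWell's estimates

total_cost = annual_deaths * cost_per_life
print(f"${total_cost:,} per year")   # $50,000,000,000 per year
```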
This is already well within the capacity of funds available to the Gates Foundation alone, and the Open Philanthropy Project / GiveWell is the main advisor of another multi-billion-dollar foundation, Good Ventures. The true funding gap is almost certainly much smaller, because many communicable, maternal, neonatal, and nutritional diseases do not admit of the kinds of cheap mass-administered cures that justify current cost-effectiveness numbers.
Of course, that’s an annual number, not a total number. But if we think that there is a present, rather than a future, funding gap of that size, that would have to mean that it’s within the power of the Gates Foundation alone to wipe out all fatalities due to communicable diseases immediately, a couple of times over—in which case the progress really would be permanent, or at least quite lasting. And infections are the major target of current mass-market donor recommendations.
Even if we assume no long-run direct effects (no reduction in infection rates the next year, no flow-through effects, the people whose lives are saved just sit around not contributing to their communities), a large funding gap implies opportunities to demonstrate impact empirically with existing funds. Take the example of malaria alone (the target of the intervention specifically mentioned by CEA in its “dozens of lives” claim). The GBD report estimates 619,800 annual deaths; a reduction by half at $5k per life saved would cost only about $1.5 billion per year, an annual outlay that the Gates Foundation alone could sustain for over a decade, and that Good Ventures could certainly maintain for a couple of years on its own.
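The same arithmetic applied to malaria, again using only the figures cited above:

```python
# Cost of averting half of annual malaria deaths at $5,000 per life saved.
malaria_deaths = 619_800     # GBD 2017 estimate
cost_per_life = 5_000        # USD

halving_cost = (malaria_deaths / 2) * cost_per_life
print(f"${halving_cost:,.0f} per year")   # about $1.55 billion per year
```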
GiveWell’s stated reason for not bothering to monitor statistical data on outcomes (such as malaria incidence and mortality, in the case of AMF) is that the data are too noisy. A reduction like that ought to be very noticeable, and therefore ought to make filling the next year’s funding gap much more appealing to other potential donors. (And if the intervention doesn’t do what we thought, then potential donors are less motivated to step in—but that’s good, because it doesn’t work!)
Imagine the world in which funds already allocated are enough to bring deaths due to communicable, maternal, neonatal, and nutritional diseases to zero or nearly zero even for one year. What else would be possible? And if you think that people’s revealed preferences correctly assume that this is far from possible, what specifically does that imply about the cost per life saved?
What does this mean?
If the low cost-per-life-saved numbers are meaningful and accurate, then funders like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths. If the Gates Foundation and Good Ventures are behaving properly because they know better, then the opportunity to save additional lives cheaply has been greatly exaggerated. My former employer GiveWell in particular stands out, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell’s top charities; it was worried that Good Ventures would be saving more than its “fair share” of lives.
In either case, we’re not getting these estimates from a source that behaves as though it both cared about and believed them. The process that promoted them to your attention is more like advertising than like science or business accounting. Basic epistemic self-defense requires us to interpret them as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.
We should be more skeptical, not less, of vague claims by the same parties to even more spectacular returns on investment for speculative, hard to evaluate interventions, especially ones that promise to do the opposite of what the argument justifying the intervention recommends.
If you give based on mass-marketed high-cost-effectiveness representations, you’re buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There’s no substitute for developing and acting on your own models of the world.
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk while maximizing earnings, and give all your excess money to the global poor or something even more urgent. Insofar as there’s a way to fix these problems as a low-info donor, there’s already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.
* A previous version of this post erroneously read a decadal rate of decline as an annual rate of decline, which implied a stronger conclusion than is warranted. Thanks to Alexander Gordon-Brown for pointing out the error.
(I used to work for GiveWell)
Hey Ben,
I’m sympathetic to a lot of the points you make in this post, but I think your conclusions are far more negative than is reasonable.
Here’s the stuff I largely agree with you on:
-Saving lives with global health interventions probably isn’t nearly as easy as Singer’s thought experiment suggests
-Entities other than GiveWell use GiveWell’s estimates without the appropriate level of nuance and detail about where the estimates come from and how uncertain they are
-There’s nothing close to a $50 billion funding gap for ultra-cost-effective interventions to save lives
-GiveWell’s cost-effectiveness estimates are probably overly optimistic
That said, I find a few of the things you say in this post frustrating:
I don’t think anyone at GiveWell believes millions of lives could be saved today at an ultra-low cost. GiveWell regularly publishes room-for-more-funding analyses indicating that it thinks the funding gaps for its recommended interventions amount to way, way less than $50 billion/year.
As far as I can tell, people at Good Ventures & Open Phil sincerely believe that funding in cause areas other than global health may be incredibly cost-effective. I think Good Ventures funds other stuff because they think each $5,000 of funding given to those causes may do more good than an additional $5,000 given to GiveWell’s recommended charities. They might be dead wrong, but I don’t think they rationalize their choices with, “Well, GiveWell’s estimates are just BS so let’s not take them seriously.”
I find this way of describing GW’s motivations awfully uncharitable.
GiveWell puts a ton of effort into coming up with these numbers and drawing on them as they make decisions. None of that would happen if the numbers were just created for the purposes of marketing and manipulation. I have significant reservations about how GiveWell’s estimates are created and used. I don’t have significant reservations about GiveWell’s sincerity when sharing the estimates.
Ben & Alexander Gordon-Brown are having an interesting conversation in the comments of the Compass Rose version of this post.
My first question was “Why the assumption that all deaths are as cheap to prevent as the marginal one?” which I see AGB has already raised there. I’ll be interested to see an answer.
That was my first question too, but I think I figured out the answer? Maybe? (Let me know if I got this right, BenHoffman?)
BenHoffman’s central claim is not that people aren’t suffering preventable diseases. It is only that “drowning children” (a metaphor for people who can be saved with a few thousand dollars) are rare.
So they’re asking: if the current price of saving a life is so low, and the amount of available funding so high, why hasn’t all that low-hanging fruit of saving “drowning children” been funded already? And if it has been, shouldn’t the marginal price be higher by now?
And the answer supposedly can’t be “there are simply too many low-hanging fruit, too many drowning children,” because if you assume that all the low-hanging fruit are related to communicable, maternal, neonatal, and nutritional diseases, there are at most ten million fruit per year, low-hanging or not. The most generous assumption for the position “there are just too many low-hanging fruit for us to pick them all, and that’s why the price remains low” is that all of the fruit are low-hanging, which is why it makes sense to price them all at the marginal cost. The claim is that if you were truly buying up all the cheap opportunities to save lives, and your budget were that large, the marginal price should have gone up by now, because you would already have bought up all the cheap life-saving methods.
(I’m just exploring the thought process behind this particular subsection of the analysis; this should not be taken as agreement with the overall argument, in whole or in part.)
Also some interesting discussion on the LessWrong version.
Also also, I just want to register the observation that this post seems like further evidence for my continuing claim that votes on LW/EAF/AF are boos/yays: at the time of this writing, the score here is 0 with 17 votes, while on LW it’s 36 with 24 votes. I don’t want to detract from the direct discussion of the topic, but I find that discrepancy very interesting, and clearer evidence than we’ve seen in the past that voting patterns are a poor signal of post quality.
My takeaway is that the EA forum’s voting is better than LessWrong’s.
What do you mean by “better” here? That there is a discrepancy suggests to me that people are voting for different reasons between the two places, not that the voting is better in some universal way (compare the way “better” in economics could mean redistribution to things you like or more efficiency so everyone gets more of what they want).
Also, just further noting voting patterns, and no disrespect intended to you, kbog: your comment contains little content (in a very straightforward sense: it is short) and is purely a statement of opinion with no justification provided (though some is implied), yet at the time of writing it has 6 votes for 14 karma. Relative to what I see on average comments on the EAF, where more thorough comments receive less karma and less attention, this suggests to me that you hit an applause light, and that people are upvoting it for that reason rather than anything else.
None of this is to say people can’t vote the way they like, or that you don’t deserve the karma. I merely seek to highlight how people seem to use voting today. The way people use voting is not aligned with how I would like voting to be used, which is why I mention these things and am interested in them, but it is also not up to me to shape this particular mechanism.
I think people use upvotes both to signal agreement and to highlight thoughtful, effortful, or detailed comments. I think it’s fairly clear that kbog’s comment was upvoted because people agreed with it, not because people thought it was a particularly insightful comment. That doesn’t preclude people upvoting posts for being high quality.
If your point is more that people don’t generally upvote quality posts that they disagree with, then I would probably agree with that.
My (small) update is also this, except confined to posts criticizing EA.
Most of the comments in the EA Forum are pointing out serious factual errors in the post (or linking to such explanations). The LW comments are more positive. The simpler explanation to me is that the issues with the post were hard to find, and unsurprisingly people on the EA Forum are better at finding them because they have thought more about EA.
I think we lack clear evidence to conclude that, though. I can just as easily believe the story, given what we’ve seen, that EAF users are more likely to downvote anything criticizing EA (just as LW users are more likely to downvote anything that goes against the standard interpretation of LW rationality). I’d be very interested to know if there are posts that criticize something about EA as cogently as this post does and don’t receive large numbers of downvotes.
Also, don’t forget that many posts with pro-EA conclusions are about as well reasoned as what we see here, yet receive overwhelmingly positive votes, even when they receive criticism in the comments. So the question remains: why downvote this post when we respond to it, but not downvote other posts when we criticize them?
Halstead’s criticism of ACE seems like one example.
My article criticizing the EA Funds last year was both more cogent than this post and the recipient of a much greater number of upvotes than Ben’s has here. I do in fact think this post is receiving downvotes because of its factual errors. Yet neither is this entirely separate from the issue of people downvoting the post simply because they don’t like it as a criticism of EA. That people don’t like the post is confounded by the fact that the reason they don’t like it could be that they think it’s very erroneous.
Another ex-GiveWell employee’s post criticizing GiveWell and the EA community was recently highly upvoted. See also Ben’s old post Effective Altruism is Self-Recommending, which is currently at +30 (a solid amount given that it was posted on the old forum, where karma totals were much lower).
I think the reason this post is at near-0 karma is that it is objectively wrong in multiple ways, and is of negative value. I would say this is clear if you engage with the comments here, the comments on Ben’s blog, and Jeff Kaufman’s reply.
I actually interpret the voting on this post to be too positive. I think it is because EAs tend to be wary of downvoting criticisms that might be good. Ben’s previous reputation for worthwhile criticism seems to be protecting him to a certain extent.
(views my own)
To add to Ben’s example, one of the most upvoted posts of all time was critical of the discrepancy between the message that working at an EA org is a promising career path and the fact that it’s extremely hard to get a job at an EA org. There was probably an element of people empathizing with the story, but I still think it criticized something about EA in a cogent way.
FWIW, I think the EA community is unusually good at engaging with critical commentary and updating accordingly.
It looks like you meant to publish this post using the Markdown editor, but that you were in the WYSIWYG editor when you wrote it. You can switch editors in the “Edit Account” settings.
--
Much of what I wanted to say in response to this post was said by Alexander Gordon-Brown in this comment section, so I’ll skip it. A couple of other notes:
(1) Good Ventures (through its funding of GiveWell charities) doesn’t just aim to avert deaths; it also tries to reduce poverty and fight non-lethal but debilitating instances of disease. Even people who are pro-deworming don’t claim that it saves many lives; instead, they argue that it helps many people live better lives than they would have otherwise.
Do you believe that Good Ventures overvalues “improving lives” compared to “saving lives”? I could also read your argument as “Good Ventures should be doing more of what they do already, spending money faster in the process to both save and improve lives”. Do either of those interpretations match your beliefs?
(2) Phrases like “marketing copy designed to control your behavior” seem wildly uncharitable, in a way that actually distorts reality.
GiveWell’s declining to fully fund recommended charities has been criticized before, and well. But while it seems plausible to me that they’ve chosen the wrong number by funding 50% of “non-must-fund” opportunities, I don’t think they’re deliberately lying about their beliefs or that they have some kind of sinister desire to “control” donors. Everything I’ve read by them in the last few years has been open advice to donors with particular values, with open acknowledgment that effectiveness numbers are estimates that don’t fully reflect reality (and public spreadsheets demonstrating disagreements between GiveWell staff on the best numbers to use). I don’t understand why they would deliberately lie—do you believe that they are unfairly biased in favor of certain charities? That they think all the charities they recommend are worse than they say, but still better than other options?
(Yes, some of this may be addressed in the blog series, but given the harshness of the criticism here, it seems fair to at least re-summarize some of your relevant earlier points.)
Jeff Kaufman wrote a short reply on his blog: https://www.jefftk.com/p/theres-lots-more-to-do
Ah, and there’s also an EA Forum version of Jeff’s post, which I missed on my initial pass.
I found this post interesting overall. I have a few thoughts on the argument as a whole, but want to focus on one thing in particular:
I don’t see this as an accurate summary of the reasons GiveWell outlined in the linked blogpost. The stated reason is that, in the long term, fully funding every strong giving opportunity they see would be counterproductive, because their behavior might influence other donors’ behavior:
Despite this, that year they recommended that Good Ventures fully fund the highest-value opportunities:
The post itself goes into much greater detail about these considerations.
This isn’t a coherent rationalization for reasons covered in tedious detail in the longer series.
It might be helpful if you linked to specific parts of the longer series which addressed this argument, or summarized the argument. Even if it would be good for people to read the entire thing it hardly seems like something we can expect as a precondition.
Whether you think it’s a rationalization or not, the claim in the OP is misleading at best. It sounds like you’re paraphrasing them as saying that they don’t recommend that Good Ventures fully fund their charities because this is an unfair way to save lives. GiveWell says nothing of the sort in the very link you use to back up your claim. The reason you assign to them instead, that they think this would be unfair, is absurd and isn’t backed up by anything in the OP.
Isn’t there option 3?
Option 3: a funder reasons that by partially filling a funding gap, it will draw in money from other funders that would counterfactually not go towards the cause area. By drawing in money from other sources, the funder leverages their grant-making spend.
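To make the leverage idea concrete, here’s a toy illustration; every number below is invented for the example, not an estimate for any real funder:

```python
# Toy illustration of grant-making leverage via partially filling a funding gap.
# All figures are invented placeholders.
funding_gap = 10_000_000   # total gap for the cause area, USD
own_grant = 5_000_000      # the funder fills only half the gap
drawn_in = 3_000_000       # other donors' money that counterfactually
                           # would not have gone to this cause

total_moved = own_grant + drawn_in
remaining_gap = funding_gap - total_moved
leverage = total_moved / own_grant
print(f"${total_moved:,} moved to the cause (leverage {leverage:.1f}x); "
      f"${remaining_gap:,} of the gap left for others")
```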
Apologies if you’ve already addressed this somewhere; I haven’t read your full series on the topic.
The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case GiveWell’s on record as not wanting their cost per life saved numbers to be held to the standard of “literally true” (one side of that disjunction) so I don’t see the point in going through that whole argument again.
For reference, the landing page for Ben’s series.
Just to be clear, saving lives for several hundred thousand dollars each would still be efficient enough to justify donating most of one’s disposable income. The rhetorical force of the drowning child argument is useful for philosophy classrooms and public media where you have to prod people who are otherwise disappointingly selfish, but I don’t think many of us are going to rely on that as a rigorous basis for why we do what we do.
I would interpret your post as merely objecting that EA organizations are misrepresenting things in order to foster more aid for the otherwise-good goal of helping people in severe poverty. But the idea that we actually aren’t obligated to donate just because the cost per life saved is $100,000 instead of $5,000 is ridiculous.
Does everyone who holds a moral anti-realist view think that they aren’t obliged to donate at $100k per life, or $5k per life?
Maybe you’re just claiming that moral anti-realism is ridiculous?
[Edit: some moral anti-realist views probably preserve the concept of moral obligation, though many don’t. So saying that all anti-realists aren’t moved by obligation is too strong.]
I don’t think realism / anti-realism has much to do with it; it doesn’t necessarily change the actual content of ethics.
And if anti-realism is true, then I’m definitely not going to change my views just because of what philosophers think.
It’s more a question of what meta-ethical view you hold personally, rather than what philosophers think.
If you hold an anti-realist view such that you think the concept of moral obligation is incoherent, you won’t feel morally obligated to do things.
Then you sure aren’t obligated to do accurate marketing, or anything else. That kind of nihilism just blows everything up. It’s a bit like saying “I’m just a Boltzmann brain, therefore drowning kids don’t exist.”
He is claiming that the following idea is ridiculous: that the cost of saving a life being $100k instead of $5k is a sufficient condition to logically conclude that one is not obliged to save a life, given the assumptions that one would otherwise be obliged to save it and that one believes in obligations in the first place.
More than that, I’m saying we’re simply obligated to save lives for $100k each. Assuming that we are first-worlders with spare money, of course.
Related, by Wei Dai: How should large donors coordinate with small donors?
I think examining the number of low-hanging fruit is important. I’m not yet sure if this analysis is correct, but I too would like to know exactly how many low-hanging fruit there are, exactly how low-hanging they are, and whether this information is consistent with EA orgs’ actions. If your analysis is correct, people should put more energy into expanding cause areas beyond health stuff.
I think it might be nice if someone attempted a per-intervention spreadsheet or graph estimating how much more expensive the next marginal life saved / QALY / disease prevented would get with each additional dollar spent, while assuming that currently existing organizations can successfully scale, or that new organizations can be formed to handle the issue. (So, something like “room for more funding,” but focused on the scale of the problem rather than the scale of the organization that deals with it.) Has someone already done so? I know plenty of people have looked at problem scales in general, but I haven’t seen much on predicting how marginal costs change as we progress along those scales. A minimal sketch of what I have in mind is below.
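For instance, a toy version with entirely invented numbers, where each bracket pairs a number of additional lives saveable with the cost per life in that bracket:

```python
# Toy sketch of a rising marginal-cost curve for a single intervention.
# The brackets are invented placeholders, not estimates for any real program.
brackets = [
    (100_000, 3_000),    # cheapest opportunities first
    (200_000, 8_000),
    (300_000, 25_000),
    (400_000, 100_000),  # hardest-to-reach cases last
]

cumulative_spend = 0
cumulative_lives = 0
for lives, cost_per_life in brackets:
    cumulative_spend += lives * cost_per_life
    cumulative_lives += lives
    print(f"{cumulative_lives:>9,} lives saved after ${cumulative_spend:>14,} "
          f"spent (marginal cost ${cost_per_life:,}/life)")
```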
Okay, that said: this last paragraph was in the original post but not the cross-post
I think there’s potentially a much deeper problem with this statement, one which goes beyond anything in the impact analysis. Even if one forgets all moral philosophy, disregards all practical analyses, and uses nothing but concrete practical personal experience and a gut sense of right and wrong to guide one’s behavior...well, for me at least, living frugally to conserve scarce resources for others still seems like a correct thing to do?
I know people who live in poverty, personally—both in the “below the American poverty line” sense (I guess I’m technically below that line myself in a grad student sort of way, but I know people who are rather more permanently under it), and in the “global poor” sense. Even by blood alone, I’m only two generations removed from people who have temporarily experienced global poverty of the <$2/day magnitude. So for me at least, it remains obvious on a personal face-to-face level that among humans the global poor are the ones who can make best personal use of scarce resources. I imagine there are people whose social circles don’t include people in local or global poverty, but that’s not an immutable fact of life—one can change that, if one thinks social circles are essential ingredients to making impact.
I don’t really agree with the framing of “Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits” as something obviously distinct from helping the global poor. I don’t feel like I or my loved ones could never experience global poverty. I feel like I’m part of a community and friendly with people who might directly experience or interact with global poverty. And if being a low-info donor doesn’t help...are there not things one can do to become a “high-info donor”, or a direct worker for that matter?
I think that if I believed similarly to you (if I understand correctly, you think that abstractions are misleading, that face-to-face community building and support of loved ones and people you actually know is the important thing here, that it’s important to build your own models of the world rather than trusting more knowledgeable people to do impact evaluations for you, and that it’s really hard to overcome deceptive marketing practices by donation seekers), then rather than claiming that there is no imperative to live frugally and engage with global poverty, I’d advocate that more EAs set some time aside to get hands-on, face-to-face involvement with the people who generate impact evaluations (or at least actually read the impact evaluations), that donors spend more time meeting people who do direct work, and that both donors and direct workers spend more time interacting with the supposed direct beneficiaries of their work. That seems really different from saying that the “utilitarian imperative” is wrong. (And maybe you do advocate all these other things as well; I don’t mean to imply you don’t. But why advocate for just staying within yourself and your circle?)
If there’s a lot of misinformation and misleading going on, I do think there are ways to get around that: acting to put oneself in more situations where one has opportunities for direct experience and for building one’s own models of the world. Going straight to the idea that you should just take care of yourself and the people you currently know seems...a bit like giving up? And even if you don’t think a global scope is appropriate, is there not enough poverty within your immediate community and social circle that there remains an urgency to be frugal and use resources to help others?
I just don’t see how your analysis, even if totally correct, leads to the conclusion that the imperative to frugality and redistribution is destroyed. I mean, as long as we’re calling it “living like a monk”: at least some of the actual monks did it for exactly that purpose, in the absence of any explicit utilitarianism, with the people they tried to help encountered largely on a face-to-face basis. It’s not an idea that rests particularly heavily on EA foundations or impact evaluations.
(I don’t want to be construed as defending frugality in particular. I’m just claiming that the general ethos of redirecting resources to people who may need them more, and the personal frugality that is sometimes motivated by that ethos, are positive... and that their foundations do not rely on trusting GiveWell, effective altruism, and so on.)
Ben, curious for your thoughts on the “other reasons” Jeff gives in this comment.