This is an argument against EA that recently occurred to me, and I’d like to know if there’s an existing reply to it in the philosophical literature. And please, this isn’t an invitation for you to give your personal opinion of it. I’m looking for informed opinions only, preferably with links to respected essays. If you’re Scott Alexander or someone similar, fine; otherwise keep your personal opinions to yourself.
Anyway, a core assumption of EA is that every life is equally valuable. That assumption seems to me open to several critiques. To take Peter Singer’s famous Drowning Child scenario, I can think of several reasons why I should care more about the drowning child next to me than about the starving Ugandan 8,000 miles away:
1) I’m 99+% confident that I can help the drowning child. There are many steps between me and the Ugandan, and each step increases the potential for failure. International transfers are risky, especially to third-world countries. The money could be stolen and redirected to a corrupt government official, in which case I’ve done the opposite of making the world better. For all I know the starving child doesn’t even exist, and is just a clever ploy by a corrupt NGO to get easy money from gullible westerners. You can argue that this is a superficial objection and that “effective” means donating in a way that avoids it. My response is that this is a fundamental problem that can’t be solved by appropriate institutional controls, at least not any better than existing institutions solve it (see #3).
2) If I’m in a first-world country, then the expected economic value of the child next to me is vastly higher than that of a child in Uganda. The simplest demonstration of that is to compare relative per-capita GDPs. In the US, the drowning child can be expected to someday contribute ~$70k per year to global wealth, while the starving Ugandan will only contribute ~$2k. That difference really matters. And for those who object to using economic arguments in a moral domain, I’ll point out that a) that’s exactly what utilitarianism is (price is just a utility function), and b) as Tyler Cowen argued in a recent book, economic and moral value can be arbitraged, because earning an extra $70k today means I can save an extra $70k worth of human life tomorrow.
3) As any good economist would say: solve for the equilibrium. What are the higher-order effects of your intervention? If the goal is to end hunger in Uganda, then that will require large capital flows. Those flows have to be managed by people and institutions in an environment where poverty is endemic and institutions are corrupt; bad actors will inevitably intervene. The local market will adapt both to be parasitic on your donations and to prevent reliable information from being reported back to the source donors. If you think you can prevent that, you then have to explain why you’re going to be better at doing so than the Ugandan government is; and if the government is better at it, then why isn’t it already doing it? And even if you solve all of that, what about the risk that you could be creating a completely dependent culture? I heard once (no idea if it’s true, but it sounds plausible) that once foreign aid to a country passes X% of its total GDP, the country actually gets poorer, because everyone smart and enterprising there starts engaging in zero-sum competitions for aid money. Basically, at some level I feel like EA interventions have to answer the same arguments that are leveled against advocates of central planning and communism. “From each according to his ability, to each according to his need” sounds great but inevitably leads to corrupt apparatchiks and bread lines. I see no reason to expect “Save the life you can” wouldn’t end up in some version of the same.
Anyway, I’d love to know if these ideas have been rigorously considered before. Please link to any on-point references and again, no uninformed “this is what I think” rants unless you’re someone who’s actually engaged with the philosophical underpinnings of EA. Thanks.
The core issue here is that you’re failing to distinguish between intrinsic and instrumental value. The standard view is that all lives have equal intrinsic value. But obviously they can differ in instrumental value.
For further explanation, see this comment, along with the utilitarianism.net page on instrumental favoritism.
Thanks for the reply!
Are you saying that the notion of intrinsic value is central to the philosophical underpinnings of EA? And would you say that, absent it, my objections are correct?
I think that my objections are sound even if you accept that life has intrinsic value. Robin Hanson and Tyler Cowen have made similar arguments, so I’m not claiming to be original or anything, but if you can save 1 life for $X today, or you can invest that money and in 10 years have $3X with which to save 3 lives, then isn’t investing the utility-maximizing thing to do? Future lives don’t have any less intrinsic value than present lives do (cf. Singer’s argument that lives 8,000 miles away don’t have any less value than lives next door). The point being that if you care about human flourishing, however defined, then you have to grapple with the notion that the single greatest contributor to human flourishing is economic growth. Any charitable intervention must be judged against the value of directing that resource towards economic growth. Since economic growth is exponential, any life saved today will come at the expense of an exponentially larger set of lives later. It seems to me that any philosophically rigorous EA advocate needs to have a robust response to that issue. Has this been addressed by anyone?
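To put the arithmetic in symbols (a toy formulation; $c$ and $g$ are illustrative placeholders, not estimates): let $c$ be the cost of saving a life today and $g$ the annual rate of return. Then

$$\text{lives saved now} = \frac{X}{c}, \qquad \text{lives saved after } t \text{ years} = \frac{X(1+g)^t}{c} = (1+g)^t \cdot \frac{X}{c}$$

An annual return of $g \approx 11.6\%$ gives $(1+g)^{10} \approx 3$, which recovers the 3-lives figure above. Note the buried assumption that the cost per life $c$ stays fixed over time.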
They’re not “objections”, because you’ve misunderstood your target. EA is perfectly compatible with judging that it’s better to give later. That’s an open empirical question. But yes, lots has been written on it. See, e.g., Julia Wise’s Giving now vs later: a summary (and the many links contained therein).
I’d appreciate a reply to my comment, even if it’s just to tell me that intrinsic value is inherent to the EA philosophy.
>They’re not “objections”, because you’ve misunderstood your target
Then please, explain what I’ve misunderstood.
Thanks for the link, but most of the links included therein were either broken or argued for exactly my point. For example, the linked SSC essay concludes with “unless you think the world is more than 70% certain to end before you die, saving like Robin suggests is the best option,” meaning that it’s smarter to invest than to donate. Do you have a better source or argument to present?
Also I’d appreciate it if you could respond to my previous question about the dependence of the EA position on the notion of intrinsic value.
Rigorously evaluating interventions on questions like these is the entire purpose of global health EA.
For example, you can read the evidence for insecticide-treated bednets against malaria, and find links to extremely high-quality scientific studies showing that if you put some number X of bednets in a village, it leads to approximately Y fewer child deaths than if you didn’t. I don’t see how any of your points would refute this.
If spending $X today saves Y lives, then why isn’t it better to invest that money so that in 10 years you’ll have $2X, which could save 2Y lives? Capital grows exponentially but the value of human life does not, unless you have a principled reason to think that future life is less valuable than current life.
The cost-effectiveness of interventions doesn’t necessarily stay fixed over time. We would expect it to get more expensive to save a life over time, as the lowest-hanging fruit should get picked first.
(To be clear, I’m not saying it’s definitely better to donate now rather than invest and donate later; the changing cost-effectiveness of interventions is just one thing that needs to be taken into account.)
Sure, but first-world markets grow faster than third-world economies, so deferred donation looks even better once you take this into account.
Assuming that first claim is true, I’m not sure it follows that deferred donation looks even better. You’d still need to know about the marginal cost-effectiveness of the best interventions, which won’t necessarily change at the same rate as the wider economy.
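A toy calculation makes the dependence on both rates concrete. All numbers here are made up for illustration (the 7% market return, the $5,000 cost per life, the cost-growth rates), not estimates of any real intervention:

```python
# Toy model: lives saved by donating now vs. investing and donating later,
# when the cost per life saved also drifts upward over time.
# All parameters are illustrative placeholders, not real-world estimates.

def lives_saved(donation, cost_now, market_growth, cost_growth, years):
    """Invest `donation` for `years`, then donate, assuming the cost per
    life saved inflates at `cost_growth` per year."""
    future_donation = donation * (1 + market_growth) ** years
    future_cost = cost_now * (1 + cost_growth) ** years
    return future_donation / future_cost

donation = 10_000   # dollars available today
cost_now = 5_000    # assumed cost per life saved today
years = 10

now = lives_saved(donation, cost_now, 0.0, 0.0, 0)  # donate immediately
for cost_growth in (0.0, 0.04, 0.07, 0.10):
    later = lives_saved(donation, cost_now, 0.07, cost_growth, years)
    print(f"cost growth {cost_growth:.0%}: now={now:.2f} lives, later={later:.2f} lives")
```

With a 7% market return, waiting wins only while cost-per-life inflation stays below the market rate; at equal rates the two options tie, and above it donating now wins. So the investment growth rate alone doesn’t settle the question.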
Points (1) and (3) relate to the value of the intervention rather than the value of the life of the beneficiary. If the intervention is less likely to work, or is likely to cause negative higher-order outcomes, then we should take that into account in any cost-effectiveness analysis. I think EA is very good at reviewing issues relating to point (1). Addressing point (3) is much trickier, but there is definitely some work out there looking at higher-order effects.
Point (2) relates to the difference between intrinsic and instrumental value (as previously noted by Richard). From a utilitarian perspective, it seems accurate that economic productivity is an instrumental reason for favouring saving lives in wealthier countries.
However, this is not the only consideration when deciding where to donate. Firstly, it is typically much more expensive to save a life in a wealthy country, precisely because it is a wealthy country with relatively well-funded healthcare. Secondly, there are consequences beyond economic productivity. For example, people in wealthier countries may be more likely to regularly eat factory-farmed animals and contribute to climate change (on the other hand, because they are in a wealthier country with more resources, perhaps they are more likely to help solve these issues while also contributing to them).
>Secondly, there are consequences beyond economic productivity
Agreed, but if you consider these types of effects then it’s obvious to me that donating to a third-world country is worse. I mean, just look at the two cultures: the US is objectively better than, e.g., Uganda. The average Ugandan is much more likely to engage in behaviors far worse than eating factory-farmed meat. It’s also virtually impossible that they would ever contribute to scientific or technological development. When a reasonable person looks at the US, looks at Uganda, and asks “which of these two things do I want more of,” the answer will be the US every time. This is the kind of analysis that I would expect the EA community to embrace. Their whole purpose is “making the world better through rational analysis.” In what possible way are you making the world better by diverting resources from a good culture to a bad one? Seriously, how do you justify that? Without positing some quasi-religious intrinsic value (which, for the record, I reject), I just don’t see how you can get there.
I’m no longer going to engage with you because this comes across as being deliberately offensive and provocative.
Ok, then you’ve caused me to update my priors in the direction of “EA is an intellectually shallow, pseudo-religious, irrational cult”. My comment is 100% sincere and, I think, well posed. If your only response to it is to attack my motives, then I think that reflects very poorly on both you and your ideology.