[Question] A utilitarian argument that not every life is equally valuable
This is an argument against EA that recently occurred to me, and I’d like to know if there’s an existing reply to it in the philosophical literature. And please, this isn’t an invitation to give your personal opinion of it. I’m looking for informed opinions only, preferably with links to respected essays. If you’re Scott Alexander or someone similar, fine; otherwise, keep your personal opinions to yourself.
Anyway, a core assumption of EA is that every life is equally valuable. That assumption seems open to several critiques. To take Peter Singer’s famous Drowning Child scenario, I can think of several reasons why I should care more about the drowning child next to me than about the starving Ugandan 8,000 miles away:
1) I’m 99+% confident that I can help the drowning child. There are many steps between me and the Ugandan, and each step adds a chance of failure; those chances compound multiplicatively (see the sketch after this list). International transfers are risky, especially to third-world countries. The money could be stolen and redirected to a corrupt government official, in which case I’ve done the opposite of making the world better. For all I know the starving child doesn’t even exist and is just a clever ploy by a corrupt NGO to extract easy money from gullible westerners. You can argue that this is a superficial objection and that “effective” means donating in a way that avoids it. My response is that this is a fundamental problem that can’t be solved by institutional controls, at least not any better than existing institutions already solve it (see #3).
2) If I’m in a first-world country, then the expected economic value of the child next to me is vastly higher than that of a child in Uganda. The simplest demonstration is to compare per-capita GDPs. In the US, the drowning child can be expected to someday contribute ~$70k per year to global wealth, while the starving Ugandan will only contribute ~$2k (see the back-of-envelope comparison after this list). That difference really matters. And for those who object to using economic arguments in a moral domain, I’ll point out that a) that’s exactly what utilitarianism is (price is just a utility function), and b) as Tyler Cowen argued in Stubborn Attachments, economic and moral value can be arbitraged, because earning an extra $70k today means I can save an extra $70k worth of human life tomorrow.
3) As any good economist would say: solve for the equilibrium. What are the higher-order effects of your intervention? If the goal is to end hunger in Uganda, that will require large capital flows. Those flows have to be managed by people and institutions in an environment where poverty is endemic and institutions are corrupt; bad actors will inevitably intervene. The local market will adapt both to be parasitic on your donations and to prevent reliable information from being reported back to the source donors. If you think you can prevent that, you then have to explain why you’re going to be better at it than the Ugandan government is, and if the government is better at it, then why aren’t they already doing it? And even if you solve all of that, what about the risk that you could be creating a completely dependent culture? I heard once (no idea if it’s true, but it sounds plausible) that once foreign aid to a country passes X% of its total GDP, that country actually gets poorer, because everyone smart and enterprising there starts engaging in zero-sum competition for aid money. Basically, at some level I feel like EA interventions have to answer the same arguments that are leveled against advocates of central planning and communism. “From each according to his ability, to each according to his need” sounds great but inevitably leads to corrupt apparatchiks and bread lines. I see no reason to expect “Save the life you can” wouldn’t end up as some version of the same.
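To make the compounding-failure point in (1) concrete, here’s a minimal sketch. The chain of intermediaries and the per-step success rates are pure assumptions for illustration, not measured failure rates of any actual charity:

```python
# Back-of-envelope sketch of the compounding-failure argument in (1).
# Every number here is an illustrative assumption, not a measured rate.

def delivery_probability(step_success_rates):
    """Probability a donation survives every intermediary step,
    assuming the steps fail independently."""
    p = 1.0
    for rate in step_success_rates:
        p *= rate
    return p

# Hypothetical chain: charity intake -> international transfer ->
# local partner -> last-mile distribution -> a real, starving child.
steps = [0.98, 0.95, 0.90, 0.90, 0.95]

p_remote = delivery_probability(steps)
p_local = 0.99  # pulling the child out of the pond myself

print(f"P(remote donation helps): {p_remote:.2f}")  # ~0.72
print(f"P(direct rescue helps):   {p_local:.2f}")   # 0.99
```

Even with fairly optimistic per-step rates, five independent steps pull the delivery probability well below the near-certainty of the direct rescue.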
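And here’s the expected-value arithmetic behind (2), treating the post’s ~$70k and ~$2k per-capita GDP figures as stand-ins for annual economic contribution. The 50-year horizon and 3% discount rate are assumptions I’m adding for illustration:

```python
# Rough version of the expected-economic-value comparison in (2).
# The $70k and $2k figures come from the post; the 50-year horizon
# and 3% discount rate are illustrative assumptions.

def discounted_lifetime_output(annual_output, years=50, discount_rate=0.03):
    """Present value of a constant annual economic contribution."""
    return sum(annual_output / (1 + discount_rate) ** t
               for t in range(1, years + 1))

us_child = discounted_lifetime_output(70_000)
ugandan_child = discounted_lifetime_output(2_000)

print(f"US child, PV of lifetime output:      ${us_child:,.0f}")       # ~$1.8M
print(f"Ugandan child, PV of lifetime output: ${ugandan_child:,.0f}")  # ~$51k
print(f"Ratio: {us_child / ugandan_child:.0f}x")                       # 35x
```

Note that with a constant discount rate the ratio is just 70k/2k = 35x regardless of horizon; the present-value figures only matter if you care about absolute magnitudes.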
Anyway, I’d love to know if these ideas have been rigorously considered before. Please link to any on-point references and again, no uninformed “this is what I think” rants unless you’re someone who’s actually engaged with the philosophical underpinnings of EA. Thanks.