Great article. A minor point: you might be straw-manning/pigeonholing this article a little: https://www.vox.com/future-perfect/372519/charity-giving-effective-altruism-mutual-aid-homeless

They don’t just argue that we should help homeless people because some tradeoffs are difficult to make. They make a number of reasonable points, including citing Yudkowsky’s “fuzzies and utilons” as a potential reason to help homeless people, as well as appealing to a reasonable philosophical argument about integrity.
I’m happy for folks to read the article and judge for themselves. The author briefly references some reasonable ideas in the course of building up a fundamentally unreasonable thesis: that “The problem [with effective altruism] is that we’ve stretched optimization beyond its optimal limits,” and that sometimes donating to the local homeless over EA charities will better serve “the real value you hold dear [that is, helping people].”
They most clearly exhibit the fallacy I warn against (“some tradeoffs are unclear, therefore you might as well be an ineffective altruist”) in this passage criticizing attempted optimization:
In your case, you’re trying to optimize how much you help others, and you believe that means focusing on the neediest. But “neediest” according to what definition of needy? You could assume that financial need is the only type that counts, so you should focus first on lifting everyone out of extreme poverty, and only then help people in less dire straits. But are you sure that only the brute poverty level matters?
… if you want to optimize, you need to be able to run an apples-to-apples comparison — to calculate how much good different things do in a single currency, so you can pick the best option. But because helping people isn’t reducible to one thing — it’s lots of incommensurable things, and how to rank them depends on each person’s subjective philosophical assumptions — trying to optimize in this domain will mean you have to artificially simplify the problem. You have to pretend there’s no such thing as oranges, only apples.
I also think their discussion of integrity is fundamentally confused:
It sounds like that’s what you’re feeling when you pass a person experiencing homelessness and ignore them. Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person’s suffering — that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your “brain.” It’s not dumber or more irrational. It’s the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize!
It’s not apples and oranges. It’s just helping people you can see vs helping people who are out of sight, and so less emotionally engaging. Those shouldn’t be different values—as the author themselves says at the start, there’s just the one value of helping people, and different strategies for how to achieve that. What they don’t acknowledge is that the strategy of prioritizing more salient / emotionally-engaging people is less effective at helping, even if it’s more effective at indulging your emotional needs. Calling the emotional bias “integrity” is not philosophically helpful or illuminating. It’s muddled thinking, running cover for blatant bias.
I don’t entirely disagree with this argument of theirs that you quoted:
“The problem [with effective altruism] is that we’ve stretched optimization beyond its optimal limits,” and that sometimes donating to the local homeless over EA charities will better serve “the real value you hold dear [that is, helping people].”
I think sometimes helping the local homeless over EA charities can be a good idea: it connects us emotionally with suffering, maintains strong social cohesion, and sets a good example for those around us who might be interested in EA. This argument may be weak, but it isn’t as terrible as you make it out to be.
Also, I think that when their second argument is steelmanned, it’s not that bad either. It seems to me they are arguing that helping those directly around us can help us care more, giving us more energy and capacity to do good while we optimise abroad. I agree with them that directly helping people who suffer can help us care enough to actually optimise with the rest of our lives (altruism begets altruism). In that sense it can be “apples and oranges” in a way: not in that the people have different value, but in that the orange is protecting the part of us that cares deeply, which helps us care more for the apples, the people far away whom we could help more by optimising.
I also think there’s a reasonable argument that helping those we are in proximity to, at least to some extent (perhaps due to emotional bias), can show integrity to our wider goal of doing the most good. I don’t think this is necessarily “muddled thinking”:
It sounds like that’s what you’re feeling when you pass a person experiencing homelessness and ignore them. Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person’s suffering — that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your “brain.” It’s not dumber or more irrational. It’s the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize!
My overall point is that I think you are unreasonably harsh on some of the reasoning here, even if you disagree with it. Many articles I have read that criticise effective altruism are so bad that I don’t take them seriously, whereas this one makes some reasonable arguments, even if we might disagree with them.