You appear to have missed the central point of the essay, which is strange because it’s repeated over and over. I point out that utilitarians must either dilute or swallow the poison of repugnant conclusions: dilution of the repugnancy works, but the cost is making EA weaker and more vapid, turning it into a toothless philosophy like “do good in the world.” Instead of grappling with this criticism, you’ve transformed the thesis into a series of random unconnected claims, some of which don’t even represent my views. Now, I won’t address your defensive scolding of me personally, as I don’t want to engage with something so childish here. I’d rather see some actual grappling with the thesis, or at least with its more interesting parts, like whether there are qualitative, in addition to quantitative, moral differences, but here are my responses to the set of mostly uninteresting claims you’ve ascribed to me instead of dealing with the thesis.
Implicit claim: EAs are mostly utilitarians
IIRC about 70% of EAs are consequentialists, but I don’t think most will respond to your claims as you do. I think your claims are largely straw men.
Already addressed in the text: “I don’t think that there’s any argument effective altruism isn’t an outgrowth of utilitarianism—e.g., one of its most prominent members is Peter Singer, who kickstarted the movement in its early years with TED talks and books, and the leaders of the movement, like William MacAskill, readily refer back to Singer’s “Famine, Affluence, and Morality” article as their moment of coming to.”
You’ll have to explain to me how so many EA leaders readily reference utilitarian philosophy, or refer to utility calculations as the thing that makes EA special, or justify what counts as an effective intervention via utilitarian definitions, without anyone actually being a utilitarian. People can call themselves whatever they want, and I understand people wanting to divorce themselves from the repugnancies of utilitarianism, but so much in EA draws on a utilitarian toolbox, and all the origins are (often self-admittedly!) in utilitarian thought experiments.
Claim: Utilitarians would murder 1 person to save 5
a) No, they wouldn’t. They would know that they would be arrested and unable to help the many others in easier and less risky ways. Given how cheap it is to save a life, why would you risk prison?
b) No one in EA acts like this. No one in EA tells people to act like this. It’s a straw man.
(a) If you could get away with it, utilitarianism tells you it’s moral to do; you’re just saying “in no possible world could you get away with it,” which is both a way-too-strong claim and irrelevant, for the repugnancy is found in the fact that it is moral to do it and keep it a secret, if you can. As for (b): since harm is caused by inaction (at least according to many in EA), diverting the charity money from, say, the USA, where it will go less far and save only 1 life, to a third-world country, where it will save 5, is exactly this. You saying that “no one says to do that” seems to fly in the face of. . . what everyone is saying to do.
Claim: There is a slippery slope where you have to become a moral angel and give everything away.
I don’t say this anywhere I know of in the text.
Claim: “Is the joy of rich people hiking really worth the equivalent of all the lives that could be stuffed into that land if it were converted to high-yield automated hydroponic farms and sprawling apartment complexes?”
If there were a billion Americans, the US would be less dense than the UK. It would be far less dense than England. There isn’t a tradeoff here.
You’re missing the point of this part, which is that the utilitarian arbitrage necessarily has to keep going. You just, what, stop at a billion? Why? Because it sounds good to you, Nathan Young? The moral thing to do is to keep going. That’s my point about utilitarian arbitrage leading very naturally to the repugnant conclusion. So this response seems to just not grok this part.
Claim: “Whereas I’m of the radical opinion that the poison means something is wrong to begin with.”
What is your alternative? Happy to read a blog about it.
“Before you criticize effective altruism, come up with something better than it” seems like a pretty high standard to me.
Claim: “Basically, just keep doing the cool shit you’ve been doing, which is relatively unjustifiable from any literal utilitarian standpoint, and keep ignoring all the obvious repugnancies taking your philosophy literally would get you into, but at the same time also keep giving your actions the epiphenomenal halo of utilitarian arbitrage, and people are going to keep joining the movement, and billionaires will keep donating, because frankly what you’re up to is just so much more interesting and fun and sci-fi than the boring stuff others are doing.”
It seems like you basically agree with EA recommendations in their entirety. It seems unfair that you’ve made up a bogey monster to criticise when you yourself acknowledge that in practice it’s going really well.
I’m clear in the essay that some of the things EA is known for, like AI safety, are justifiable through other philosophies, and that I agree with some of them. You’re right that the argument is focused on my in-principle disagreements, particularly that many will find the in-principle aspects repugnant, and that my recommendation is to instead dilute the philosophy and use utilitarian calculations as a fig leaf. Again, a more complicated thesis that you’re simply. . . not addressing in this breakdown of unconnected supposed claims.
If this is going to leave you with the opinion that EAs are all defensive, then tell me and I’ll edit it. I wrote it in haste. But in my defence, I did read the entirety of your blog and respond to the points I thought were important.
None of these points are very important to the argument, and the ones that are, like whether or not EA is an outgrowth of utilitarianism, seem pretty settled.
Just to note: the specific accusation of it being “unreasonable” and “clickbaity” relies entirely on there being a really strong difference in valence between the terms “poison” (my lay term for it) and “repugnancy” (the well-accepted academic term for it), and I just don’t think it’s the case that “this philosophy is poisonous” is an unreasonable stretch from “this philosophy is repugnant.” That may be a personal thing, but they seem within the same range of negative tone to me, and hence it also seems neither especially unreasonable nor clickbaity to lead with a more understandable analogy of the same valence and then explain it in the text. Clickbait would have been if I had titled it “Why do billionaires keep giving to a secretive poisonous philosophy?” not “Why I am not an effective altruist.”