Thanks Nathan, I’ll try to keep my replies brief here and address the critical points of your questions.
Am I right that this was your main point?
EA is doing well, but only because it ignores the fundamental conclusions of utilitarianism?
I wouldn’t phrase it like this. I think EA has been a positive force in the world so far, particularly in some of the weirder causes I care about (e.g., AI safety, stimulating the blogosphere, etc). But I think it’s often good practices chasing bad philosophy, and then my further suggestion is that the best thing to do is dilute that bad philosophy out of EA as much as possible (which I point out is already a trend I see happening now).
I don’t understand how utilitarianism is uniquely poisonous here. Do you write articles calling all these other worldviews poisonous? If not, why not? It’s not like they aren’t powerful. Your criticisms are either unfair or universal.
This is why I make the metaphor to arbitrage (e.g., pointing out that arbitrage is how SBF made all his money and using the term “utilitarian arbitrage”). Even if it were true that one can find repugnant conclusions from any notion of morality whatsoever (I’m not sure how one would prove this), there would still be greater and lesser degrees of repugnance, and differences in how easily each is arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally: a bunch of slums stuffed with lives barely worth living. This is because utilitarianism, as I tried to explain in the piece, is based on treating morality like a market and performing arbitrage. So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but on, say, rights or duties (just to throw out examples), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions. That is, just because you can identify cases of repugnancy doesn’t mean they are equivalent, as one philosophy might lead very naturally to repugnancies (as I think utilitarianism does), whereas another might require incredibly specific states of the world (e.g., an axe murderer in your house). Even if two philosophies fail in dealing with specific cases of serial killers, there’s a really big difference with the one that encourages you to be the serial killer if you can get away with it.
Also, if one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
Only total utilitarians (if I’m getting that right) face the repugnant conclusion
From the text it should be pretty clear I disagree with this, as I give multiple examples of repugnancy that are not Parfit’s classic “the repugnant conclusion”—and I also say that adding in epicycles by expanding beyond what you’re calling “total utilitarianism” often just shifts where the repugnancy is, or trades one for another.
Again, and this is where you miss my main point, what EAs do in practice matters. You act as if no one in EA has seen the problems you raise and that we avoid them by mere accident.
I don’t recall saying that no one in EA is aware of these problems (indeed, one of my latter points implies that they absolutely are), nor that EA avoids them by mere accident. I said explicitly that it avoids them by diluting the philosophy with more and more epicycles to make it palatable. E.g., “Therefore, the effective altruist movement has to come up with extra tacked-on axioms that explain why becoming a cut-throat sociopathic business leader who is constantly screwing over his employees, making their lives miserable, subjecting them to health violations, yet donates a lot of his income to charity, is actually bad. To make the movement palatable, you need extra rules that go beyond cold utilitarianism. . .”
What else would you have people do?
You suggest that EAs will either drink the poison and behave badly
Or dilute the poison and fail to take their beliefs seriously
The latter.
I’ve gone on too long here after saying in the initial post I’d try to keep my replies to a minimum. Feel free to reply, but this will be my last response.
Hi Benjamin, thanks for your thoughts and replies. Some thoughts in return:
I think it should literally be thought of as dilution, where you can dilute the philosophy more and more, and as you do so, EA becomes “contentless” in that it becomes closer to just “fund cool stuff no one else is really doing.”
Even the high-level “key values” that you link to imply a lot of utilitarianism to me, e.g., moral obligations like “it’s important to consider many different ways to help and seek to find the best ones,” some calculation of utility like “it’s vital to attempt to use numbers to roughly weigh how much different actions help,” as well as a call to “impartial altruism” that’s pretty much just saying to sum up people (what is one adding up in that case? I imagine something pretty close to hedonic utility).
My abridged response buried within a different comment thread: