You appear to have missed the central point of the essay, which is strange because it’s repeated over and over. I point out that utilitarians must either dilute or swallow the poison of repugnant conclusions—dilution of the repugnancy works, but the cost is making EA weaker and more vapid, turning it into a toothless philosophy like “do good in the world.” Instead of grappling with this criticism, you’ve transformed the thesis into a series of random, unconnected claims, some of which don’t even represent my views. Now, I won’t address your defensive scolding of me personally, as I don’t want to engage with something so childish here. I’d rather see some actual grappling with the thesis, or at least with its more interesting parts, like whether there are qualitative, in addition to quantitative, moral differences. But here are my responses to the set of mostly uninteresting claims you’ve ascribed to me instead of dealing with the thesis.
Implicit claim: EAs are mostly utilitarians
IIRC about 70% of EAs are consequentialists, but I don’t think most would respond to your claims the way you expect. I think your claims are largely straw men.
Already addressed in the text: “I don’t think that there’s any argument effective altruism isn’t an outgrowth of utilitarianism—e.g., one of its most prominent members is Peter Singer, who kickstarted the movement in its early years with TED talks and books, and the leaders of the movement, like William MacAskill, readily refer back to Singer’s “Famine, Affluence, and Morality” article as their moment of coming to.”
You’ll have to explain to me how so many EA leaders readily reference utilitarian philosophy, or refer to utility calculations being the thing that makes EA special, or justify what counts as an effective intervention via utilitarian definitions, without anyone actually being utilitarian. People can call themselves whatever they want, and I understand people wanting to divorce themselves from the repugnancies of utilitarianism, but so much in EA draws on a utilitarian toolbox and all the origins are (often self-admittedly!) in utilitarian thought experiments.
Claim: Utilitarians would murder 1 person to save 5
a) No, they wouldn’t. They would know that they would be arrested and unable to help the many others in easier and less risky ways. Given how cheap it is to save a life, why would you risk prison?
b) No one in EA acts like this. No one in EA tells people to act like this. It’s a straw man.
(a) If you could get away with it, utilitarianism tells you it’s moral to do; you’re just saying “in no possible world could you get away with it,” which is both a way-too-strong claim and also irrelevant, for the repugnancy is found in the fact that it is moral to do it and keep it a secret, if you can. As for (b), since harm is caused by inaction (at least according to many in EA), then diverting the charity money from, say, the USA, where it will go less far and only save 1 life, to a third-world country, where it will save 5, is exactly this. You saying that “no one says to do that” seems to fly in the face of. . . what everyone is saying to do.
Claim: There is a slippery slope where you have to become a moral angel and give everything away.
I don’t say this anywhere I know of in the text.
Claim: “Is the joy of rich people hiking really worth the equivalent of all the lives that could be stuffed into that land if it were converted to high-yield automated hydroponic farms and sprawling apartment complexes?”
If there were a billion Americans, the US would be less dense than the UK. It would be far less dense than England. There isn’t a tradeoff here.
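For reference, here is a rough back-of-envelope check of that density comparison; the land areas and populations are approximate and the exact figures vary by source:

```python
# Rough density comparison (approximate figures; exact values vary by source).
US_LAND_KM2 = 9_150_000       # approximate US land area
UK_AREA_KM2 = 242_000         # approximate UK area, ~67 million people
ENGLAND_AREA_KM2 = 130_000    # approximate England area, ~56 million people

us_density_at_1bn = 1_000_000_000 / US_LAND_KM2        # ~109 people per km^2
uk_density_today = 67_000_000 / UK_AREA_KM2            # ~277 people per km^2
england_density_today = 56_000_000 / ENGLAND_AREA_KM2  # ~431 people per km^2

print(round(us_density_at_1bn), round(uk_density_today), round(england_density_today))
```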
You’re missing the point of this part, which is that the utilitarian arbitrage necessarily has to keep going. You just, what, stop at a billion? Why? Because it sounds good to you, Nathan Young? The moral thing to do is keep going. That’s my point about utilitarian arbitrage leading very naturally to the repugnant conclusion. So this response just doesn’t seem to grok that part.
Claim: “Whereas I’m of the radical opinion that the poison means something is wrong to begin with.” What is your alternative? Happy to read a blog about it.
“Before you criticize effective altruism come up with something better than it” seems like a pretty high standard to me.
Claim: “Basically, just keep doing the cool shit you’ve been doing, which is relatively unjustifiable from any literal utilitarian standpoint, and keep ignoring all the obvious repugnancies taking your philosophy literally would get you into, but at the same time also keep giving your actions the epiphenomenal halo of utilitarian arbitrage, and people are going to keep joining the movement, and billionaires will keep donating, because frankly what you’re up to is just so much more interesting and fun and sci-fi than the boring stuff others are doing.”
It seems like you basically agree with EA recommendations in their entirety. It seems unfair that you’ve made up a bogey monster to criticise when you yourself acknowledge that in practice it’s going really well.
I’m clear that some of the things EA is known for, like AI safety, are justifiable through other philosophies, and that I agree with some of them. You’re right that the argument is focused on my in-principle disagreements, particularly that many will find the in-principle aspects repugnant, and my recommendation is instead to dilute them and use utilitarian calculations as a fig leaf. Again, a more complicated thesis that you’re simply. . . not addressing in this breakdown of unconnected supposed claims.
If this is going to leave you with the opinion that EAs are all defensive, then tell me and I’ll edit it. I wrote it in haste. But in my defence, I did read the entirety of your blog and respond to the points I thought were important.
None of these points were very important to the argument, and the ones that are, like whether or not EA is an outgrowth of utilitarianism, seem pretty settled.
You are right to criticise my tone. It wasn’t constructive and I’m sorry. I’m glad I wrote criticisms, but I wish I had written them in a more gracious way.
I won’t respond point by point, since, as you say, the points you are responding to aren’t your main points anyway.
I don’t think I understood your article initially.
Am I right that this was your main point?
EA is doing well, but only because it ignores the fundamental conclusions of utilitarianism?
If so, I have 3 main points:
All moral systems have their own “repugnant conclusions”
Never-lie deontology encourages you to tell an axe murderer that your friend is hiding in the house
Liberal tolerance can’t even be intolerant of Nazis
Maximum discounting means that Cleopatra should have had another biscuit even if it caused all of us to go extinct.
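As a rough illustration of that last example (assuming “maximum discounting” here means a strong pure rate of time preference; the 1% and 3% rates and the ~2,000-year gap are illustrative numbers only):

```python
# Sketch of how pure exponential time discounting treats the far future.
# Illustrative assumptions: ~2000 years between Cleopatra's era and today,
# and annual pure discount rates of 1% and 3%.

def future_weight(rate: float, years: int) -> float:
    """Weight a discounter gives to welfare `years` in the future."""
    return (1 + rate) ** -years

for rate in (0.01, 0.03):
    w = future_weight(rate, 2000)
    print(f"rate={rate:.0%}: weight on welfare 2000 years out ~ {w:.1e}")

# Prints roughly 2.3e-09 at 1% and 2.1e-26 at 3%. At rates like the latter,
# harms to everyone alive two millennia later are discounted so heavily that
# even a tiny immediate pleasure can come out ahead.
```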
I don’t understand how utilitarianism is uniquely poisonous here. Do you write articles calling all these other worldviews poisonous? If not, why not? It’s not like they aren’t powerful. Your criticisms are either unfair or universal.
EAs might be consequentialists, but that doesn’t mean they have to bite the bullets you describe here:
Only total utilitarians (if I’m getting that right) face the repugnant conclusion
Some kinds of consequentialists weigh illegal actions more heavily, so won’t be caught out by your surgeon example
If people amending utilitarianism to better fit their intuitions is somehow bad, then they are damned if they do and damned if they don’t. Do you believe this?
Again, and this is where you miss my main point, what EAs do in practice matters. You act as if no one in EA has seen the problems you raise and that we avoid them by mere accident. Maybe we have avoided the pitfalls you state because we saw them and chose to avoid them. You yourself acknowledge EA does a pretty good job. Maybe that’s deliberate.
Moral systems are what we make them. If utilitarianism has unintuitive consequences, we can think about why that is and then modify it. I think the real answer here is that EA is made up of a wider group of consequentialists than you think and that EAs take their consequentialism a little less seriously than you fear they might. What else would you have people do?
You suggest that EAs will either drink the poison and behave badly
Or dilute the poison and fail to take their beliefs seriously
It seems they get your judgement regardless. How will you be satisfied by the actions of EAs here?
As a final point, I’m pretty happy to defend any of the things I said originally. If any of them are particularly important to you, I will.
Thanks Nathan, I’ll try to keep my replies brief here and address the critical points of your questions.
Am I right that this was your main point?
EA is doing well, but only because it ignores the fundamental conclusions of utilitarianism?
I wouldn’t phrase it like this. I think EA has been a positive force in the world so far, particularly in some of the weirder causes I care about (e.g., AI safety, stimulating the blogosphere, etc.). But I think it’s often good practices chasing bad philosophy, and then my further suggestion is that the best thing to do is dilute that bad philosophy out of EA as much as possible (which I point out is already a trend I see happening now).
I don’t understand how utilitarianism is uniquely poisonous here. Do you write articles calling all these other worldviews poisonous? If not, why not? It’s not like they aren’t powerful. Your criticisms are either unfair or universal.
This is why I make the metaphor to arbitrage (e.g., pointing out that arbitrage is how SBF made all his money and using the term “utilitarian arbitrage”). Even if it were true that one can find repugnant conclusions from any notion of morality whatsoever (I’m not sure how one would prove this), there would still be greater and lesser degrees of repugnance, as well as differences in the ease with which they are arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally—a bunch of slums stuffed with lives barely worth living. This is because utilitarianism, as I tried to explain in the piece, is based on treating morality like a market and performing arbitrage. So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out examples), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions. That is, just because you can identify cases of repugnancy doesn’t mean they are equivalent, as one philosophy might lead very naturally to repugnancies (as I think utilitarianism does), whereas another might require incredibly specific states of the world (e.g., an axe murderer in your house). Even if two philosophies fail in dealing with specific cases of serial killers, there’s a really big difference when one of them encourages you to be the serial killer if you can get away with it.
Also, if one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
Only total utilitarians (if I’m getting that right) face the repugnant conclusion
From the text it should be pretty clear I disagree with this, as I give multiple examples of repugnancy that are not Parfit’s classic “the repugnant conclusion”—and I also say that adding in epicycles by expanding beyond what you’re calling “total utilitarianism” often just shifts where the repugnancy is, or trades one for another.
Again, and this is where you miss my main point, what EAs do in practice matters. You act as if no one in EA has seen the problems you raise and that we avoid them by mere accident.
I’m unaware of saying that no one in EA is aware of these problems (indeed, one of my latter points implies that they absolutely are), nor that EA avoids them by mere accident. I said explicitly that it avoids them by diluting the philosophy with more and more epicycles to make it palatable. E.g., “Therefore, the effective altruist movement has to come up with extra tacked-on axioms that explain why becoming a cut-throat sociopathic business leader who is constantly screwing over his employees, making their lives miserable, subjecting them to health violations, yet donates a lot of his income to charity, is actually bad. To make the movement palatable, you need extra rules that go beyond cold utilitarianism. . .”
What else would you have people do?
You suggest that EAs will either drink the poison and behave badly
Or dilute the poison and fail to take their beliefs seriously
The latter.
I’ve gone on too long here after saying in the initial post I’d try to keep my replies to a minimum. Feel free to reply, but this will be my last response.
To be honest, it did come off this way to me as well. The majority of the piece feels like an essay on why you think utilitarianism sucks, and this post itself frames it as a criticism of EA’s “utilitarian core”. I sort of remember the point about EA just being ordinary do-gooding once you strip this away as feeling like a side note, though I can reread it when I get a chance in case I missed something.
To address the point, though, I’m not sure it works either, and I feel like the rest of your piece undermines it. Lots of things EA focuses on, like animal welfare and AI safety, are weird or at least weird combinations, and so are plenty of its ways of thinking and approaching questions. These are consistent with utilitarianism, but they aren’t specifically tied to it; indeed, you seem drawn to some of them yourself, and no one is going to accuse you of being a utilitarian after reading this. I have to imagine that you yourself think something valuable and unique is left behind when you don’t just view EA as utilitarianism, and that this is at least partly behind your suggestion that we “dilute the poison” all the way out. If we have already “diluted the poison” out, I’m not sure what’s left to argue.
The point about how the founders of the movement have generally been utilitarians or utilitarian-sympathetic doesn’t strike me as enough to make your point either[1]. If you mean that the movement is utilitarian at its core in the sense that utilitarianism motivated many of its founders, this is a good point. If you mean that it has a utilitarian core in the sense that it is “poisoned” by the types of implications of utilitarianism you are worried about, this doesn’t seem enough to get you there. I also think it proves far too much to mention the influence of “Famine, Affluence, and Morality.” Non-utilitarian liberals regularly cite On Liberty, non-utilitarian vegans regularly cite Animal Liberation. Good moral philosophers generally don’t justify their points from first principles, but rather from the minimum premises necessary to get agreement on whatever specific point they’re arguing. These two senses just seem crucially different to me.
I also think it’s overstated. Singer is certainly a utilitarian, but MacAskill overtly does not identify as one, even though he is sympathetic to the theory and, I think, has plurality credence in it relative to other similarly specific theories; Ord, I believe, is the same; Bostrom overtly does not identify with it; Parfit moved around a bunch in his career, but by the time of EA I believe he was either a prioritarian or a “triple theorist,” as he called it; and Yudkowsky is a key example of yours, but from his other writing he seems like a pluralist consequentialist at most to me. It’s true that, as your piece points out, he defends pure aggregation, but so do tons of deontologists these days, because it turns out that when you get specific about your alternative, it becomes very hard not to be a pure aggregationist.