Hi Benjamin, thanks for your thoughts and replies. Some thoughts in return:
In particular, we can impose limits on utilitarianism, but they’re arbitrary and make EA contentless. Does this seem like a reasonable summary?
I think it should literally be thought of as dilution, where you can dilute the philosophy more and more, and as you do so, EA becomes “contentless” in that it becomes closer to just “fund cool stuff no one else is really doing.”
However, it’s also different from utilitarianism since you can practice these values without saying maximising hedonic utility is the only thing that matters, or a moral obligation.
Even the high-level “key values” that you link to imply a lot of utilitarianism to me, e.g., moral obligations like “it’s important to consider many different ways to help and seek to find the best ones,” some calculation of utility like “it’s vital to attempt to use numbers to roughly weigh how much different actions help,” as well as a call to “impartial altruism” that’s pretty much just saying to sum people up (what is one adding up in that case? I imagine something pretty close to hedonic utility).
But I think all moral theories imply crazy things (“poison”) if taken to extremes
My abridged response buried within a different comment thread:
Even if it were true that one can find repugnant conclusions in any notion of morality whatsoever (I’m not sure how one would prove this), there would still be greater and lesser degrees of repugnance, and also differences in how easily they are arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally: a bunch of slums stuffed with lives barely worth living. This is because utilitarianism, as I tried to explain in the piece, is based on treating morality like a market and performing arbitrage. So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out an example), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions. . . Also, if one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
where you can dilute the philosophy more and more, and as you do so, EA becomes “contentless” in that it becomes closer to just “fund cool stuff no one else is really doing.”
Makes sense. It just seems to me that the diluted version still implies interesting & important things.
Or from the other direction, I think it’s possible to move in the direction of taking utilitarianism more seriously, without having to accept all of the most wacky implications.
So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out an example), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions
I agree something like trying to maximise might be at the core of the issue (where utilitarianism is just one ethical theory that’s into maximising).
However, I don’t think it’s easy to avoid by switching to a rights- or duties-based theory. Philosophers focused on rights still think that if you can save 10 lives with little cost to yourself, that’s a good thing to do. And that if you can save 100 lives with the same cost, that’s an even better thing to do. A theory that said the only thing that matters ethically is not violating rights would be really weird.
Or another example is that all theories of population ethics seem to have unpleasant conclusions, even the non-totalising ones.
If one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
I don’t see why it implies nihilism. I think it shows that moral philosophy is hard, so we should moderate our views and consider a variety of perspectives, rather than bet everything on a single theory like utilitarianism.