where you can dilute the philosophy more and more, and as you do so, EA becomes “contentless” in that it becomes closer to just “fund cool stuff no one else is really doing.”
Makes sense. It just seems to me that the diluted version still implies interesting & important things.
Or from the other direction, I think it’s possible to move in the direction of taking utilitarianism more seriously, without having to accept all of the most wacky implications.
So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but on, say, rights or duties (just to throw out an example), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions.
I agree something like trying to maximise might be at the core of the issue (where utilitarianism is just one ethical theory that’s into maximising).
However, I don’t think it’s easy to avoid by switching to a rights- or duties-based theory. Philosophers focused on rights still think that if you can save 10 lives with little cost to yourself, that’s a good thing to do. And that if you can save 100 lives with the same cost, that’s an even better thing to do. A theory that said all that matters ethically is not violating rights would be really weird.
Or another example is that all theories of population ethics seem to have unpleasant conclusions, even the non-totalising ones.
If one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
I don’t see why it implies nihilism. I think it shows that moral philosophy is hard, so we should moderate our views and consider a variety of perspectives, rather than bet everything on a single theory like utilitarianism.