Hi Erik,

I just wanted to leave a very quick comment (sorry I’m not able to engage more deeply).
I think yours is an interesting line of criticism, since it tries to get to the heart of what EA actually is.
My understanding of your criticism is that EA attempts to find an interesting middle ground between full utilitarianism and regular sensible do-gooding, whereas you claim there isn’t one. In particular, we can impose limits on utilitarianism, but they’re arbitrary and make EA contentless. Does this seem like a reasonable summary?
I think the best argument that an interesting middle ground exists is the fact that EAs in practice have come up with ways of doing good that aren’t standard (e.g. at best only a couple of percent of US philanthropy is spent on evidence-backed global health, and << 1% on ending factory farming + AI safety + ending pandemics).
More theoretically, I see EA as being about something like “maximising global wellbeing while respecting other values”. This is different from regular sensible do-gooding in being more impartial, more wellbeing-focused and more focused on finding the very best ways to contribute (rather than the merely good). I think another way EA is different is in being more skeptical, more open to weird ideas, and trying harder to take a Bayesian, science-aligned approach to finding better ways to help. (Cf. the key values of EA.)
However, it’s also different from utilitarianism since you can practice these values without saying maximising hedonic utility is the only thing that matters, or a moral obligation.
(Another way to understand EA is the claim that we should pay more attention to consequences, given the current state of the world, but not that only consequences matter.)
You could respond that there’s arbitrariness in how to adjudicate conflicts between maximising wellbeing and other values. I basically agree.
But I think all moral theories imply crazy things (“poison”) if taken to extremes (e.g. a deontologist refusing to lie to the axe murderer; deep ecologists who think we should end humanity to preserve the environment; people who hold the person-affecting view in population ethics and say there’s nothing bad about creating a being whose life is only suffering).
So imposing some arbitrary cut-offs on your moral views is unavoidable. The best we can do is think hard about the trade-offs between different useful moral positions, and try to come up with an overall course of action that’s non-terrible on balance.
Hi Benjamin, thanks for your thoughts and replies. Some thoughts in return:
In particular, we can impose limits on utilitarianism, but they’re arbitrary and make EA contentless. Does this seem like a reasonable summary?
I think it should literally be thought of as dilution, where you can dilute the philosophy more and more, and as you do so, EA becomes “contentless” in that it becomes closer to just “fund cool stuff no one else is really doing.”
However, it’s also different from utilitarianism since you can practice these values without saying maximising hedonic utility is the only thing that matters, or a moral obligation.
Even the high-level “key values” that you link to imply a lot of utilitarianism to me, e.g., moral obligations like “it’s important to consider many different ways to help and seek to find the best ones,” some calculation of utility like “it’s vital to attempt to use numbers to roughly weigh how much different actions help,” as well as a call to “impartial altruism” that’s pretty much just saying to sum up people (what is one adding up in that case? I imagine something pretty close to hedonic utility).
But I think all moral theories imply crazy things (“poison”) if taken to extremes
My abridged response buried within a different comment thread:
Even if it were true that one can find repugnant conclusions in any notion of morality whatsoever (I’m not sure how one would prove this), there would still be greater and lesser degrees of repugnance, and differences in how easily they are arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally: a bunch of slums stuffed with lives barely worth living. This is because utilitarianism, as I tried to explain in the piece, is based on treating morality like a market and performing arbitrage. So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out an example), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions. . . Also, if one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
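To make the arbitrage dynamic concrete, here is a minimal worked comparison under total utilitarianism; the population sizes and wellbeing levels are purely illustrative numbers, not figures from this exchange:

\[
\underbrace{10^{9} \times 100}_{\text{A: 1 billion very good lives}} \;=\; 10^{11}
\;<\;
10^{12} \;=\; \underbrace{10^{12} \times 1}_{\text{Z: 1 trillion lives barely worth living}}
\]

A pure total-utility maximizer has to rank Z above A, and because the same trade can always be repeated, the theory is pushed step by step toward Parfit’s repugnant conclusion; theories that don’t aggregate and maximize in this way face no such pressure.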
where you can dilute the philosophy more and more, and as you do so, EA becomes “contentless” in that it becomes closer to just “fund cool stuff no one else is really doing.”
Makes sense. It just seems to me that the diluted version still implies interesting & important things.
Or, from the other direction, I think it’s possible to move towards taking utilitarianism more seriously without having to accept all of its wackiest implications.
So you just keep going, performing the arbitrage. Other moral theories, which aren’t based on arbitrage but perhaps on rights or duties (just to throw out an example), don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions.
I agree something like trying to maximise might be at the core of the issue (where utilitarianism is just one ethical theory that’s into maximising).
However, I don’t think it’s easy to avoid by switching to rights or duties. Philosophers focused on rights still think that if you can save 10 lives with little cost to yourself, that’s a good thing to do. And that if you can save 100 lives with the same cost, that’s an even better thing to do. A theory that said all that matters ethically is not violating rights would be really weird.
Or another example is that all theories of population ethics seem to have unpleasant conclusions, even the non-totalising ones.
If one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?
I don’t see why it implies nihilism. I think it shows that moral philosophy is hard, so we should moderate our views and consider a variety of perspectives, rather than bet everything on a single theory like utilitarianism.