Here are features of EA that are justifiable from a utilitarian perspective, but not from other moral frameworks:
1. DALY-based evaluations of policies. The idea of a DALY assumes that quality of life is interchangeable between people and aggregatable across people, which is not common sense and not true in a rights-based framework, because rights are not interchangeable between people.
2. Longtermism. Most arguments for longtermism are of the form “there will be far more people in the future, so the future is more important to preserve” which is a utilitarian argument. Maybe you could make a “future people have rights” argument, but that doesn’t answer why their rights are potentially more important than neartermist concerns—only a population-weighting view does that.
3. (Relatedly) Population ethics. Almost every non-utilitarian view entails person-affecting views: an act is only bad if it’s bad for someone. Creating happy lives is not a moral good in other philosophies, whereas (many though not all) EAs are motivated by that.
4. Animal welfare. Animal welfare concerns as we imagine them stem from trying to reduce animal pain. You could bend over backward to argue that animals have rights, but most rights-based frameworks derive rights from some egalitarian source, making them ill-suited to the claim that animals have “fewer rights than people but still some rights”, which is the intuition most of us have. Moreover, even if you could derive animal rights, it would be very unclear which actions best support them (how do you operationalize animal dignity?), whereas a utilitarian view lets you say “the best actions are the ones that minimize the pain animals experience”, leading to solutions like eliminating battery cages.
I don’t think you can reject utilitarianism without rejecting these features of EA. Utilitarianism could be “wrong” in an abstract sense, but I think 70% of EAs see it as the best practical guide to making the world better. It often does conflict with common-sense ethics—the common sense of most people would suggest that animal suffering doesn’t matter, and that future people matter significantly less than people alive today! Utilitarianism is not an unwanted appendage to EA that could hamper it in the future. It’s the foundation of EA’s best qualities: an expanding moral circle and the optimization of altruistic resources.
The use of DALYs and QALYs is not specifically utilitarian. They can be used in other frameworks. The difference is how they are weighted. For example, a utilitarian may only care about the net gain across the whole population, whereas someone motivated by (say) a Rawlsian perspective would place more moral weight on achieving gains to the worst off.
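The weighting point above can be made concrete with a small sketch. This is a hypothetical illustration (the names, numbers, and the simple priority weighting are all invented for the example): the same health gains, measured in QALYs, are aggregated once by a plain utilitarian sum and once with extra weight on gains to the worst off.

```python
# Hypothetical sketch: the same QALY gains aggregated under two moral
# weightings. All names and numbers are illustrative, not real data.

def utilitarian_score(gains):
    # Only the net gain across the whole population matters.
    return sum(gains.values())

def rawlsian_score(gains, baseline):
    # A simple priority weighting: a gain counts for more the worse off
    # the recipient already is (divide by baseline well-being).
    return sum(g / baseline[p] for p, g in gains.items())

# Two interventions that produce the same total QALY gain:
baseline = {"A": 2.0, "B": 8.0}   # person A is far worse off than B
gains_1  = {"A": 1.0, "B": 3.0}   # most of the gain goes to the better off
gains_2  = {"A": 3.0, "B": 1.0}   # most of the gain goes to the worst off

# The utilitarian sum is indifferent between them...
assert utilitarian_score(gains_1) == utilitarian_score(gains_2)
# ...while the worst-off weighting prefers the second intervention.
assert rawlsian_score(gains_2, baseline) > rawlsian_score(gains_1, baseline)
```

Both scores are computed from the same underlying QALY measurements; only the aggregation rule differs, which is the sense in which the metric itself is framework-neutral.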