I’m happy to see more critiques of total utilitarianism here. :)
For what it’s worth, I think many people within the EA community are dissatisfied with total utilitarianism. In my anecdotal experience, many longtermists (including myself) are suffering-focused. This often takes the form of negative utilitarianism, but other variants of suffering-focused ethics exist.
I may have missed it, but I didn’t see any part of the paper that explicitly addresses suffering-focused longtermists. (One part does mention that “Preventing existential risk is not primarily about preventing the suffering and termination of existing humans.”)
I think you might be interested in the arguments for caring about the long-term future from a suffering-focused perspective, under which the arguments for avoiding existential risk translate into arguments for reducing s-risks (risks of astronomical suffering).
I also think that suffering-focused altruists are not especially vulnerable to your argument about moral pluralism. In particular, what matters to me is not the values of humans who exist now but the values of everyone who will ever exist. A natural generalization of this principle is the idea that we should try to step on as few people’s preferences as possible (with the preferences of animals and sentient AI included), which leads to a sort of negative preference utilitarianism.