There’s a proof showing that any utilitarian ideology entails either the repugnant conclusion, the sadistic conclusion, or anti-egalitarianism (i.e. it sometimes favors a more unequal society), so you can’t cleverly avoid these conclusions with some fancy math. To add, any fancy view you create will be in some sense unmotivated: you just came up with a formula that you like, but why would such a formula be true? Totalism and averagism seem to be the two most interpretable utilitarian ideologies, with totalism caring only about total pain/pleasure (and not about who experiences it) and averagism being the same except population-neutral, not rewarding a larger population unless it has higher average net pleasure. Anything else is kind of an arbitrary view invented by someone who is too into math.
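To make the two views concrete (notation is mine, purely for illustration): for a population of $n$ people with hedonic welfare levels $u_1, \dots, u_n$,

$$ V_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad V_{\text{average}} = \frac{1}{n}\sum_{i=1}^{n} u_i. $$

Totalism ranks outcomes by $V_{\text{total}}$, so adding anyone with positive net pleasure improves things; averagism ranks by $V_{\text{average}}$, so adding people only helps if they sit above the existing average.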
The anti-egalitarianism condition seems to me to be the least obviously unacceptable of the three [1]. It doesn’t seem obviously wrong that, for this abstract concept of ‘utility’ (in the hedonic sense), there may be cases and regions in which it’s better to have one person with a bit more and another with a bit less.
But more importantly, I think: why is it so bad that it is ‘unmotivated’? In many domains we think that ‘a balance of concerns’ or ‘a balance of inputs’ yields the best outcome under the constraints.
So why shouldn’t a reasonable moral valuation (‘axiology’) involve some balance of interest in total welfare and interest in average welfare? It’s hard to know where that balancing point should lie (although maybe some principles could be derived). But that still doesn’t seem to invalidate it… any more than my liking some combination of work and relaxation, or believing that beauty lies in a balance between predictability and surprise, etc.
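Concretely (a rough sketch only, with a made-up weight $\lambda$), the kind of blended axiology I have in mind might look like

$$ V_{\lambda} = \lambda \sum_{i=1}^{n} u_i + (1-\lambda)\,\frac{1}{n}\sum_{i=1}^{n} u_i, \qquad 0 \le \lambda \le 1, $$

which interpolates between averagism ($\lambda = 0$) and totalism ($\lambda = 1$). The hard part, as I said, is having any principled story for where $\lambda$ should sit.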
I wouldn’t put it down to ‘invented by someone too into math’ (if that’s possible :) ). If anything, I think the opposite: I am accepting that a valuation of what is moral could be valid and defensible even if it can’t be stated in terms as starkly axiomatic as those of the extreme value systems.
Although many EAs seem to be ok with the repugnant conclusion also.
In other domains, when we combine different metrics into one Frankenstein metric, it is because those metrics are all partial indicators of some underlying measure we cannot directly observe. The whole point of ethics is that we are trying to describe this underlying measure of “good” directly, so it doesn’t make sense to me to create some Frankenstein view of it.
The only case where I’d see this being ok is in the context of moral uncertainty, where we’re saying “I believe there is some underlying true view but I don’t know what it is, so I will give some weight to each of several plausible theories”. Which maybe is what you’re getting at? But in that case, I think you need to believe that each of the views you’re averaging over could be approximately true on its own, which IMO really isn’t the case for a complicated utilitarian formula, especially since we know there is no formula out there that will give us everything we want. Though this is another long philosophical rabbit hole, I’m sure.
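For what it’s worth, the usual way to formalize that kind of weighting (a rough sketch; the $p_k$ are placeholder credences I’m making up for illustration) is something like

$$ V(x) = \sum_{k} p_k \, V_k(x), $$

where $V_k$ is how theory $k$ values outcome $x$ and $p_k$ is your credence in theory $k$. My point is just that this only seems legitimate if each $V_k$ is itself a candidate for being the true view, not an arbitrary blend you happen to like.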