It is people who are uncertain about whether utilitarianism is correct in the first place who decide to factor in moral uncertainty. It is also an open question how you actually factor it in, and whether this improved version doesn’t simply run into its own set of repugnant conclusions.
Utilitarianism factors in uncertainty, both moral and epistemic. Sure, if you can find a way to criticize how uncertainty is factored into utilitarianism, I’m all ears! But of course, whatever the superior solution turns out to be is what utilitarianism recommends as well. Utilitarianism is best thought of as something engineered, not given.
I would also like to separate moral uncertainty from moral parliament. Moral parliament is usually for multiple people with different values to provide their inputs to a decision process (such as superintelligent AI’s values). Moral uncertainty can exist inside the mind of a single person.
I’ve always heard of moral parliament as being primarily about an individual reconciling their own different moral intuitions into a single aggregate judgment. Never heard it used in the sense you’re describing. Here’s Newberry & Ord, which is clearly about reconciling one’s own diverse moral intuitions, rather than a way of aggregating the moral judgments of a group.
We introduce a novel approach to the problem of decision-making under moral uncertainty, based on an analogy to a parliament. The appropriate choice under moral uncertainty is the one that would be reached by a parliament comprised of delegates representing the interests of each moral theory, who number in proportion to your credence in that theory.
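To make the mechanism in that passage concrete, here is a minimal sketch of my own (not code from Newberry & Ord): seats are allocated to each moral theory in proportion to your credence in it, and, as a crude stand-in for whatever negotiation the delegates would actually conduct, each delegate simply votes for its theory’s preferred option. The function name `moral_parliament` and the trolley-style example are assumptions for illustration only.

```python
from collections import Counter

def moral_parliament(credences: dict[str, float],
                     preferred_option: dict[str, str],
                     seats: int = 100) -> str:
    """Toy parliamentary vote: seats allocated in proportion to credence,
    then a simple plurality among the delegates' preferred options."""
    votes = Counter()
    for theory, credence in credences.items():
        delegates = round(credence * seats)            # proportional representation
        votes[preferred_option[theory]] += delegates   # each delegate backs its theory's pick
    return votes.most_common(1)[0][0]

# Illustrative only: 60% credence in utilitarianism, 40% in a deontological theory.
print(moral_parliament(
    credences={"utilitarianism": 0.6, "deontology": 0.4},
    preferred_option={"utilitarianism": "divert the trolley",
                      "deontology": "do not divert"},
))  # -> divert the trolley
```

Note that a plurality vote like this lets a 51%-credence theory dominate every decision, which is one of the things the actual parliamentary proposal is meant to avoid; treat the code purely as a sketch of the proportional-delegate idea, not of the full bargaining procedure.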
It does seem helpful to have a term for aggregating moral judgments of multiple people, but “moral parliament” is already taken.
Utilitarianism comes with more assumptions than a vague, non-formalised sense of “do what you think is better”; it formalises “better decisions” in a very specific way.
I was going to keep arguing, but I wanted to ask: it seems like you might be concerned that utilitarianism is “morally unfalsifiable.” My own argument here may convey the idea that “whatever moral framework is correct is utilitarian,” in which case it’s only tautologically “true” and doesn’t provide any actual decision-making guidance of its own. I don’t think that’s actually true of utilitarianism, but I can see how my writing here could give that impression. Is this getting at the point you’re making?
I’ll have to think about that. I’ve been working on a response, but on consideration, perhaps it’s best to reserve “utilitarianism” for the act of evaluating world-states according to overall sentient affinity for those states.
Utilitarianism might say that X is bad insofar as people experience the badness of X. The sum total of badness that people subjectively experience from X determines how bad it is (a rough sketch of this follows below).
Deontology would reject that idea.
And it might be useful to have utilitarianism refuse to accept that “deontology might have a point,” and vice versa.
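As a rough sketch of the aggregation rule above (my own illustration, with made-up numbers and the hypothetical helper name `total_badness`): the badness of X is just the sum of the badness each person subjectively experiences from X, which is exactly the move a deontologist would reject.

```python
def total_badness(experienced_badness: list[float]) -> float:
    """Sum each person's subjectively experienced badness of X."""
    return sum(experienced_badness)

# Illustrative numbers only: three people experience X as mildly,
# moderately, and severely bad.
print(total_badness([0.2, 1.0, 3.5]))  # -> 4.7
```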
Can you please explain how utilitarianism factors in moral uncertainty?
As far as I’m aware, it has nothing to say on the matter.