Human utility functions seem clearly inconsistent with infinite utility.
If you’re not 100% sure that they are inconsistent, then presumably my argument still goes through: you’ll have a non-zero credence that actions can elicit infinite utilities, and so those actions are infinite in expectation?
I don’t identify 100% with future versions of myself, and I’m somewhat selfish, so I discount experiences that will happen in the distant future. I don’t expect any set of possible experiences to add up to something I’d evaluate as infinite utility.
So maybe from the self-interested perspective you discount future experiences. From a moral perspective, though, that doesn’t seem relevant: these are experiences, and they count the same, so if there are an infinite number of positive experiences they would sum to an infinite utility. Also note that even if your argument applied in the moral realm too, then unless you’re 100% sure it does, my reply to your other point will work here as well?
I think it’s more appropriate to use Bostrom’s Moral Parliament to deal with conflicting moral theories.
Your approach might be right if the theories you’re comparing used the same concept of utility, and merely disagreed about what people would experience.
But I expect that the concept of utility which best matches human interests will say that “infinite utility” doesn’t make sense. Therefore I treat the word utility as referring to different phenomena in different theories, and I object to combining them as if they were the same.
Similarly, I take a dealist approach to morality. If you show me an argument that there’s an objective morality which requires me to increase the probability of infinite utility, I’ll still ask what would motivate me to obey that morality, and I expect any resolution of that will involve something more like Bostrom’s parliament than like your approach.