You don’t account for the value of your future self, but do you account for the values of a version of yourself that is idealized in some appropriate way—e.g., more rational, smarter, having thought about morality for longer? Whether this would have a significant impact on your values is an open question, which also depends on how you’d ‘idealize’ yourself. By the way, I’d be very interested in thoughts on how much we should expect our moral views to change upon further deliberation.
On moral realism, I assume you mean that we have absolutely no evidence about the truth of either utilitarianism or anti-utilitarianism, so we should apply a principle of indifference as to which one is more likely? I think I agree with that idea, but there still remains a slightly higher chance that utilitarianism is true—simply because more people think it is, even if we find their evidence for it questionable.
Then of course there’s still the question of why one should care about such an objective morality anyway—my approach would be to evaluate whether I’m an agent whose goal is to do what’s objectively moral, or whose goal is to do some other thing that I find moral.