I’ve been a moral realist for a very long time and generally agree with this post.
I will caveat, though, that there is a difference between moral realism (there are moral truths) and motivational internalism (people will always act according to those truths once they know them). I think the latter is much less clearly true, and conflating the two is one of the primary confusions that occurs when people argue about moral realism and AI safety.
I also think that moral truths are knowledge, and we can never know things with 100% certainty. This means that even if there are moral truths out there in the world, it is very possible to be wrong about what they are, and even a superintelligence may not necessarily figure them out. As with most things, we can develop models, but they will generally not be complete.
I understand endorsing moral realism, but do you think Bentham presents any good arguments here?
I’ll admit I kinda skimmed some of Bentham’s arguments, and some of them do sound a bit like rhetoric that relies on intuition or emotional appeal rather than deep philosophical argument.
If I wanted to give a succinct explanation of my reasons for endorsing moral realism, it would be this: morality has to do with what subjects/sentients/experiencers value. The things they value are subjective in the sense that they come from the perceptions and judgments of the subjects, but objective in the sense that those perceptions, and in particular the emotions or feelings experienced because of them, are true facts about the subjects’ internal states (i.e. happiness and suffering, desires and aversions, etc.). These can be objectively aggregated as the sum of all value in the universe from the perspective of an impartial observer of that universe.
Thanks for the response. What you describe doesn’t sound very objectionable to me, but I don’t think it’s what Bentham is arguing for. As far as I know, Bentham endorses non-naturalist moral realism, so he would not think that moral facts are facts about natural phenomena such as our internal psychological states.
Ah, good catch! Yeah, my flavour of moral realism is definitely naturalist, so that’s a clear distinction between myself and Bentham, assuming you are correct about what he thinks.