Whoops. I can see how my responses didn’t make my own position clear.
I am an anti-realist, and I think the prospects for identifying anything like moral truth are very low. I favor abandoning attempts to frame discussions of AI or pretty much anything else in terms of converging on or identifying moral truth.
I suspect attempts to integrate important, substantive discussions into contemporary moral philosophy are likely to be futile. If engaging with moral philosophy introduces unproductive digressions, confusions, or misplaced priorities into the discussion, it may do more harm than good.
I’m puzzled by this remark:
"I think anything as specific as this sounds worryingly close to wanting an AI to implement favoritepoliticalsystem."
I view utilitronium as an end, not a means. It follows more or less directly from my moral views: wanting to maximize aggregate utility entails favoring the production of whatever physical state of affairs yields the highest aggregate utility, and that is, by definition, "utilitronium." If I'm using the term in an unusual way, I'm happy to propose a new label that conveys what I have in mind.
The descriptive task of determining what ordinary moral claims mean may bear more on questions about whether there are objective moral truths than is acknowledged here. Are you familiar with Don Loeb's metaethical incoherentism, or with the empirical literature on metaethical variability? I recommend Loeb's article "Moral incoherentism: How to pull a metaphysical rabbit out of a semantic hat." The title itself indicates what Loeb is up to.