I am an anti-realist, and I think the prospects for identifying anything like moral truth are very low. I favor abandoning attempts to frame discussions of AI or pretty much anything else in terms of converging on or identifying moral truth.
Ah, okay. Well, in that case you can just read my original comment as an argument for why one would want to use psychology to design an AI capable of correctly figuring out and implementing just a single person's values, since that's obviously a prerequisite for figuring out everybody's values. The stuff I had about social consensus was just an argument aimed at moral realists; if you're not one, then it's probably not relevant to you.
(my values would still say that we should try to take everyone’s values into account, but that disagreement is distinct from the whole “is psychology useful for value learning” question)
I’m puzzled by this remark:
Sorry, my mistake—I confused utilitronium with hedonium.