I think an objective ordering does imply “one should,” so I subscribe to moral realism. However, I’ve recently come to appreciate the importance of your insistence that the “should” part is kind of fake: it means something like “action X is objectively the best way to create the most value from the point of view of all moral patients,” but it doesn’t imply that an ASI that figures out what is morally valuable will be motivated to act on it.
(Naively, it seems like if morality is objective, there’s basically a physical law formulated as “you should do actions with characteristic X.” Then it seems like a superintelligence that figures out all the physical laws internalizes “I should do X.” I think this is wrong mainly because, in human brains, that sentence deceptively seems to imply “I want to do X” (or perhaps “I want to want X”), whereas it actually means “provided I want to create maximum value from an impartial perspective, I want to do X.” In my own case, the kind of argument for optimism about AI doom that @Bentham’s Bulldog advocated on Doom Debates seemed somewhat more attractive before I truly spelled this out in my head.)