I guess I’ll also use the comments to offer my hot take, which I don’t think I can immediately justify in a fully explicit manner:
Of course our moral judgements are unreliable (though not in some of the ways investigated in the literature, and almost certainly in additional ways that the literature hasn’t investigated). Some moral judgements are stable and some aren’t, and even if the unstable set is the smaller one, it is very concerning, especially for EAs and others who take unconventional views and try to maximize (cf. the optimizer’s curse). I don’t think the expertise defense or most of the other responses are particularly successful. I think the “moral engineering” approach is essential: we should be building ethical systems that explicitly acknowledge and account for the noise in our intuitions. For example, something like rule utilitarianism seems substantially more appealing to me now than it did before I looked into this area; the regularization imposed by rule utilitarianism limits our ability to fit to noise (cf. the bias-variance trade-off).
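To make the optimizer’s curse point concrete, here is a minimal simulation sketch of my own (not anything from the literature I’m summarizing); all the numbers and the “average several independent reads” stand-in for regularization are illustrative assumptions. Selecting the option whose noisy estimate is highest systematically selects partly for noise, and the effect shrinks when the judgements we maximize over are less noisy.

```python
# Illustrative sketch: the optimizer's curse with noisy moral judgements.
# Parameters are arbitrary assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n_options, n_trials, noise_sd = 200, 2000, 1.0

gap_one_off, gap_averaged = [], []
for _ in range(n_trials):
    true_value = rng.normal(0.0, 1.0, n_options)            # actual moral value
    # One-off case-by-case intuitions: a single noisy read per option.
    one_off = true_value + rng.normal(0.0, noise_sd, n_options)
    # Crude stand-in for a "regularized" judgement: average several
    # independent reads, which limits how much the choice can fit to noise.
    averaged = true_value + rng.normal(0.0, noise_sd, (10, n_options)).mean(axis=0)

    pick = np.argmax(one_off)
    gap_one_off.append(one_off[pick] - true_value[pick])
    pick = np.argmax(averaged)
    gap_averaged.append(averaged[pick] - true_value[pick])

print("mean overestimate, one-off intuitions :", np.mean(gap_one_off))
print("mean overestimate, averaged judgements:", np.mean(gap_averaged))
# The chosen option's estimated value exceeds its true value on average,
# and the gap narrows as the judgements being maximized over get less noisy.
```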