That post on deliberative alignment seems to be just about one method by which we might build aligned AIs, not about the idea of moral alignment in general.
I’m probably less skeptical than you are because I take as evidence the fact that we align humans to moral value systems all the time. And although we don’t do it perfectly, there are some very virtuous folks out there who take their morals seriously. So I think alignment to some system of morality is certainly possible.
Whether we can figure out which moral judgments are “right” is another matter, although perhaps we can at least build AI that is aligned with universally recognized norms like “don’t murder” and “save lives.”