AFAIK Yudkowsky’s position is utilitarian, and none of the linked posts and sequences challenge utilitarianism.
I’ve added the word ‘hedonistic’ and fixed a duplicate link. Maybe he’s an atypical utilitarian, depending on our definitions. He’s a consequentialist, and I think he endorses following a utility function, but he certainly opposes simple hedonistic utilitarianism, or the maximisation of any other simple good.
the enigmatic (or merely misunderstood) Metaethics sequence
This looks like the mind projection fallacy. If so, the obvious explanation is that you don’t understand Yudkowsky’s position properly.
Yes, I found Eliezer’s Metaethics sequence difficult, but so did lots of people. Eliezer agrees:
I’ve been pondering the unexpectedly large inferential distances at work here—I thought I’d gotten all the prerequisites out of the way for explaining metaethics, but no. I’m no longer sure I’m even close. I tried to say that morality was a “computation”, and that failed; I tried to explain that “computation” meant “abstracted idealized dynamic”, but that didn’t work either. No matter how many different ways I tried to explain it, I couldn’t get across the distinction my metaethics drew between “do the right thing”, “do the human thing”, and “do my own thing”.