Eliezer Yudkowsky has challenged utilitarianism and some forms of moral realism in the Fun Theory sequence, the enigmatic (or merely misunderstood) Metaethics sequence and the fictionalised dilemma Three Worlds Collide.
I’m confused. AFAIK Yudkowsky’s position is utilitarian, and none of the linked posts and sequences challenge utilitarianism. 3WC is an obvious example where only one specific branch—average preference utilitarianism—is argued to be wrong. The sequences are attempts to specify parts of the utility function and its behavior—even going so far as to argue for deontological laws as part of utilitarianism for corrupt humans—not refutations.
the enigmatic (or merely misunderstood) Metaethics sequence
This looks like mind projection fallacy. If so, the obvious explanation is that you don’t understand Yudkowsky’s position properly.
AFAIK Yudkowsky’s position is utilitarian, and none of the linked posts and sequences challenge utilitarianism.
I’ve added the word ‘hedonistic’ and fixed a duplicate link. Maybe he’s an atypical utilitarian, depending on our definitions. He’s a consequentialist, and I think he endorses following a utility function, but he certainly opposes simple hedonistic utilitarianism, or the maximisation of any simple good.
the enigmatic (or merely misunderstood) Metaethics sequence
This looks like mind projection fallacy. If so, the obvious explanation is that you don’t understand Yudkowsky’s position properly.
Yes, I found Eliezer’s Metaethics sequence difficult, but so did lots of people. Eliezer agrees:
I’ve been pondering the unexpectedly large inferential distances at work here—I thought I’d gotten all the prerequisites out of the way for explaining metaethics, but no. I’m no longer sure I’m even close. I tried to say that morality was a “computation”, and that failed; I tried to explain that “computation” meant “abstracted idealized dynamic”, but that didn’t work either. No matter how many different ways I tried to explain it, I couldn’t get across the distinction my metaethics drew between “do the right thing”, “do the human thing”, and “do my own thing”.