Hi Michael, thanks for the post! I was really happy to see something like this on the EA Forum. In my view, EAs* significantly overestimate the plausibility of total welfarist consequentialism**, in part due to a lack of familiarity with the recent literature in moral philosophy. So I think posts like this are important and helpful.
* I mean this as a generic term (natural language plurals (usually) aren’t universally quantified).
** This isn’t to suggest that I think there’s some other moral theory that is very plausible. They’re all implausible, as far as I can tell, which is partly why I lean towards anti-realism in meta-ethics.