Not really. EA really shouldn’t presume a moral consequentialism approach, just real-world effectiveness.
I would note that advocating for improving utility (a core EA concept!) is not the same thing as utilitarianism.
yeah but I think you’d just be setting yourself up for an uphill battle having to explain to every 5th person how EA is not a utilitarian movement despite the office name
(I agree)