The 2019 EA survey found that a clear majority of EAs (80.7%) identified with consequentialism, especially utilitarian consequentialism. These moral views color how EA functions. So the claim that effective altruism does not depend on utilitarianism is a weak one, both historically and presently.
Yes, EA should still uphold data-driven consequentialist principles and methodologies, like those seen in contemporary utilitarian calculus.
I agree that most EAs identify with consequentialism, and that proportion was likely even higher in the past. I also lean consequentialist myself. But that’s not what we disagree about. You move from ‘the majority of EAs lean consequentialist’ to ‘the only ideas EA should take seriously are utilitarian ones’, and that is the step I disagree with.
Moral Uncertainty is a book about what to do given that there are multiple plausible ethical theories, written by two of EA’s leading lights, Toby Ord and Will MacAskill, along with Krister Bykvist. Perhaps you could consider it.