That’s… a lot to unpack. I think we probably disagree on a lot, and I’m not sure further back-and-forth will be all that productive. I trust other readers to assess whose responses were substantive or convincing.
Two final comments:
1) As noted in McMahan’s ‘Philosophical Critiques of Effective Altruism’, the earliest arguments by Singer and Unger rested on an intuitive response to a thought experiment plus an appeal to consistency, and “there is no essential dependence of effective altruism on utilitarianism.”
2) Even if we grant that early EA was 100% and wholeheartedly utilitarian, does it follow that EA today should be?
The 2019 EA survey found that the clear majority of EAs (80.7%) identified with consequentialism, especially utilitarian consequentialism. Their moral views color and influence how EA functions. So the lack of dependence of effective altruism on utilitarianism is a weak argument, historically and presently.
Yes, EA should still uphold data-driven consequentialist principles and methodologies, like those seen in contemporary utilitarian calculus.
I agree that most EAs identify with consequentialism, and that proportion was likely higher in the past. I also lean consequentialist myself. But that’s not what we disagree about. You move from ‘The majority of EAs lean consequentialist’ to ‘The only ideas EA should consider seriously are utilitarian ones’, and it’s that step I disagree with.
Moral Uncertainty is a book about what to do given that there are multiple plausible ethical theories, written by two of EA’s leading lights, Toby Ord and Will MacAskill (together with Krister Bykvist). Perhaps you could consider it.