In theory, the core principles of EA ("using reason and evidence to do as much 'good' as possible," for some definition of "good") can be applied to moral philosophies besides utilitarianism. What types of moral systems can be combined with EA?
Motivation: I would like to see EA adapted to a wider range of belief systems so it can appeal to more people; many of us already in the movement are not fully utilitarian anyway. Right now, it seems like most EA cause prioritization efforts use utilitarian reasoning, which limits how many people can apply them without doing the hard work of adapting them to their own moral frameworks.
The following paper is relevant: Pummer & Crisp (2020). Effective Justice, Journal of Moral Philosophy, 17(4):398-415.
From the abstract:
"Effective Justice, a possible social movement that would encourage promoting justice most effectively, given limited resources. The latter minimal view reflects an insight about justice, and our non-diminishing moral reason to promote more of it, that surprisingly has gone largely unnoticed and undiscussed. The Effective Altruism movement has led many to reconsider how best to help others, but relatively little attention has been paid to the differences in degrees of cost-effectiveness of activities designed to [in]crease injustice."
In "The Definition of Effective Altruism", William MacAskill writes that
"Effective altruism is often considered to simply be a rebranding of utilitarianism, or to merely refer to applied utilitarianism... It is true that effective altruism has some similarities with utilitarianism: it is maximizing, it is primarily focused on improving wellbeing, many members of the community make significant sacrifices in order to do more good, and many members of the community self-describe as utilitarians.
But this is very different from effective altruism being the same as utilitarianism. Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one's own interests if one can benefit others to a greater extent. Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.
Unlike utilitarianism, effective altruism does not claim that one ought always to do the good, no matter what the means; indeed... there is a strong community norm against 'ends justify the means' reasoning.
Finally, unlike utilitarianism, effective altruism does not claim that the good equals the sum total of wellbeing. As noted above, it is compatible with egalitarianism, prioritarianism, and, because it does not claim that wellbeing is the only thing of value, with views on which non-welfarist goods are of value.
In general, very many plausible moral views entail that there is a pro tanto reason to promote the good, and that improving wellbeing is of moral value. If a moral view endorses those two ideas, then effective altruism is part of the morally good life." (emphasis added)
This project might be of interest. They tried to answer the following questions:
How can people with non-utilitarian ethical views, such as egalitarians and justice-oriented individuals, find a place in the effective altruism community?
And are effective altruism methods helpful when we seek to reduce systemic inequalities and social injustices?
And they tried to find the best charity to donate to for these goals.
There are chapters on Buddhism, Orthodox Judaism, and Christianity in this book on religion and EA.
I think there is a simple reason why EA is compatible with many moral views: increasing welfare is an important element of any sensible moral view. Utilitarianism is just the view that this is the only element that matters. But any other sensible moral view will acknowledge that increasing welfare matters at least alongside other considerations.
Plus: the element of increasing welfare has become more important in the past 3-4 decades, since our opportunities for increasing welfare have increased a lot compared to the previous history of humanity. Thus, the "utilitarian element" of any sensible moral view has become practically more relevant in the past 3-4 decades. And since EA helps us to exploit these opportunities, EA matters according to any sensible moral view.
Here is my rambling answer to your question.
I like virtue ethics, and I see it as compatible. I think that EA would be a slightly better movement if the level of utilitarianism were reduced by 5% and the level of virtue ethics increased by 5%. My rough thoughts are that while I am influenced by various ethical ideas/schools of thought, I tend to be slightly less of a utilitarian and slightly more of a virtue ethicist than the EA-aligned people I see (which is admittedly a very small and non-representative sample).
I view "being a good person" not merely as "having large positive impact" but also as conducting oneself properly. Thinking critically, treating people respectfully, and being honest are things I value, not as rules in a deontological sense, but as aspirations for the type of person I want to be. Stoicism has been very influential on me, and the classic grouping of wisdom, justice, courage, and moderation lines up nicely with the type of person I want to be. This is, of course, aspirational. I am still very far from those ideals.
My rough impression (again, from a small and non-representative sample) is that EAs tend not to value justice very much, not to value "proper conduct" very much, and not to value wisdom very much. I find it strange to see people doing things that are not respectful of others, to see people not being gentle or kind or welcoming, and to see people unaware of what causes happiness in themselves.
Honesty also seems undervalued among EAs. I dislike seeing people use inflated/exaggerated titles and descriptions, such as having a title of "director" or "president" when in reality they are the working manager of one person at an organization they founded, or "invited to speak at Cambridge" when it was really EA Cambridge that invited them to present to a student group on a Zoom call (not real examples). Maybe these people have more impact as a result of this polishing/deception, but I wish that these EAs were more virtuous.
In a simplistic toy example, I find it odd that the person who turned $100 into $200 is lauded, while the person who turned $10 into $50 is ignored. I often think less about "what has this person accomplished" than about "what choices has this person made, considering what this person has accomplished, what they started with, and all the other challenges and assistance they had."
The above thoughts are some of the reasons why I like Julia Wise and her writings so much. I don't know anything about her family background, but her writings strike me as much more humble and pragmatic, and are not littered with the "I went to an expensive school" and "look how impressive I am" signaling that I see elsewhere.
To heavily simplify my view, EA is largely formed by two key aspects:
An ethical/normative view that "maximizing [good] is ideal/desirable";
An epistemic and methodological emphasis on things like open-mindedness, using critical thinking over passionate impulses, emphasizing the importance of research, thinking at the margin (debatably), etc.
The first aspect, in my view, is open to a lot of interpretation around the word "good," and is the only aspect that should matter here, I think. Utilitarianism defines good in terms of consequences (either pleasure vs. suffering or preferences, depending on your flavor of utilitarianism). Deontology defines good in terms of rights, duties, categorical imperatives, etc. Virtue ethics focuses on virtue... and so on. This shouldn't pose any problem for alternative ethical theories.
I know that some ethical theories (somewhat strangely/artificially in my view) have this built-in claim that "oh, you don't need to maximize goodness, some good actions are just supererogatory" (@Deontology). This might seem to pose issues for compatibility, but to head off this rabbit trail (which I can explain in more detail if anyone is actually curious enough to ask), I don't think it is an issue.
The second aspect of EA is irrelevant to any (legitimate?) moral theories in my view, since I don't think that "moral theories" should (definitionally speaking) go beyond identifying what is "good"/what makes one world better than an alternative world. (You could theoretically try to bundle a bunch of epistemic or prescriptive claims like "You should emphasize listening to women/marginalized groups" along with some ethical theory and call the whole bundle an ethical theory, but that would presumably be misleading.)
However, the parenthetical above does hit on a potential key issue, which is that I think different ethical theories probably tend to be associated with different epistemic/etc. worldviews.