What About Deontology? Ethics of Social Belonging and Conformity in Effective Altruism
At first glance, Effective Altruism (EA) seems inseparable from utilitarianism, the consequentialist philosophy pioneered by Bentham that emphasizes maximizing overall well-being. By contrast, deontology, the moral framework associated with Kant, holds that some actions are inherently wrong regardless of their outcomes; in other words, the ends do not always justify the means. A 2017 EA survey revealed that over half of respondents leaned towards utilitarianism, 12% endorsed other forms of consequentialism, and less than 4% identified with deontology. In 2020, Benjamin Todd even felt compelled to debunk the common misconception that EA is synonymous with utilitarianism.
But what if EA has more in common with deontology than it first appears?
———
Moving towards deontological ethics
Benjamin Todd himself has noted that EA does not demand pursuing the good at all costs. He wrote that “unlike utilitarianism, effective altruism does not claim that one ought always to do the good, no matter what the means”. With William MacAskill (2017), he argued against pursuing careers with overwhelmingly negative effects, even when the expected benefits could outweigh the potential losses, by calling attention to the risk of violating fundamental rights or moral principles. This reasoning echoes deontological ethics: certain careers, such as marketing and R&D for products that exploit compulsive behaviors (smoking, gambling, and so on), factory farming, or weapons development, are ruled out as categorically wrong. As they put it, “in these jobs, the negatives are likely too great compared to the benefits, even if you donate. Some might also be morally impermissible for non-consequentialist reasons.”
Kant’s categorical imperative, the principle that one should act only according to maxims that could become universal law, a command that binds unconditionally, offers a useful framework for exploring this alignment. Alongside theoretical universalizability, EA seems to promote practical principles that distinguish between actions that are fundamentally “bad” or “good,” regardless of utilitarian calculations. For instance, the overarching influence of longtermism within the community, i.e. “taking seriously just how big the future could be and how high the stakes are in shaping it” (MacAskill, writing in the New York Times in 2022), together with the focus on principles that emphasize catastrophic risk avoidance, engenders a disproportionate emphasis on potential, anticipated risks. Consequently, by explicitly prioritizing the safeguarding of the future, perhaps at the cost of finding solutions for today’s existing challenges (e.g. global poverty, food insecurity, unequal access to education), a common sentiment pushes for a certain type of action in a deontological fashion. Drawing on Kant’s idea of universalizability, therefore, I will argue that conformity to social norms, along with the idea of fundamentally “good” versus “bad” actions, seems to drive EA action and to account for part of the movement’s priorities.
An ethics of conformity despite uncertainty: distinguishing Right from Wrong
I believe that, through conformity to trending EA career pathways and to received notions of correct thinking, deontological commandments form an important part of what constitutes the EA community. My argument first stems from the observation that some career paths are overwhelmingly recommended within the EA community, with AI safety and governance at the forefront, but also Animal Welfare and Global Health. As a matter of fact, among the highest-impact career paths 80,000 Hours has identified so far, at least 5 or 6 of the 10 options relate to AI safety and governance. Meanwhile, in Preventing an AI-related catastrophe, Benjamin Hilton clearly acknowledged “significant uncertainty about how big [AI as an existential threat] risk is”, estimating it at only 1%. He reached that figure by “incorporating considerations of the argument in favor of the risk (which is itself probabilistic) as well as reasons why this argument might be wrong”. More generally, various experts call for pursuing careers addressing threats where uncertainty does not yet allow us to determine whether they represent an actual danger and, if so, to what extent.
MacAskill (2019) defined Effective Altruism as “the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms”. Even if we hold that choices are promoted through careful and thoughtful consideration of this conception, I question whether EA priorities actually transcend strict cost-benefit analysis. In my view, actions within the EA community may be treated as ends in themselves, and not just as means to maximize impact. Conforming to community leaders’ takes on moral action and pursuing hot topics like AI-related or pandemic-prevention careers is, I believe, undertaken out of moral convenience and our nature as social animals.
The role of social belonging
Various paradigmatic experiments have shown, throughout the last century, to what extent the impulse for social belonging influences individual behaviors and our in-context representations of what is good or bad. In Solomon Asch’s conformity line experiment, individuals sensing social pressure disbelieved their own eyes and gave manifestly erroneous answers about the lengths of lines presented before them; in Stanley Milgram’s obedience experiment, subjects guided by supposed lab-coated professionals agreed to inflict what they believed were lethal electric shocks on (fictitious) peers placed in another room. Since Singer’s foundational article on the Drowning Child and Moral Circle Expansion, the EA community has repeatedly called upon its members to engage critically with their world, to take up our global responsibility, and to “free ourselves from our absurd conception of success”. Nevertheless, alignment with EA trend-setters and moral deference could, I sense, lead to certain actions being deemed inherently “right” by community norms, no matter their expected utility, thus limiting the realm of possible commitments available to EA members.
Following this reasoning, as one acts in keeping with community consensus and right-thinking, we could argue that one acts in compliance with a universal law of nature. Pushing further, we could recall David McRaney’s words in How Minds Change, where he showed that what allows conspiracy groups to form and thrive are feelings of belonging and acceptance, which can be “more important than any unusual detail [and can make one] willing to suspend [one’s] disbelief not to feel alone”. All in all, Singer’s call for people to lead “an ethical life [as] one in which we identify ourselves with other, larger, goals, thereby giving meaning to our lives” has a corollary: the creation, enforcement, and reward by community peers of an evaluative framework that one ought to follow. In theory, the goal is to maximize one’s effective altruist action, which entails calculating ends and means. In practice, the quest for properly altruistic action gets blurred by one’s willingness to satisfy one’s feeling of belonging to the group and to enjoy the social reward of acting in line with EA tenets, which in and of itself mixes deontological and consequentialist concerns.
———
If we hold that EA community members act out of a sense of duty but also according to the social benefits they expect from their course of action, one could argue that EA operates under a “half-deontological, half-utilitarian” framework: moral norms and community trends influence behavior alongside welfarist considerations. Nevertheless, our line of criticism remains just as pressing: questioning dominant priorities, such as AI safety or pandemic prevention, can feel like transgressing community norms, deterring genuine scrutiny and potentially limiting advances in how issues are understood, framed, and acted upon.
As Kant argued in his Grounding for the Metaphysics of Morals (1785), actions must arise from duty to be truly moral in deontological ethics. For EA to remain true to its mission, one path forward is to cultivate a culture of open critique and self-reflection. This means subjecting even the most entrenched cause areas to ongoing evaluation while encouraging members to disentangle their motivations.
By reflecting on these dynamics, recognizing the interplay between deontological and utilitarian ethics, and cultivating a culture of open inquiry, the movement can continue to redefine what it means to practice altruism effectively.
Neither deontology nor utilitarianism, but virtue ethics: it is the only framework that considers human behavior in the context of cultural evolution. Kant’s deontology did not allow him to take a rational and impartial position on ethical issues such as women’s rights, extreme social inequality, or slavery; on these, he remained dependent on the prejudices of his time. And consequentialist utilitarianism is not consequentialist enough if it ignores that all human action depends on internalized patterns of moral behavior: lifestyle, ethos.
If we want to achieve the greatest good for the greatest number, the most practical course is to cultivate the most benevolent, empathetic, and rational human behavior possible, as a lifestyle and as the foundation of a prosocial culture.