Against Impartial Altruism

What is it about Effective Altruism that strikes the greater well-intending public as alien and misguided? Should EAs heed this judgment, and if so, what should they do instead? This essay is an argument for what we might call “effective deontology”, and an attempt to give rational form to a ubiquitous emotional reaction to the basic idea of EA, which I share—that the people who comprise the EA movement should feel a strong duty to those in their immediate proximity (along any shared axis), not to those who live far away, and certainly not to those in the far future who have others to take care of them. That they don’t, and that they focus on global altruism and longtermism instead, is offensive to human moral instincts, with, I believe, good reason. In my opinion this focus arises for reasons that are subtle and ultimately selfish, and EAs should stop and instead prioritize something that looks like honoring what is asked of them—taking responsibility for those around themselves.

It’s not about the utilitarianism

Criticisms of EA that address the underlying utilitarian ethics in terms of its perverse incentives and repugnant conclusions are far off the mark. Utilitarianism in any practical form may begin as a naive optimization question, but then has to work out how to optimize while accounting for numerous uncertainties: in the definition of utility, as evidenced by interminable undergraduate debates; in the ability to measure utility even according to a single definition; and in the higher-order effects of well-intended activities—in particular the “interaction terms” in this “series of consequences”: the ways people are affected by the well-being of others, or respond in their own best interest, or are affected by knowing about the existence of an agent optimizing utility, just to name a few.

Without massive and unrealistic simplifying assumptions, utilitarianism is evidently intractable, and has the flavor of a toy physical model, applicable to reality only in limiting cases. Morality in reality more resembles quantum field theory: actually a vast space of theories, which perhaps have categorizable structures, and which are inapplicable to reality unless deeply analyzed to extract higher-order emergent phenomena, i.e. a prescription of what is good. That prescription may turn out to be represented in terms that bear no reference to any explicit sense of “utility”, in the way that the Standard Model of physics reduces to classical mechanics, special relativity, or elementary quantum mechanics, or even simpler toy models, depending on the asymptotic case under study. Such a space of theories also supports many “potential theories” that are instructive examples of the analytic tools but are ultimately discarded as models of reality.

With this analogy in hand, I find it extremely likely that utilitarianism’s implementation in practice would make no direct reference to any sense of “utility” or any of those elementary conclusions—so likely, in fact, and so self-evident, that utilitarianism is nearly irrelevant to the question of “what is good”, except to the extent that it’s necessary for an analytic mind to pick over it and become familiar with its pitfalls in order to move on to other ideas. Aware of this, I see no point in taking on utility.

Emotions are prior to ethics

Scientific theories, of course, can be tested against experiment; what is the standard of truth for ethical principles? It is emotion: our innate moral instincts—reactions to things that happen to us, or are known to us, and, in diluted form, reactions to hypotheticals. Everything we do in contemplating any ethical principle is emotional: we set up hypothetical situations, determine the conclusions of our principles, and then ask how we feel about them (or how we would feel). Here we have our trolley problems and other elementary examples of utilitarianism, as well as all kinds of more sophisticated analysis. (Criminal justice, for example, is a solution to a problem posed by emotions—in an emotionless society, no justice system would be needed, except perhaps for strictly game-theoretic reasons.)

Emotions are evidently real—they do happen, and they affect us, and we find ideas to explain and justify them; often we start from what we feel to work out what we think. But the relationship between emotion and ethics is not simple: we can also think things that change what we feel—the two are intertwined. This is the therapeutic model of cognitive behavioral therapy, for one—just the awareness of our feelings, and the improved practice of isolating them and seeing them clearly, lets us be less controlled by them. It is also a principle of rationalism that our feelings may lead to bad ideas and biases, that ideas derived from feeling may not agree with other ideas, and that with clear thinking we can work out what is more true and more important and discard what is less so, and what is less good—bad for us or for the world or for others.

Exercises like the shallow pond (I’m intentionally engaging with the present public idea of EA) are an effort to change our interaction with the world: by introducing a new idea about the world, they change our feelings. Modifying our sense of good with the introduction of ideas seems to have the effect of diluting our emotions—we care about more things, but not as intensely as we cared about what came before. As we modify our emotional relationship to the world with the application of thought, our engagement becomes increasingly intellectual and less emotional, despite an appeal to emotion being a component of that thinking.

At the same time, EA’s emphasis on “effectiveness” emerges from a countervailing emotion: a desire for honesty; for doing the best work one can and no less, and certainly not for doing less than the best and then congratulating oneself; for always ensuring the good can supersede one’s own pride; for engaging with the world as it is and not as we imagine it to be, or as it would be convenient for it to be for the work of doing good to be easy. It contrasts with “ineffective” (or not explicitly effective) altruism—the entire idea is born of a cynicism about the prioritization of the good-seeming over the actually-good; or, of the emotional sense of good over the intellectual/moral sense that emerges from intelligent examination. As a severe case we think of the “philanthropic-industrial complex” turning out in some cases to do nearly vacuous or merely self-serving work, whether due to incompetence or selfishness (and in this light, incompetence is selfish). EA by comparison comes off as a “Protestant Reformation” of globalized philanthropy. (I find it tedious to claim that EA is a religion; evidently it isn’t. Both are particular types of a more general class, which involves many of the same emotions, faculties, and tendencies.)

EA prioritizes rationality over emotionality by disposition

Emotions are not irrelevant to ethics or effectiveness; emotions lead to ideas and ideas lead to emotions. And yet EAs are generally—though I doubt this will be contentious—people who tend to be particularly unemotional, and who greatly prefer rational argument over emotional appeal in principle.

This is of course partly because empirical truth forms a common reality among people whose emotions do not necessarily agree. But it is also because, being generally academics and people working in finance or tech, they—or rather we, because though I do not count myself as an EA I certainly fit the profile—possess dispositions and talents particularly suited to rational thinking, and less so to emotion. And furthermore we are justifiably wary of the over-emotional, overly-aesthetic, under-rational, and incompetent modes associated with past ideologies.

These dispositions have a tendency towards asceticism, and towards gentle and safe lives. Asceticism is self-inflicted, not borne of necessity. The classical idea of an ascetic prioritizes something else over their own desires and well-being; perhaps they want things and go without, or perhaps they have learned to want nothing at all. It’s less compelling if they don’t know what they want; or if they’re an ascetic not because their resources are being used elsewhere but because they are unconvinced that they deserve anything for themselves. At worst, it’s a display of the appearance of being good—having nothing—without actually doing anything good.

EA members tend to live low-risk lives compared to the general population, both because many were born with considerable privilege and because they tend to make safer decisions. Corporate and finance jobs tend to be extremely well-paid, and many of us reached those roles mostly by doing what we were supposed to—going to college, studying STEM of some kind, getting a job, getting another—and doing well. I think it’s likely that EAs live lives of relatively little extremely-heightened emotion. We easily imagine communities that talk out their problems; they handle anger effectively, but also aren’t gripped by wanton passions. (Is your life less chaotic than those of the other people you grew up with, in general? Is it safer?) It is a predominantly male group, though perhaps no more so than the demographics it draws from. I don’t mean to say these things are bad, but that the EA group is relatively impassive.

There is also something undeniably fun about the rational mode of discourse and analysis, for those with a talent for it. There is joy in the fulfillment of curiosity, in understanding the world, in building up towards the achievement of real results by your understanding—and in finding out what you’re capable of, and showing others, and feeling like you said something smart, and in fitting in with other respectable intelligent people. Why wouldn’t you prefer this mode, over unreliable and often painful emotions?

And yet emotions seem to be essentially connected to morality—is it possible that EA, guided as it is by rationality, is completely missing something essential?

Altruism and longtermism abdicate duty

In one of its simplest forms, EA represents the principle that charitable giving should be directed to maximize some objective, like lives saved, globally. Longtermism takes this even further. Even setting aside the likely intractability of this problem, it implies a maximization problem in which all human lives are considered equal. This is a statement of belief about one’s duty.

Utilitarianism implies a presumption of equal obligation to all humans under consideration—there’s no getting away from deontology. (I happen to suspect that, if one could “sum the series of consequences”, it would also imply deontology in practice.)

This view of duty is ahistorical. Most people who have ever lived have considered their duty to extend to their friends, families, and neighbors; to people like them and near them and hierarchically above and below them, and along any other axis of commonality that intersects with themselves. Common humanity is one such axis, but it’s not at the top of the list. Even equipped with enlightened liberal philosophy, or international Marxist solidarity, or religious love, each emphasizing common humanity, people do not typically act like their main moral responsibility is to distant people.

This localized sense of duty derives principally from:

i) the fact that one’s sense of moral responsibility is derived from emotions, and there are people nearby (on any of those axes) who stimulate emotional reactions; it is only in the modern era that you can be exposed to far more information about suffering and need 5000 miles away than in your own city, and it is fair to say that in a sense your world consists of the information you take in and the things you know about;

ii) the fact that one’s power to do good is also fairly localized, since in another sense your world consists of what you have power over; and

iii) the fact that distant people have other people to take care of them!

The EA position that one’s duty is to the entire global (or future) population tends to be very upsetting to most other people, because they have other ideas about your duty! EA is a movement of some of the most powerful people in our society, and a certain “natural” reaction occurs when those with power seem not to take up a responsibility that ought to be theirs: that they have abdicated a serious duty, and that they correspondingly deserve to have their power taken away.

EAs, I think, don’t see this clearly—and I believe they ought to. They cannot see it because they don’t comprehend their own power, because they are too dispositionally unemotional, and because of what I view as a kind of insufficiency of character: a childishness which inclines them towards certain kinds of work, and which I intend to give form to so it can begin to be corrected.

What does power look like?

In its barest form, the function of power is to be insulated against necessity. It’s not having to worry about stuff; it’s peace and the freedom to have fun. Suburbs and tech companies are places of considerable power. Money is power, abstracted—the power to cause other people to do things for your advantage. Slightly more broadly, power lessens the obstacles to free action (but it does not necessarily involve acting with that freedom).

When we think of power we imagine wanton cruelty, or taking, or influence over government positions, or lives of luxury and excess. It can take all of these forms, but these things really constitute a kind of aesthetic idea of power—and an easy form to hate unambiguously. (Incidentally, this rejection of the aesthetic of power seems to be the most common basis for veganism—a common practice in EA.) When we think of power we imagine it being possessed by people who want power, and who act with great intention on the world to get it. Not tech workers who somewhat lazily found themselves in highly-paid careers, and only then started wondering what was worth doing. (Think of how many American millionaires don’t think of themselves as wealthy!)

(Relatedly, with this view of ubiquitous and inactive power, where even an exchange of labor is a kind of mutual exchange of power, one also concludes that having power over others is not necessarily bad—the criterion for what is right or wrong is something other than simply asking whether an action involves the exercise of power, even involuntarily. This is a good argument against veganism.)

Power likes to legitimize or downplay itself, or clothe itself in some garb other than the bare interest in power. Historically it has liked to arrange for the less-powerful to be in debt to the more-powerful, or for the powerless to deserve their station, or to explain the preservation of power as deserved, paternalistic, just stewardship, or benevolence. Under capitalism, “deservingness” translates to meritocracy, although merit is quite obviously contingent on the circumstances of your life far more than anything else. (Marxism, in this sense, was the invention of a new negative aesthetic of power, for industrial capitalism rather than the aristocracy.)

It seems bizarrely ahistorical that the rich tech world has so much power, and yet so little interest in actually pursuing it, or using it—in the exercise of free will for its own benefit! Except for the aesthetics, a wealthy tech worker arguably has power comparable to an aristocrat of 200 years ago. Modern financial capitalism and big tech benefit from an immense concentration of power, often without any of the customary aesthetics of power—ad-supported businesses don’t even take your money! (Just your time and attention, and some psychic toll—while giving you something in return, yes.) For the most part this new ruling class is uninterested in explicit exercises of power—they buy land, they build generational wealth, they amuse themselves with investments or just video games. But they are deeply interested in perpetuating the regime that maintains their power—none of these companies would knowingly act too much against their own interest.

Power comes with responsibility

If power is your ability to cause another to act for your own benefit or according to your own will, then duty—obligation—is a willful sharing of power: allowing yourself to act for someone else’s benefit. The view that finds EA’s attention to the distant and the far future offensive is operating on this principle: that personal power, in excess of what it takes to be safe and reasonably comfortable, should be shared with those around you. It should be shared with those who perceive you, and perceive you as related to them in some way, and preferentially with those who are nearest to you, solely on the basis of their proximity—though it could also be justified as a way of divvying up the work of caring for others among those with power. (There are plenty of other arguments here; one could also argue that generosity is justified by humility, observing the contingency and relative arbitrariness of one’s success.)

And people deeply resent a concentration of power that does not act according to this duty. In the case of the tech population, it’s not even an active rejection of the duty, but just a failure to observe it—either because they don’t recognize their own power, or because they are so isolated from humans with needs that they don’t recognize the duty their power entails, or because they don’t feel the duty at all, lacking consistent access to their own feelings. EA of course observes a duty, but, according to the above and to general human consensus, the wrong one.

(As an aside, it is my experience that the ascetic approach to life tends to pinch off access to my own feelings—if I do not do things I am interested in or that I desire, I down-regulate those desires, and with them all of my emotional access. This suggests a prescription for the lack of access to feelings: intentionally fan the flames of your interests and desires. Practice doing things because you want to and for no greater reason. This is the way to reignite the fire.)

Actually, what seems to go wrong is a greater abdication than simply failing to recognize an implied duty—I believe the rationally-dispositioned person actively resists this sense of duty, and finds actually following it deeply uncomfortable. I would characterize this as a kind of childish cowardice, with all the negativity that connotes.

As mentioned above, there is a certain fun in the use of rational ability—the thrill of finding out how clever you are, of being the first to think of something novel, of getting to have your name on it. EAs are deeply motivated to find overlooked areas in which to do good—and this can be good—because here are the low-hanging fruit, the easy wins. They prefer easy wins because dealing with the complex problems of the world around them is genuinely hard. They want to be effective, and to do honest and humble work, but there is a kind of bias towards work that is easy for them—tractable to their talents, and consistent with their idea of who they are: a kind of well-meaning wizard who shows generosity in bringing their cleverness to bear to help others. Taking that raw intellect and using it to become an expert in something else, something far messier and less neatly tractable, isn’t really an option, because it doesn’t appeal to that thrill.

I sense also in the EA approach a childish-seeming aversion to “playing by the rules”—a desire to disrupt even “doing good”, as if to show that you can do something new yourself. An unwillingness to work within existing structures (granting that the distrust of the non-rational establishment is generally well-placed!), because learning someone else’s ideas is far less fun, and you get less credit, and fixing complicated broken movements is more difficult than scoring easy philanthropic wins.

I even sense a childishness in the immense interest in criticism. There is a kind of archetypal progression: children get to play, secured by the power of others. They start to learn and develop skills, and delight in their abilities. They start to get interested in the world, and take on little responsibilities, and test their own limits and see what their abilities can accomplish, but still cautiously—it is a stage of seeking approval, and being validated by the engaged attention of others. This feels like the current stage of growth of the entire tech/rationalist culture, out of which EA has emerged. But there’s another stage on this archetypal journey: as soon as they start to try their powers out on the world, they immediately realize that this power inherently entails responsibility—that they can no longer be kids; something is suddenly asked of them. They grow up too fast.

Working on difficult thorny things for the people directly around you will ask a lot of you emotionally. You have to understand deeply, and bridge gaps of understanding, and educate people, and work with people who think differently than you. You have to figure out how to compromise and deliver the news and deal with disappointment; you have to take responsibility for your successes and failures. You have to play the part of the person who’s responsible. It asks much more of you than working on something distant and clean and intellectual where most of the work is posting on the internet. We should be wary of things that are easy.

When you live in a complete vacuum of necessity, it is very hard to tell what is important. It is easy to fall for things that are interesting or thrilling or novel or self-flattering. There is no reason to expect that important work should have any of these qualities, and we should be wary if it does. Important work is important, and (according to the implied duty to others) it is important because it helps those who need you to escape the constraints of necessity and access their own freedom. It is a mark of maturity, of being the adult in the room, to realize that nobody is going to decide what’s important for you—you have to decide yourself. You’re probably not even going to like it.

So what should EAs do?

As movements go, EA gets a lot right. I have no objection to “effective” at all, nor to “altruism” in general. Some institutions are particularly impartial in their application of altruism, and this is probably appropriate. It is the presumption of a global duty that raises an objection. Longtermism of the X-risk flavor in particular I have no patience for. If you buy the argument assembled here, you would conclude that nearly all longtermist efforts should be discarded, keeping only the most immediately practical (likely relating to clean energy, food supplies, and epidemics). Really, I’ve argued against all of EA as it currently exists, but so much of what it does is valuable—I would not want to see that go away. Just reallocated.

Instead of looking globally, look hyper-locally. Leave other places to other people—we’ll all divide up the work, if we trust each other. Find something to take responsibility for, and do it until you earn the trust of others that it’s taken care of. Find a person to help with your money, or work out what practical changes can meaningfully improve the lives of those around you, and personally apply your faculties to the project of enacting those changes. Work with others and don’t get too attached to the credit. Live in the world and delight in it. Touch grass, and enjoy it.

Create meaning. Decide what’s important for the people around you, and develop your conviction for it, and spread it. Construct “sacredness” around the things that are important, not in a religious sense but in the sense that elections are sacred to democracy—and if reality comes up short of the significance it’s imbued with (as elections do), endeavor to change reality to match. Significance requires work. A world without significance is one where nothing bears any emotional salience; where we zoom in so close to everything bad that it appears purely causal and rational and inevitable, and, not knowing what else we can do, we just sigh and say: “it is what it is”.


This is an EA criticism contest entry. I am not an EA follower myself, and am completely new to the forum. This essay is my attempt to give coherent shape to the objection I feel when I read about EA, which I recognize in others’ harsh emotional reactions as well, but which other criticisms I have read do not seem to capture sufficiently.