Michael Nielsen’s “Notes on effective altruism”

Link post

Quantum physicist Michael Nielsen has published a powerful critical essay about EA.

Summary:

Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom.

Some passages I highlighted:

I have EA friends who donate a large fraction of their income to charitable causes. In some cases it’s all their income above some fairly low (by rich developed world standards) threshold, say $30k. In some cases it seems plausible that their personal donations are responsible for saving dozens of lives, helping lift many people out of poverty, and preventing many debilitating diseases, often in some of the poorest and most underserved parts of the world. Some of those friends have directly helped save many lives. That’s a simple sentence, but an extraordinary one, so I’ll repeat it: they’ve directly helped save many lives.

As extraordinary as my friend’s generosity was, there is something further still going on here. Kravinsky’s act is one of moral imagination, to even consider donating a kidney, and then of moral conviction, to follow through. This is an astonishing act of moral invention: someone (presumably Kravinsky) was the first to both imagine doing this, and then to actually do it. That moral invention then inspired others to do the same. It actually expanded the range of human moral experience, which others can learn from and then emulate. In this sense a person like Kravinsky can be thought of as a moral pioneer or moral psychonaut, inventing new forms of moral experience.

Moral reasoning, if taken seriously and acted upon, is of the utmost concern, in part because there is a danger of terrible mistakes. The Nazi example is overly dramatic: for one thing, I find it hard to believe that the originators of Nazi ideas didn’t realize that these were deeply evil acts. But a more everyday example, and one which should give any ideology pause, is overly self-righteous people, acting in what they “know” is a good cause, but in fact doing harm. I’m cautiously enthusiastic about EA’s moral pioneering. But it is potentially a minefield, something to also be cautious about.

When EA judo is practiced too much, it’s worth looking for more fundamental problems. The basic form of EA judo is: “Look, disagreement over what is good does nothing directly to touch EA. Indeed, such disagreement is the engine driving improvement in our notion of what is good.” This is perhaps true in some God’s-eye, omniscient, in-principle philosopher’s sense. But EA community and organizations are subject to fashion and power games and shortcomings and biases, just like every other community and organization. Good intentions alone aren’t enough to ensure effective decisions about effectiveness. And the reason many people are bothered by EA is not that they think it’s a bad idea to “do good better”. But rather that they doubt the ability of EA institutions and community to live up to the aspirations.

These critiques can come from many directions. From people interested in identity politics I’ve heard: “Look, many of these EA organizations are being run by powerful white men, reproducing existing power structures, biased toward technocratic capitalism and the status quo, and ignoring many of the things which really matter.” From libertarians I’ve heard: “Look, EA is just leftist collective utilitarianism. It centralizes decision-making too much, and ignores both price signals and the immense power that comes from having lots of people working in their own self-interest, albeit inside a system designed so that self-interest (often) helps everyone collectively.” From startup people and inventors I’ve heard: “Aren’t EAs just working on public goods? If you want to do the most good, why not work on a startup instead? We can just invent and scale new technology (or new ideas) to improve the world!” From people familiar with the pathologies of aging organizations and communities, I’ve heard: “Look, any movement which grows rapidly will also start to decay. It will become dominated by ambitious careerists and principal agent problems, and lose the sincerity and agility that characterized the pioneers and early adopters.”

All these critiques have some truth; they also have significant issues. Without getting into those weeds, the immediate point is that they all look like “merely” practical problems, for which EA judo may be practiced: “If we’re not doing that right, we shall improve, we simply need you to provide evidence and a better alternative”. But the organizational patterns are so strong that these criticisms seem more in-principle to me. Again: if your social movement “works in principle” but practical implementation has too many problems, then it’s not really working in principle, either. The quality “we are able to do this effectively in practice” is an important (implicit) in-principle quality.

I’ve heard several EAs say they know multiple EAs who get very down or even depressed because they feel they’re not having enough impact on the world. As a purely intellectual project it’s fascinating to start from a principle like “use reason and evidence to figure out how to do the most good in the world” and try to derive things like “care for children” or “enjoy eating ice cream” or “engage in or support the arts” as special cases of the overarching principle. But while that’s intellectually interesting, as a direct guide to living it’s a terrible mistake. The reason to care for children (etc) isn’t because it helps you do the most good. It’s because we’re absolutely supposed to care for our children. The reason art and music and ice cream matter isn’t because they help you do the most good. It’s because we’re human beings – not soulless automatons – who respond in ways we don’t entirely understand to things whose impact on our selves we do not and cannot fully apprehend.

Now, the pattern that’s been chosen by EA has been to insert escape clauses. Many talk about having a “warm fuzzies” budget for “ineffective” giving that simply makes them feel good. And they carve out ad hoc extension clauses like the one about having children or setting aside an ice cream budget or a dinner budget, and so on. It all seems to me like special pleading at a frequency which suggests something amiss. You’ve started from a single overarching principle that seems tremendously attractive. But now you’ve either got to accept all the consequences, and make yourself miserable. Or you have to start, as an individual, grafting on ad hoc extension clauses.

EA is an inspiring meaning-giving life philosophy. It invites people to strongly connect with some notion of a greater good, to contribute to that greater good, and to make it central in their life. EA-in-practice has done a remarkable amount of direct good in the world, making people’s lives better. It’s excellent to have the conversational frame of “how to do the most good” readily available and presumptively of value. EA-in-practice also provides a strong community and sense of belonging and shared values for many people. As moral pioneers EA is providing a remarkable set of new public goods.

All this makes EA attractive as a life philosophy, providing orientation and meaning and a clear and powerful core, with supporting institutions. Unfortunately, strong-EA is a poor life philosophy, with poor boundaries that may cause great distress to people, and underserves core needs. EA-in-practice is too centralized, too focused on absolute advantage; the market often does a far better job of providing certain kinds of private (or privatizable) good. However, EA-in-practice likely does a better job of providing certain kinds of public good than do many existing institutions. EA relies overmuch on online charisma: flashy but insubstantial discussion of topics like the simulation argument and x-risk and AI safety have a tendency to dominate conversation, rather than more substantial work. (This does not mean there aren’t good discussions of such topics.) EA-in-practice is too allied with existing systems of power, and does little to question or change them. Appropriating the term “effective” is clever marketing and movement-building, but intellectually disingenuous. EA views illegibility as a problem to be solved, not as a fundamental condition. Because of this it does poorly on certain kinds of creative and aesthetic work. Moral utilitarianism is a useful but limited practical tool, mistaking quantification that is useful for making tradeoffs for a fundamental fact about the world.

I’ve strongly criticized EA in these notes. But I haven’t provided a clearly and forcefully articulated alternative. It amounts to saying that someone’s diet of ice cream and chocolate bars isn’t ideal, without providing better food; it may be correct, but isn’t immediately actionable. Given the tremendous emotional need people have for a powerful meaning-giving system, I don’t expect it to have much impact on those people. It’s too easy to arm-wave the issues away, or ignore them as things which can be resolved by grafting some exception clauses on. But writing the notes both helped me better understand why I’m not an EA, and also why I think the EA principle would, with very considerable modification, make a valuable part of some larger life philosophy. But I don’t yet understand what that life philosophy is.