Here’s a link to my entry to the Criticism and Red Teaming Contest.
My argument is that EA’s underlying principles default towards a form of totalitarianism. Ultimately, I conclude that we need a reformulated concept of EA to safeguard against this risk.
Questions, comments and critiques are welcomed.
EDIT 16 JUNE 2022: Just a quick note to thank everyone for their comments. This is my first full post on the forum and it’s really rewarding to see people engaging with the post and offering their critiques.
I wasn’t convinced by your argument that basic EA principles have totalitarian implications.
The argument given seems too quick, and it relies on premises that strike me as pretty implausible, namely:
Given that this is the weakest part of the piece, I think the title is unfortunate.
Thanks for your three comments, all of which make excellent points. To briefly comment on each one:
(1)
The distinction you draw between (a) do the most good (with your entire life) and (b) do the most good (with whatever fraction of resources you’ve decided to allocate to altruistic ends) is a really good one. I firmly agree with your recommendation that the EA materials make it clearer that EA is recommending (b). If EA could reformulate its objectives in terms of (b), this would be exactly the type of strengthened weak EA I am arguing for in my piece.
(2)
Thanks for the links here. All of these are good examples of discussions of a form of weak EA as discussed by Michael Nielsen in his notes and built upon in my piece. I note that in each of the linked cases, there is a form of subjective ‘ad-hocness’ to the use of weak EA to moderate EA’s strong tendencies. I therefore have the same concerns as outlined in my piece.
(3)
You’ve touched upon what was actually (and still is) my second largest concern with the piece (see my response to ThomasWoodside above for the first).
I’m conscious that totalitarianism is a loaded term. I’m also conscious that my piece does not spend much time kicking the tyres of the concept. I deliberated for a while as to whether the piece would be stronger if I found another term, or limited my analysis to totalisation. I expect that the critique you’ve made is a common one amongst those who did not enjoy the piece.
My rationale for sticking with the term totalitarianism was twofold:
(A) my piece argues that we need to take what I argue are the logical outcomes of strong EA seriously, even if such consequences are clearly not the case today. As set out in my piece, my view is that the logical outcomes of an unmitigated form of strong EA would be (i) a totalising framework (i.e. one with the ability to touch all human life), and (ii) a small number of centralised organisations able to determine the moral value of actions. When you put these two outcomes together, there is at least the potential for an ideology which I think fits quite neatly into Dreher’s definition of totalitarianism as used in my piece and applied in your comment above. I therefore reached the view that to duck away from use of the term would be unfaithful to my own argument, as it would be turning a blind eye to what I see as a potential strong EA of tomorrow that is latent in the state of EA today.
(B) I thought totalitarianism was the best way of capturing and synthesising the two separate strains of my argument (externalisation and totalisation). Totalisation is only one element of this.
Thanks again for your really engaging comments.
This reminds me of Adorno and Horkheimer’s The Dialectic of Enlightenment, which argues, for some of the same reasons you do, that “Enlightenment is totalitarian.” A piece that feels particularly related:
They would probably say “alienation” rather than “externalization,” but have some of the same criticisms.
(I don’t endorse the Frankfurt School or critical theory. I just wanted to note the similarities.)
One thing to consider is moral and epistemic uncertainty. The EA community already does this to some extent, for instance MacAskill’s Moral Uncertainty, Ord’s Moral Parliament, the unilateralist’s curse, etc., but there is an argument that it could be taken more seriously.
This is a really interesting parallel—thank you!
It ties neatly into one of my major concerns with my piece: whether it can be interpreted as anti-rationality / a critique of empiricism (which is not the intention).
My reflexive reaction to the claim that “enlightenment is totalitarian” is fairly heavy scepticism (whereas, obviously, I lean in the opposite direction as regards EA), so I’m curious what distinctions there are between the arguments made in Dialectic and the arguments made in my piece. I will have a read of Dialectic and think through this further.
Strong EA’s aim of “doing the most good” risks slipping towards “doing the most good at any cost”, and thus totalitarianism as you say; perhaps it should be called “optimized altruism.”
Thanks for engaging with my piece and for these interesting thoughts—really appreciate it.
I agree that, on a personal level, turning ‘doing the most good’ into an instrumental goal towards the terminal goal of ‘being happy’ sounds like an intuitive and healthy way to approach decision-making. My concern, however, is that this is not EA, or at least not EA as embodied by its fundamental principles as explored in my piece.
The question that comes to my mind as I read your comment is: ‘is instrumental EA (A) a personal ad hoc exemption to EA (i.e. a form of weak EA), or (B) a proposed reformulation of EA’s principles?’
If the former, then I think this is subject to the same pressures as outlined in my piece. If the latter, then my concern would be that the fundamental objective of this reformulation is so divorced from EA’s original intention that the concept of EA becomes meaningless.
I think J.S. Mill’s On Liberty offers a compelling argument for why utilitarians (and, by extension, Strong EAs) ought to favour pluralism, “experiments in living”, and significant spheres of personal liberty.
So, as a possible suggestion for the “What should EA do?” section: Read On Liberty, and encourage other EAs to do likewise. (In the coming year I’ll be adding a ‘study guide’ on this to utilitarianism.net, which should be more accessible to a modern audience than the 19th century original.)
fwiw, my sense is that most EAs already share a Millian ethos rather than a totalitarian one! But it’s certainly important to maintain this.
Thanks for the recommendation. This dovetails nicely with my 4th recommendation (identify a firm philosophical foundation for the weakened form of EA I am proposing). The ‘spheres of personal liberty’ concept sounds like a decent starting point for a reformulation of the principle.
Hi, I enjoyed your article. Parts of it remind me of Popper’s “Utopia and Violence” in Conjectures and Refutations. Given that (strong) longtermist philosophy leads one to consider the value of an action in light of how much it could help bring about a particular utopia (often a techno-utopia), you might find inspiration in Popper’s essay for expanding your critique. (I don’t want to endorse any specific view here; I just thought this might help you build a better argument.)
Some quotes:
Thanks for this, and I can definitely see the parallels here.
Interestingly, from an initial read of the extracts you helpfully posted above, I can see Popper’s argument working for or against mine.
On the one hand, it is not hard to identify a utopian strain in EA thought (particularly in longtermism, as you have pointed out). On the other, I think there is a strong case to be made that EA is doing exactly what Popper suggests when he says: “Work for the elimination of concrete evils rather than for the realization of abstract goods. Do not aim at establishing happiness by political means. Rather aim at the elimination of concrete miseries.” I see the EA community’s efforts in areas like malaria and direct cash transfers as falling firmly within the ‘elimination of concrete evils’ camp.
I agree 100% that the EA community’s efforts in areas like malaria and direct cash transfers fall quite firmly within the ‘elimination of concrete evils’ camp. IIRC you differentiate between the philosophical foundations and the actual practice of effective altruism in your essay. So even if most current EA work is part of the aforementioned camp, the philosophical foundations might not actually imply this.
I’m skeptical of the section of your argument that goes “weak EA doesn’t suffer from totalization, but strong EA does, and therefore EA does.”
Why do you take strong EA as the “default” and weak EA as something that’s just “present”? I could equally say that weak EA is the “default” and strong EA is just “present”.
Adjudicating between these boils down to whether strong EA or weak EA is the better “true representation” of EA. And in answering that, I want to emphasize—EA is not a person with goals or positions. EA is what EAs do. This is normally a semantic quibble because we use “EA has the position X” as a useful shorthand for “most EAs believe X, motivated by their EA values and beliefs”. But making this distinction is important here, because it distinguishes between weak EA (what EAs do) and strong EA (what EAs mostly do not do). If most EAs believe in and practice weak EA, then I feel like it’s the only reasonable “true representation” of EA.
You address this later on by saying that weak EA may be dominant today, but we can’t speak to how it might be tomorrow. This doesn’t feel very substantial. Suppose someone objects to utilitarianism on the grounds “the utilitarian mindset could lead people to do horrible things in the name of the greater good, like harvesting people’s organs.” They then clarify, “of course no utilitarian today would do that, but we can’t speak to the behavior of utilitarians tomorrow, so this is a reason to be skeptical of utilitarianism today.” Does this feel like a useful criticism of utilitarianism? Reasonable people could disagree, but to me it feels like appealing to the future is a way to attribute beliefs to a large group even when almost nobody holds them, because they could hold those views.
Moreover, I think future beliefs and practices are reasonably predictable, because movements experience a lot of path-dependency. The next generation of EAs is unlikely to derive their beliefs just by introspecting towards the most extreme possible conclusions of EA principles. Rather, they are much more likely to derive their beliefs from (a) their pre-existing values, and (b) the beliefs and practices of their EA peers and other EAs whom they respect. Both of these are likely to be significantly more moderate than the most extreme possible EA positions.
Internalizing this point moderates your argument to a different form, “EA principles support a totalitarian morality”. I believe this claim to be true, but the significance of that as “EA criticism” is fairly limited when it is so removed from practice.
I agree with the following statement, which is well put:
I think there are some good examples of this, but they’re not sufficiently prominent in the introductory materials.
One I saw recently, from Luke Muehlhauser:
In a not-very-prominent article in the Key Ideas series, Ben Todd writes:
There’s also “You have more than one goal, and that’s fine” by Julia Wise.
Thanks for the post. I’ll post some quick responses, split into separate comments...
I agree that “do the most good” can be understood in a totalising way. One can naturally understand it as either:
(a) do the most good (with your entire life); or
(b) do the most good (with whatever fraction of resources you’ve decided to allocate to altruistic ends).
I read it as (b).
In my experience, people who think there are strong moral arguments for (a) tend to nonetheless think that (b) is a better idea to promote (on pragmatic grounds).
I’ve long thought it’d be good if introductions to effective altruism would make it clearer that: