Hello, I’m Devin, I blog here along with Nicholas Kross. Currently working on a bioethics MA at NYU.
Marginal Cases, On Trial
To be clear, I also agree with this.
These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one is going to lead people to give it more credit than critiques that were at least as substantially right, but much more harshly phrased.
The point about ideologies being a minefield, with Nazis as an example, particularly stands out to me. I pattern match this to the parts of harsher critiques that go something like “look at where your precious ideology leads when taken to an extreme, this place is terrible!” Generally, the substantive mistake these make is casting EA as ideologically purist, ignoring the centrality of projects like moral uncertainty and worldview diversification, as well as the limited willingness of EAs to bite bullets whose background logic they in principle largely endorse (see Pascal’s Mugging and Ajeya Cotra’s train to crazy town).
By not telling us outright what terrible things we believe, but instead implying that we are at risk of believing terrible things, this piece is less unflattering, but it is on shakier ground. It involves the same mistake about EA’s ideological purism, but on top of this it has to defend a further, higher-level claim rather than looking at concrete implications.
Was the problem with the Nazis really that they were too ideologically pure? I find it very doubtful. The philosophers of the time attracted to them, like Heidegger, were generally weird humanistic philosophers with little interest in the types of purism that come from analytic ethics. Meanwhile most philosophers closer to this type of ideological purity (Russell, Carnap) despised the Nazis from the beginning. The background philosophy itself largely drew from misreadings of people like Nietzsche and Hegel, popular anti-semitic sentiment, and plain old historical conspiracy theories. Even at the time, intellectual critiques of the Nazis often looked more like “they were mundane and looking for meaning from charismatic, powerful men” (Arendt) or “they aestheticized politics” (Benjamin) rather than “they took some particular coherent vision of doing good too far”.
The truth is the lesson of history isn’t really “moral atrocity is caused by ideological consistency”. Occasionally atrocities are initiated by ideologically consistent people, but they have also been carried out casually by people who were quite normal for their time, or by crazy ideologues who didn’t have a very clear, coherent vision at all. The problem with the Nazis, quite simply, is that they were very, very badly wrong. We can’t avoid making the mistakes they did from the inside by pattern matching aspects of our logic onto theirs in ways that really aren’t historically vindicated; we have to avoid moral atrocity by finding more reliable ways of not winding up very wrong.
To be clear, I wasn’t trying to imply that Tomasik supports extinction, just that, if I have to think about the strongest case against preventing it, it’s the sort of Tomasik on my shoulder that is speaking loudest.
Good clarifications, endorsed.
“Tomasikian” refers to the Effective Altruist blogger Brian Tomasik, who is known for pioneering an extremely bullet-biting version of “suffering-focused ethics” (roughly negative utilitarianism, though from my reading, he may mix in some preference satisfactionism and prioritarianism as well). The suffering empathy exercises I’m referring to aren’t really a specific thing, but more the style he uses when writing about suffering to try to get people to understand his perspective on it. Usually this involves describing real-world cases of extreme suffering, and trying to get people to see the desperation one would feel if they were actually experiencing it, to take that seriously, and to see the inadequacy of academic dismissals in the face of it. A sort of representative quote:
“Most people ignore worries about medical pain because it’s far away. Several of my friends think I’m weird to be so parochial about reducing suffering and not take a more far-sighted view of my idealized moral values. They tend to shrug off pain, saying it’s not so bad. They think it’s extremely peculiar that I don’t want to be open to changing my moral perspective and coming to realize that suffering isn’t so important and that other things matter comparably. Perhaps others don’t understand what it’s like to be me. Morality is not an abstract, intellectual game, where I pick a viewpoint that seems comely and elegant to my sensibilities. Morality for me is about crying out at the horrors of the universe and pleading for them to stop. Sure, I enjoy intellectual debates, interesting ideas, and harmonious resolutions of conflicting intuitions, and I realize that if you’re serious about reducing suffering, you do need to get into a lot of deep, recondite topics. But fundamentally it has to come back to suffering or else it’s just brain masturbation while others are being tortured.”
The relevant post:
https://reducing-suffering.org/the-horror-of-suffering/
The big one I can think of, which is related to some of the ones you mention, is leximin or strong enough prioritarianism. The worst-off beings human persistence would cause to exist are likely to live net-negative lives, possibly very strongly net-negative lives if we persist long enough, and on theories like this, benefits to these beings (like preventing their lives) count for vastly more than benefits to better-off beings (like giving those beings good lives rather than no lives). I don’t endorse this view myself, but I think it is the argument that most appeals to me in my moods when I am most sympathetic to extinction. When I sort of inhabit a Tomasikian suffering empathy exercise, and imagine the desperation of the cries of the very worst-off being from the future, calling back to me, I can be tempted to decide that rescuing this being in some way is most of what should matter to me.
Oops, thanks!
Thanks for the comment. I agree with most of this, and think that this is one of the major possible costs of labels like this, but I worry that some of these costs get more attention than the subtler costs that come from failing to label groups like this. Take the label “Effective Altruism” itself, for example: the label does mean that people in the movement might have a tendency to rest easy, knowing that their conformity to certain dogmas is shared by “their people”, but not using the label would mean willfully ignoring something big that was actually true to begin with about one’s social identity/biases/insularity, and would hamper certain types of introspection and social criticism.
Even today there are pretty common write-ups by people looking to dissolve some aspect of “Effective Altruism” as a group identifier, as opposed to a research project or something. This is well-meaning, but in my opinion it has led to a pretty counterproductive movement-wide motte-and-bailey that often influences discussions. When selling the movement to others, or defending it from criticism, Effective Altruism is presented as a set of uncontroversial axioms pretty much everyone should agree with, but in practice the way Effective Altruism is discussed and works internally does involve implicit or explicit recognition that the group is centered around a particular network of people and organizations, with their own internal norms, references, and Overton window.
I think a cost like this, if to a lesser extent, comes from failing to label the real cliques and the distinct styles of reasoning and approaches to doing good that to some extent polarize the movement. This is particularly the case for some of the factors I discuss in the post, like the fact that different parts of the movement feel vastly more or less welcoming to different people, or that large swaths of the movement may feel like a version of “Effective Altruism” you can identify with while others don’t, which makes the label of Effective Altruism itself less useful. For people who have been involved in different parts of the movement and are comfortable moving between the different subcultures (I would count myself here, for instance), this tension may be harder to relate to, but it is a story I often hear, especially from people first being exposed to the movement. I think this is enough to make using these labels useful, at least within certain contexts.
Agreed, in retrospect it is pretty obvious that there is no good way to attach the prefix “a” to a word that starts with an “a” and have anyone intuitively get what you mean.
Thank you! I’ll admit my experience is a bit limited, and I haven’t had much exposure to Desi EAs, but this sounds like a good extra category that I, and maybe others in EA from the US or UK, often neglect in our analyses. “IDW” stands for “Intellectual Dark Web”, which is sort of the name given to a group of centrist or right-leaning, anti-woke public intellectuals like Steven Pinker and Sam Harris. “A-aesthetic” is just supposed to mean “non-aesthetic”, as in avoiding attaching some particular cultural aesthetic to one’s messaging.
Thanks! I don’t think that’s quite it. I think wholesome EA is in general more skeptical of weird, unproven ideas than many of the others, and is at pains to qualify arguments for such ideas with disclaimers that they are going to sound strange and are maybe wrong, whereas contrarian EA is less interested in that. Cheerful utilitarians are also to an extent characterized by being unconcerned with ideas others find weird, but pretty specifically along the moral axis, and with less attraction to weirdness per se than the contrarians. I view them as more being the faction that will casually talk about how excited they are to continuously line our lightcone with computronium powering orgy simulations in a trillion years. There is maybe some tendency to emphasize the weird ideas a bit in this group as well, but not so much because these ideas are just fun to think about as because it is sassy in a way cheerful utilitarians are disposed to be.
I’d be very curious to see something like this. My guess is it will be hard to extract the type of vague cultural currents I’m talking about from other distinctions that might exist in the data, like people focusing on different cause areas, or from different parts of the political spectrum.
I hope so, something like this is maybe one of my motives in making these distinctions explicit. I think this is part of what I meant when discussing the phenomenon of people feeling deceived when the movement looks much different from what they thought. In all likelihood the side of it they were originally interested in does exist and plays a genuine role in the movement, but it may have been put forward with something like the sense that “this is what Effective Altruism looks like” rather than “this is a side of Effective Altruism that might work for you”.
The Many Faces of Effective Altruism
One theory that I’m fond of, both because it has some explanatory power and because, unlike other theories about this with explanatory power, it is useful to keep in mind and not based as directly on misconceptions, goes like this:
-A social group that has a high cost of exit can afford to raise the cost of staying. That is, if it would be very bad for you to leave a group you are part of, the group can more successfully pressure you to be more conformist, work harder in service of it, and tolerate weird hierarchies.
-What distinguishes a cult, or at least one of the most important things that distinguishes it, is that it is a social group that manually raises the cost of leaving, in order to also raise the cost of staying. For instance it relocates people, makes them cut off other relationships, etc.
-Effective Altruism does not manually raise the cost of leaving for this purpose, and neither have I seen it really raise the cost of staying. Even more than most social groups I have been part of, being critical of the movement, having ideas that run counter to central dogmas, and being heavily involved in other competing social groups, are all tolerated or even encouraged. However,
-The cost of leaving for many Effective Altruists is high, and much of it is self-inflicted. Effective Altruists like to live with other Effective Altruists, make mostly Effective Altruist close friends, enter romantic relationships with other Effective Altruists, work at Effective Altruist organizations, and believe idiosyncratic ideas mostly found within Effective Altruism. Some of this is out of a desire to do good; speaking from experience, much of it is because we are weirdos who are most comfortable hanging out with people who are similar types of weirdos to us, and who have a hard time with social interactions in general. Therefore,
-People looking in sometimes see the things from point four, the things that contribute to the high cost of leaving, and even if they can’t put what’s cultish about it into words, they worry about possible cultishness, and don’t know the stuff in point three viscerally enough to be disabused of this impression. Furthermore, even if EA isn’t a cult, point four is still important, because it increases the risk of cultishness creeping up on us.
Overall, I’m not sure what to do with this. I guess be especially vigilant, and maybe work a little harder to have as much of a life as possible outside of Effective Altruism. Anyway, that’s my take.
Really? I didn’t find their reactions very weird, how would you expect them to react?
I’m not so sure about this. Speaking as someone who talks with new EAs semi-frequently, it seems much easier to get people to take the basic ideas behind longtermism seriously than, say, the idea that there is a significant risk they will personally die from unaligned AI. I do think that diving deeper into each issue sometimes flips reactions (longtermism takes you to weird places on sufficient reflection, AI risk looks terrifying just from compiling expert opinions), but favoring the approach that shifts the burden from the philosophical controversy to the empirical controversy doesn’t seem like an obviously winning move. The move that seems both best for hedging this, and just the most honest, is to be upfront about your views on both the philosophical and the empirical questions, and to trust that convincing someone of even a somewhat more moderate version of either or both views will make them take the issues much more seriously.
Re: “In particular, there is no secret EA database of estimates of effectiveness of every possible action (sadly). When you tell people effective altruism is about finding effective, research-based ways of doing good, it is a natural reaction to ask: “so, what are some good ways of reducing pollution in the Baltic Sea / getting more girls into competitive programming / helping people affected by [current crisis that is on the news]” or “so, what does EA think of the effectiveness of [my favorite charity]”. Here, the honest answer is often “nobody in EA knows””
Yeees, this is such a common first reaction I have found in people first being introduced to Effective Altruism. I always really want to give some beginning of an answer but feel self-conscious that I can’t even give an honest best guess from what I know without sort of disgracing the usual standards of rigor of the movement, and misrepresenting its usual scope.
I think this has gotten better, but not as much better as you would hope, considering how long EAs have known this is a problem, how much they have discussed it being a problem, and how many resources have gone into trying to address it. I think there’s actually a bit of an unfortunate fallacy here: that it isn’t really an issue anymore because EA has gone through the motions of addressing it and had at least some degree of success. See Sasha Chapin’s relevant thoughts:
https://sashachapin.substack.com/p/your-intelligent-conscientious-in?s=r
Some of the remaining problem might come down to EA filtering for people who already have demanding moral views and excessively conscientious personalities. Some of it is probably due to the “by-catch” phenomenon the anon below discusses, which comes with applying expected value reasoning to having a positively impactful career (still something widely promoted, and probably for good reason overall). Some of it is this other, deeper tension that I think Nielsen is getting at:
Many people in Effective Altruism (I don’t think most, but many, including some of the most influential) believe in a standard of morality that is too demanding for real people to realistically reach. Given the prevalence of actualist over possibilist reasoning in EA ethics, and just not being totally naive about human psychology, pretty much everyone who does believe this is on board with compartmentalizing do-gooding, or do-besting, from the rest of their life. The trouble runs deeper than this, unfortunately, because once you buy an argument that letting yourself have this is what will be best for doing good overall, you are already seriously risking undermining the psychological benefits.
Whenever you do something for yourself, there is a voice in the back of your head asking if you are really so morally weak that this particular thing is necessary. Even if you overcome this voice, there is a worse voice that instrumentalizes the things you do for yourself. Buying ice cream? This is now your “anti-burnout ice cream”. Worse, have a kid (if you, like in Nielsen’s example, think this isn’t part of your best set of altruistic decisions), and this is your “anti-burnout kid”.
It’s very hard to get around this one. Nielsen’s preferred solution would clearly be that people just don’t buy this very demanding theory of morality at all, because he thinks it is wrong. That said, he doesn’t really argue for this, and it isn’t an open avenue for those of us who actually do think the demanding ideal of morality happens to be correct.
The best solution, as far as I can tell, is to distance your intuitive worldview from this standard of morality as much as possible. Make it a small part of your mind, one you internalize largely on an academic level and maybe take out on rare occasions for inspiration, but insist on not viewing your day-to-day life through it. Again though, the trickiness of this is, I think, a real part of why some of this problem persists, and I think Nielsen nails this part.