Milk EA, Casu Marzu EA

Pretty much everyone starts off drinking milk, and while adult consumption varies culturally, genetically, and ethically, if I put milk on my morning bran flakes that’s a neutral choice around here. If my breakfast came up in conversation with a friend they might think it was dull, but they wouldn’t be surprised or confused. Some parts of effective altruism are like this: giving money to very poor people is, to nearly everyone, intuitively and obviously good.

Most of EA, however, is more like cheese. If you’ve never heard of cheese it seems strange and maybe not so good, but at least in the US most people are familiar with the basic idea. Distributing bednets or deworming medication, improving the treatment of animals, developing vaccines, or trying to reduce the risk of nuclear war are mild cheeses like Cheddar or Mozzarella: people will typically think “that seems good” if you tell them about it, and if they don’t, it usually doesn’t take long to explain.

In general, work that anyone can see is really valuable is more likely to already be getting the attention it needs. This means that people who are looking hard for what most needs doing are often going to be exploring approaches that are not obvious, or that initially look bizarre. Pursuit of impact pushes us toward stranger and stronger cheeses, and while humanity may discover yet more non-obvious cheeses over time, I’m going to refer to the far end of this continuum as the casu marzu end, after the cheese that gets its distinctive flavor and texture from live maggots that jump as you eat it. EAs who end up out in this direction aren’t going to be able to explain to their neighbor why they do what they do, and even explaining to an interested family member probably takes several widely spaced conversations.

Sometimes people talk casually as if the weird stuff is longtermist and the mainstream stuff isn’t, but if you look at the range of EA endeavors, the main focus areas all have people working along this continuum. A typical person can easily see the altruistic case for “help governments create realistic plans for pandemics” but not “build refuges to protect a small number of people from global catastrophes”; “give chickens better conditions” but not “determine the relative moral weight of insects at different ages”; “plan for the economic effects of ChatGPT’s successors” but not “formalize what it means for an agent to have a goal”; “organize pledge drives” but not “give money to promising high schoolers”. And I’d rate these all at most bleu.

I’ve seen this dynamic compared to motte-and-bailey or bait-and-switch. The idea is that someone presents EA to newcomers and only talks about the mild cheeses, when that’s not actually where most of the community, and especially its most highly engaged members, think we should be focusing. People might then think they were on board with EA when they would actually find a lot of what goes on under its banner deeply weird. I think this is partly fair: when introducing EA, even to a general audience, it’s important not to give the impression that these easy-to-present things are the totality of EA. In addition to being misleading, that risks people who would be a good fit for the stranger bits bouncing off. On the other hand, EA isn’t the kind of movement where “on board” makes much sense. We’re not about signing onto a large body of thought, or expecting everyone within the movement to think everyone else’s work is valuable. We’re united by a common question, how each of us can do the most good, along with a culture and intellectual tools for approaching it.

I think it’s really good that EA is open to the very weird, the mainstream, and everything in between. One of the more valuable things that EA provides, however, is intellectual company for people who are, despite often working in very different fields, pushing down this fundamentally lonely path away from what everyone can see is good.

Crossposted from LessWrong