Milk EA, Casu Marzu EA
Pretty much everyone starts off drinking milk, and while adult consumption varies culturally, genetically, and ethically, if I put milk on my morning bran flakes that's a neutral choice around here. If my breakfast came up in talking with a friend they might think it was dull, but they wouldn't be surprised or confused. Some parts of effective altruism are like this: giving money to very poor people is, to nearly everyone, intuitively and obviously good.
Most of EA, however, is more like cheese. If you've never heard of cheese it seems strange and maybe not so good, but at least in the US most people are familiar with the basic idea. Distributing bednets or deworming medication, improving the treatment of animals, developing vaccines, or trying to reduce the risk of nuclear war are mild cheeses like Cheddar or Mozzarella: people will typically think "that seems good" if you tell them about it, and if they don't it usually doesn't take long to explain.
In general, work that anyone can see is really valuable is more likely to already be getting the attention it needs. This means that people who are looking hard for what most needs doing are often going to be exploring approaches that are not obvious, or that initially look bizarre. Pursuit of impact pushes us toward stranger and stronger cheeses, and while humanity may discover yet more non-obvious cheeses over time I'm going to refer to the far end of this continuum as the casu marzu end, after the cheese that gets its distinctive flavor and texture from live maggots that jump as you eat it. EAs who end up out in this direction aren't going to be able to explain to their neighbor why they do what they do, and explaining to an interested family member probably takes several widely spaced conversations.
Sometimes people talk casually as if the weird stuff is longtermist and the mainstream stuff isn't, but if you look at the range of EA endeavors the main focus areas of EA all have people working along this continuum. A typical person likely easily sees the altruistic case for "help governments create realistic plans for pandemics" but not "build refuges to protect a small number of people from global catastrophes"; "give chickens better conditions" but not "determine the relative moral differences between insects of different ages"; "plan for the economic effects of ChatGPT's successors" but not "formalize what it means for an agent to have a goal"; "organize pledge drives" but not "give money to promising high schoolers". And I'd rate these all at most bleu.
I've seen this dynamic compared to motte-and-bailey or bait-and-switch. The idea is that someone presents EA to newcomers and only talks about the mild cheeses, when that's not actually where most of the community (and especially the most highly-engaged members) think we should be focusing. People might then think they were on board with EA when they actually would find a lot of what goes on under its banner deeply weird. I think this is partly fair: when introducing EA, even to a general audience, I think it's important not to give the impression that these easy-to-present things are the totality of EA. In addition to being misleading, that also risks people who would be a good fit for the stranger bits bouncing off. On the other hand, EA isn't the kind of movement where "on board" makes much sense. We're not about signing onto a large body of thought, or expecting everyone within the movement to think everyone else's work is valuable. We're united by a common question, how we can each do the most good, along with culture and intellectual tools for approaching this question.
I think it's really good that EA is open to the very weird, the mainstream, and everything in between. One of the more valuable things that EA provides, however, is intellectual company for people who are, despite often working in very different fields, pushing down this fundamentally lonely path away from what everyone can see is good.
To continue the metaphor, suppose EA is the dairy industry, and realizes markedly higher profits (impact) the further up the weird-dairy ladder a consumer goes (e.g., it makes 5x as much from a cheddar consumer as from a milk consumer, 5x as much from a bleu consumer as from a cheddar one, etc.).
What does the extended metaphor suggest about how to market to maximize profit/impact? Obviously you want to make milk, cheddar, bleu, and casu marzu customers all feel like welcome members of the dairy empire. But given that the potential market size substantially diminishes as you step up the weird-dairy ladder, and the cost of customer acquisition increases, how much of your marketing resources should be spent on promoting each type of dairy?
My guess is that the EA ecosystem under-emphasizes acquiring new cheddar consumers, but I could easily be wrong. My theory is that the potential market for cheddar is still very large, and that most conversions to bleu will come from the cheddar crowd anyway.
I'm not sure the metaphor holds up.
I imagine there are many more people interested in AI Safety, Biosecurity, or Nuclear Risks who would be put off if they had to start by learning about the GWWC pledge.
Kelsey Piper, writing about Vox analytics: "Global poverty stuff doesn't do very well. This is something that makes me very sad, and it makes my mother very sad. She reads all my articles, and she's like, 'The global poverty stuff is the best, you should do more of that.' I also would love to do more of that. I think it's a really important topic, but it doesn't get nearly as many views or as much attention as both the existential risk stuff and sort of the animal stuff and the weird big ideas sort of content."
Fair point (although Vox's readers may not be representative of all or even most audiences, and pageviews may be only loosely correlated with willingness to commit; I find many things interesting to read and even write about that I wouldn't devote my career or serious money to).
Maybe it's not true of all potential cause areas, but I think most of them have a range of options from cheddar to maggot cheese. So cheddar does not necessarily imply global health, and maggots don't necessarily imply x-risk.
I think you're maybe treating the "clearly good"/mild end of this spectrum as being specific to global poverty? But I think there's a lot of x-risk work that's towards this end too: reducing the risk of nuclear war, reducing airborne pathogen spread, etc.
But with Jason's extension of the metaphor, I also think maybe Kelsey's audience on Vox wants to be challenged a bit, and the clearly-good stuff is less interesting. But that doesn't mean hitting them with the weirdest ideas anyone within EA is playing with is going to work well! You still need to match your offering to your audience, and balance wanting to introduce stranger things against not overwhelming them with something too different.
I think every cause can be presented normally or weirdly depending on how you do it; it was just that in that example Kelsey was discussing global development. I think a lot of people in EA assume more people are interested in global development than actually are, because they're just looking outside their bubble into a slightly larger bubble.
I would agree that it's usually best to introduce people to ideas closer to their interests (in any cause area) before moving onto related ones. Although sometimes they'll be more interested in the "weird" ideas before getting involved in EA, and EA helps them approach it practically.
On FB someone replied:
My response was that work you thought was positive on the basis of complicated reasoning is unusually likely to turn out to be negative for reasons you missed, and this is a real risk of trying to go so far from well-explored territory. So I'll endorse this aspect of the metaphor.
[EDIT: also see Counterproductive Altruism: The Other Heavy Tail]
I still think it works for some causes. I met people who thought it wasn't just bad, but evil to do wild animal welfare stuff. I'm not sure why, maybe their introduction to the idea was about predator euthanasia or something.
Yeah, my personal intro to EA is generally pretty aversive: "this is weird and you might not like it", rather than bednets. The people who push through that are, I think, happy to be in a weird movement, but I wouldn't want people to be blindsided.
I found it surprising that you described cash transfers as "milk" and bednets, vaccines and avoiding nuclear war as "cheese".
In my experience, it's more likely to be the latter category which is, "to nearly everyone, intuitively and obviously good."
By contrast, I've heard lots of people confidently and knowingly say that cash transfers don't work (because they don't get to the root of the problem, because the poor will waste the money on alcohol, etc.).
I interpret those criticisms of cash transfers as people saying they think you can do more good other ways, not that poor people having more money is neutral or harmful?
For the ones I described as mild cheeses, the idea is there's a little background knowledge required before you can see that the work is valuable, but people tend to already have that background.
One way to get at this is to look at what you see in world religions around charity: there's a lot about giving to the poor and not much about more complex ways of trying to make the world better.
Actually, I think the popular concept is that cash transfers are neutral or harmful. That's one reason why there was no charity like GiveDirectly until ~15 years ago, and arguably GiveDirectly would not exist today without funding from EA sources. The earliest news coverage I could find about GiveDirectly is not until 2011 (Time/NPR/Boston.com), and two of those pieces described it as "radical".
Thanks for digging up the early news coverage!
I interpret the "radical" claim in Time and NPR as "GiveDirectly proposes a massive change in how we address poverty". What about it makes you think it's intended in a "you would think that this proposal is actually harmful, but it's not" sort of way?
Unfortunately all three articles no longer have a comment section, and I couldn't load comments through the Internet Archive. But my memory of the non-EA discussion at the time was that it was all "there's got to be something better you can do" and not "this is useless or counterproductive"?
In my experience, an extremely common lay objection to GiveDirectly is something along the lines of, "Won't recipients waste the money on alcohol/drugs/tobacco/luxuries/etc.?", with a second-tier objection of, "Won't cash transfers cause inflation/conflict/dependence/etc.?".
I think both these questions have been pretty well addressed by the research, but those who are not aware of (or do not trust) that research are, I think, pretty likely to believe that cash transfers are neutral or harmful.
The second objection does sound like saying it is harmful, thanks!
The first one is more mixed. My interpretation has always been that people were saying they didn't think it was very useful, not that it was harmful: I doubt the person making the objection thinks that all of the money will go to buy luxuries, and if some of the money goes to buy valuable things and some goes to buy luxuries that are essentially morally neutral, then the effect is less positive than if it all went to buy valuable things. But maybe they think that providing luxuries is actually harmful, and not just neutral? (Which, conditional on thinking they spend lots of the money on drugs and alcohol, it could easily be, since it's funding people to buy addictive drugs they won't be able to continue consuming.)