Revisiting EA’s media policy
Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.
Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis suggesting that he agreed with a lot of my criticisms and planned to have a much more active involvement with the media. If so, this post may be largely redundant—nonetheless, it seems worth having the conversation in public.
CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it was an unofficial policy well before that. In a low-fidelity nutshell, this is the claim that EA ideas are somewhat nuanced and media reporting often isn’t, and so it’s generally not worth pursuing—and often worth actively discouraging—media communication unless you’re a) extremely confident the outlet in question will report the ideas exactly as you describe them and b) qualified to deal with the media.
In practice, because CEA pull many strings, this being CEA policy makes it de facto EA policy. ‘Qualified to deal with the media’ seems to mean ‘CEA-sanctioned’, and I have heard of at least one organisation being denied CEA-directed money in part because it was considered too accommodating of the media. Given that ubiquity, I think it’s worth discussing the policy in more depth. We have five years of results to look back on and, to my knowledge, no further public discussion of the subject. I have four key concerns with the current approach:
1. It’s not grounded in research
2. It leads to a high proportion of negative coverage for EA
3. It assumes a Platonic ideal of EA
4. It contributes to the hero-worship/concentration of power in EA
Not empirically grounded
The fidelity model article assumes that low-fidelity spreading of EA ideas is necessarily bad, but doesn’t give any data beyond some very general anecdotes to support this. There’s an obvious trade-off between a small number of people doing something a lot like what we want and a larger number doing something a bit like what we want, and it’s very unclear which has higher expected value.
To see the case for the alternative, we might compare the rise of the animal rights movement in the wake of Peter Singer’s original argument for animal welfare. The former is a philosophically mutated version of the latter—something ‘similar but different’, which fidelity model reasoning would apparently treat as undesirable. Similarly, the emergence of reducetarianism/flexitarianism looks very like what the fidelity model would consider a ‘diluted’ version of the veganism Singer advocated. My sense is that both have nonetheless been strong net positives for animal welfare.
High proportion of negative coverage
If you have influence over a group of supporters and you tell them not to communicate with the media, one result you might anticipate is that a much higher proportion of your media coverage comes from the detractors, who you don’t have influence over. Shutting out the media can also be counterproductive—they’re (mostly) human, and so tend to deal more kindly with people who deal more kindly with them. I have three supporting anecdotes, one admittedly eclipsing the others:
At the time of writing, if you Google ‘effective altruism news’, you still get something like this.
Similarly, if you look at Will’s Tweetstorm decrying SBF’s actions, the majority of responses are angrily negative responses to a movement that consorted with crypto billionaires, as though that’s all EA has ever been about. It seems we’ll have to deal with this being the first impression many people have formed of the movement for quite some time.
‘The child abuse thing’
A few years ago I was at a public board games event. Sitting at the table to introduce myself with my usual social flair, I decided to mention that I was into EA as a conversation starter. My neighbour’s response was ‘effective altruism… oh right—the child abuse thing.’
I won’t link to the source that informed that delightful conversation, but in brief: a former EA with serious mental health issues had committed suicide after allegedly being sexually harassed by rationalists and/or EAs. She had written an online suicide note that didn’t distinguish between the two groups, and that made a lot of allegations against the broader communities, most of which I believe demonstrably lacked justification. But evidently, this was passed around widely enough by people who disliked the movement that it was literally the only thing my interlocutor had heard of us.
Shortly after CEEALAR (then the EA Hotel) was founded, I was contacted by an Economist journalist excited about the project who wanted to write about it. Not knowing then of the CEA policy, I invited him to come and view it. I mentioned this to a friend, who informed me of the CEA policy and, on the basis of it, strongly urged us not to engage.* So we backtracked and asked the journalist not to visit. He did anyway, and we literally turned him away at the front door. You can read the article here and form your own opinions—mine is that the last paragraph reads very like a substitute for what would have been an engaged look at what people were doing in the hotel, had we not both reduced the substance of the journalist’s story and presumably annoyed him in the process.
*[edit: she pointed out in the comments this was mostly advice from her own experience, not based on CEA policy
edit 2: to be clear, we weren’t aware of any pressure from CEA to do so—just that they had published articles advising against engaging with journalists]
Assumes a Platonic ideal of EA
One of the most acclaimed forum posts of all time (karma adjusted) is Effective Altruism is a Question (not an Ideology). It is hard to square the letter of that post, let alone the spirit, with the thought that sharing EA ideas among the masses will distort them into ideas that are ‘related to, but importantly different from, the ideas we want to spread’ - and that this will necessarily be a bad outcome.
The fidelity article describes EA as ‘nuanced’, but honestly, I don’t think it particularly is at its core. People who publicly condemn it don’t seem to have got any factual details wrong. They either have a different emotional response to it, or they’re critical of how it’s put into practice. In the former case, maybe there was nothing we could have done—or maybe more exposure would have made the ideas feel more normal. In the latter case, if we withhold the logic behind these practices, we give up ~8 billion chances for it to be improved by critical scrutiny.
Contributes to the hero-worship and the concentration of power in EA
To my knowledge, there have been three major EA publicity campaigns. The first was the launch of Giving What We Can, which in practice (and to his chagrin) focused attention on Toby Ord more than his project. The second was the launch around Doing Good Better, which intentionally put Will in the spotlight: during and since, he has given multiple high-profile interviews, and CEA pays for dozens of copies of the book to be given away at every EA conference. The most recent—and arguably ongoing—was the launch around What We Owe the Future, which again intentionally put Will in the spotlight: during and since, he has given multiple high-profile interviews, and CEA seem to intend to pay for dozens of copies to be given away at every future EA conference. To a lesser degree, The Precipice has also been supported, again being given away by the dozen at EA conferences.
Will and Toby are arguably the main founders of the EA movement, so it’s natural to focus on them to some extent—but this effect can still cause a feedback loop that amplifies their opinions beyond what seems epistemically healthy.
Also, identifying the movement with a small number of individuals creates a huge failure mode, which we might be in the middle of facing. If Will’s reputation is tarnished by the above association with FTX, however unfairly, the movement will suffer—to say nothing of the fact that SBF himself was one of the few EAs whose media engagement was encouraged. Even if those associations fade over time, highlighting a very small number of thought leaders inevitably gives the movement critical points of failure.
It may also be bad for Will himself, since it puts him under an incredible amount of pressure to adhere to middle-of-the-road social norms, some of which he may be uncomfortable with.
Some (over?)generalisations of the concern
I have a couple of broader hypotheses about shortcomings of EA epistemics to which this is related. Both deserve their own post, but since it might be some time before I can write those posts, it seems worth raising them here, for potential side discussion. Needless to say, my own epistemic status on these is ‘tentative’:
both CEA and the wider EA community should take more seriously the idea that when someone rejects an EA idea (perhaps beyond the foundational notion of ‘optimise do-gooding’), it might be because they have some insight into the ‘question’ of effective altruism—not just because they didn’t understand it. This dismissive attitude seems to inform, for example, the focus on recruiting young people to the movement, which seems to have been justified in part because they’re basically less likely to reject our way of doing things.
the EA community seems far too willing to rely for long periods on rough and ready heuristic reasoning, often based on a single speculative argument, on some very important questions which deserve serious research. This is a theme I’ve raised and seen raised in various other contexts.
With all that’s going on at the moment, this might be an inopportune time for everyone to start rushing out to chat to journalists. So I don’t have any specific replacement policy in mind, but I want to propose some ideas:
Public discussion of the policy between EAs, CEA employees, and the employees of other EA funders—including the latter making explicit to what extent media engagement will be a consideration in their funding decisions
More explicit acknowledgement from CEA of the epistemic and PR problems of promoting a very small number of thought leaders
More of an experimental approach to media policy in ways that wouldn’t be too damaging if they went wrong. For example, CEA could start by trying a more liberal policy in languages with relatively small numbers of native speakers
Some kind of historical/statistical research into the outcomes when other groups (and early EAs) have had to make a similar choice
Thanks to Linda Linsefors, Ze Shen, Michal Keda, Shakeel Hashim, Siao Si Looi and Emily Dardaman for feedback and encouragement on this post.
I’m able to access the article freely, but at least one person said it was paywalled for them, so the paragraph in question is this—though I would suggest reading the rest of the article for context if you can:
‘If residents tire of their selfless work, Blackpool’s central pier—home to a “true Romany palmist” and scores of arcade games—is a short stroll away. Visitors are welcome at the hotel (partly to deter “cult-like tendencies”), though prices for non-altruists are set above market rates. None of the residents was keen to talk to The Economist. So far, the new arrivals do not seem to have caused much of a stir in Blackpool. But one hotelier complains that a recent party kept up his guests. “They’re noisy fuckers,” he grumbles of the do-gooders. Keeping the volume down would at least be one easy way for altruists to improve the lives of locals.’
Luisa’s post starts with the epistemic status ‘In general, I consider it a first step toward understanding this threat from civilizational collapse — not a final or decisive one’, but in conversation she said she had the sense people have been treating it as a concrete answer to the question.