Selective truth-telling: concerns about EA leadership communication.

Intro

I have had concerns about EA leadership communication as long as I’ve known about EA, and the FTX meltdown has persuaded me I should have taken them more seriously.

(By leadership I mean well-known public-facing EAs and organizations.)

This post attempts to explain why I’m concerned by listing some of the experiences that have made me uncomfortable. tl;dr: EA leadership has a history of being selective in the information they share in a way that increases their appeal, and this raises doubts for me over what was and wasn’t known about FTX.

I do not think I’m sharing any major transgressions here, and I suspect some readers will find all these points pretty minor.

I’m sharing them anyway because I’m increasingly involved in EA (two EAGs and exploring funding for an EA non-profit) and I’ve lost confidence in the leadership of the movement. A reflection on why I’ve lost confidence and what would help me regain it seems like useful feedback, and it may also resonate with others.

In other words, this is intended to be a personal account of why I’m lacking confidence, not an argument for why you, the reader, should also lack confidence.

2014: 80K promotional event in Oxford

I really wish I could find/remember more concrete information on this event, and if anyone recognizes what I’m talking about and has access to the original promotional material then please share it.

In 2014 I was an undergraduate at Oxford and had a vague awareness of EA and 80,000 Hours as orgs that cared about highly data-driven charitable interventions. At the time this was not something that interested me; I was really focussed on art!

I saw a flyer for an event with a title something like ‘How to be an artist and help improve the world!’ I don’t remember any mention of 80K or EA, and the impression it left on me was ‘this is an event on how to be a less pretentious version of Bono from U2’. (I’m happy to walk all of this back if someone from 80K still has the flyer somewhere and can share it, but this is at least the impression it left on me.)

So I went to the event, and it was an 80K event with Ben Todd and Will MacAskill. The keynote speaker was an art dealer (I cannot remember his name) who talked about his own career, donating large chunks of his income, and encouraging others to do the same. He also did a stump speech for 80K and announced ~£180K of donations he was making to the org.

This was a great event with a great speaker! It was also not remotely the event I had signed up for. Talking to Ben after the event didn’t help: his answers to my questions felt similar to the marketing for the event itself, i.e. say what you need to say to get me in the door. (Two rough questions I remember: Q: Is your approach utilitarian? A: It’s utilitarian flavoured. Q: What would you say to someone who e.g. really cares about art and doesn’t want to earn to give? A: Will is actually a great example of someone I think shouldn’t earn to give (he intended to at the time) as we need him doing philosophical analysis of the best ways to donate instead.)

This all left me highly suspicious of EA, and as a result I didn’t pay much attention to them for years after that. I started engaging again in 2017, and more deeply in 2021, when I figured everyone involved had been young, they had only been mildly dishonest (if I was even remembering things correctly), and I should just give them a pass.

Philosophy, but also not Philosophy?: Underemphasizing risk on the 80K website

My undergraduate degree was in philosophy, and when I started thinking about EA involvement more seriously I took a look at global priorities research. It was one of the five top-recommended career paths on 80K’s website, and one that calls for researchers in philosophy. From the 80K website at the time of writing:

In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy

(https://80000hours.org/problem-profiles/global-priorities-research/)

This article contrasts sharply with the 80K page on philosophy:

the academic job market for philosophy is extremely challenging. Moreover, the career capital you acquire working toward a career in philosophy isn’t particularly transferable. For these reasons we currently believe that, for the large majority of people who are considering it, pursuing philosophy professionally is unlikely to be the best choice.

(https://80000hours.org/career-reviews/philosophy-academia/)

It seems there are significant risks to pursuing further study in philosophy that 80K are well aware of, and it does not look great that they mention those risks in the context of general philosophical research (a path they presumably don’t much care whether their readers pursue) but omit them when discussing a career path they are eager for their readers to pursue. Spending seven years getting a philosophy PhD because you want to research global priorities, and then failing to find a position (the overwhelmingly likely outcome), does not sound like much fun.

This is a particularly clear example of a more general experience I’ve had with 80K material, namely being encouraged to make major life choices without an adequate treatment of the risks involved. I think readers deserve this information upfront.

Public Interviews (where is the AI?)

If you talk about EA’s priorities in 2022 and fail to mention AI, I do not think you are giving an accurate representation of EA’s priorities in 2022. But I’ve seen prominent TV and radio interviews this year where AI isn’t mentioned at all, I assume because interviewees are worried it won’t appeal to viewers/listeners.

Here is Ben Todd on a BBC show titled ‘What’s the best job to do good?’: https://www.bbc.co.uk/programmes/m000ystj. (Will MacAskill was also on the Daily Show recently, https://www.youtube.com/watch?v=Lm3LjX3WhUI, though the video seems to have since been pulled, perhaps due to FTX.)

I think EA’s answer to ‘What’s the best job to do good?’ is, all other things being equal, AI Safety and Biorisk work. But Ben barely mentions biorisk, and doesn’t mention AI at all. I was really uncomfortable listening to this, and I think most listeners encountering EA for the first time could justifiably feel bait-and-switched if they took a look at 80K’s website after listening to the show.

Recent Internet Takes

A lot of stuff has come out in the wake of FTX that wasn’t publicly discussed in EA but seems like it should have been.

I was pretty alarmed by this thread from Kerry Vaughan, which touches on Ben Delo, a major EA donor prior to SBF who pleaded guilty to “willfully failing to establish, implement, and maintain an anti-money laundering (‘AML’) program at BitMEX”: https://twitter.com/KerryLVaughan/status/1591508697372663810. The implication is that Ben Delo’s involvement with EA just quietly stopped being talked about, without any kind of public reflection on what could be done better going forward.

I was also surprised to see that Will MacAskill is in a Signal chat with Elon Musk, in which he previously tried to connect SBF with Elon to fund Elon’s Twitter acquisition: https://twitter.com/MattBinder/status/1591091481309491200. Not only does this strike me as a strange potential use of significant financial resources, it also raises questions about Will’s unadvertised relationship with a controversial public figure, one who founded a wildly successful AI capabilities research lab. Furthermore, Will’s tweet here about WWOTF implied to me that he didn’t know Elon personally: https://twitter.com/willmacaskill/status/1554378994765574144. It turns out he did; the above text messages were sent several months prior to that tweet.

Edit: I think I messed the above paragraph up. I’m leaving it in so the comments make sense, but thanks to Rob Bensinger for calling it out; see my subsequent comment here.

Finally, Rhodri Davies recently wrote a blog post (actually prior to the FTX scandal) titled ‘Why am I not an Effective Altruist?’, including the text below:

And there are even whistle blowers accounts of the inner workings of the EA community, with rumours of secret Google docs and WhatsApp groups in which the leaders of the movement discuss how to position themselves and how to hide their more controversial views or make them seem palatable. I have no idea how much of this is true, and how much is overblown conspiracy theory, but it certainly doesn’t make the whole thing feel any less cult-like.

https://whyphilanthropymatters.com/article/why-am-i-not-an-effective-altruist/

I have zero evidence for or against this happening, but it unfortunately fits the pattern of my prior experience with EA leadership communications.

Conclusion

Nobody made a single false statement in any of the examples I’ve given above, but they are all cases in which I have felt personally misled by omission. These examples range from cases that could be honest mistakes (the 80K careers page example) to ones where the omissions seem pretty intentional (the ‘art’ event, Ben Delo).

My suggestion to any public-facing EAs: don’t deliberately do this, and if you do it by mistake, take it seriously and course-correct. Withholding information because you suspect it will make me less supportive or more critical of your views, decisions, or actions smells of overconfidence and makes you difficult to trust, and this has happened to me regularly in my engagement with EA. Otherwise, well, I’ll probably still stick around, because EA contains plenty of great people, but I’ll have to be much more careful about who I collaborate with, and I won’t be able to endorse or trust EA’s public figures.