EA needs more humor

In the wake of the FTX collapse, much ink has been spilled on EA reform by people smarter and more experienced in this space than I am. However, as someone who has engaged with EA over the past few years and become increasingly connected with the community, I have a modest proposal I’d like to share: EA needs more humor.

Criticism of EA dates back to the “earning to give” era. This Stanford Social Innovation Review editorial from 2013 describes EA as “cold and hyper-rationalistic,” deeming the idea of numerically judging charities “defective altruism.” This piece in Aeon from 2014 essentially argues that the EA utilitarian worldview opposes art, aesthetic beauty, and creativity in general.

Criticism of EA has only intensified in recent years with the rise of longtermism. Another Aeon editorial, this one from 2021, characterizes the “apocalypticism” of longtermist thought as “profoundly dangerous” while also lampooning EA organizations like the “grandiosely named Future of Humanity Institute” and “the even more grandiosely named Future of Life Institute.”

In the last few months before the FTX situation, criticism was directed at Will MacAskill’s longtermist manifesto, What We Owe the Future. A Wall Street Journal review concludes that “‘What We Owe the Future’ is a preposterous book” and that it is “replete with highfalutin truisms, cockamamie analogies and complex discussions leading nowhere.” A Current Affairs article once again invokes the phrase “defective altruism” and asserts that MacAskill’s book shows how EA as a whole “is self-righteous in the most literal sense.”

The above examples are, of course, just a small snapshot of the criticism EA has faced. However, I think they capture a common theme in EA critiques. Overall, critics tend to characterize EA as a community of cold, calculating, imperious, pretentious people who take themselves and their ostensible mission to “save humanity” far too seriously.

To be honest, a lot of EA criticism seems like it’s coming from cynical, jaded adults who relish the opportunity to crush young people’s ambitious dreams about changing the world. I also think many critics don’t really understand what EA is about and extrapolate from a glance at its most radical ideas or make unfair assumptions based on a list of EA’s high-profile Silicon Valley supporters.

However, there is a lot of truth to what critics are saying: EA’s aims are incredibly ambitious, its ideas frequently radical, and its organizations often graced with grandiose names. I also agree that the FTX/​SBF situation has exposed glaring holes in EA philosophy and shortcomings in the organization of the EA community.

Still, my personal experience in this community has been that the majority of EAs are not cold, calculating, imperious, or pretentious but warm, intelligent, honest, and altruistic individuals who wholeheartedly want to “do good better.”

I think one thing the EA community could do moving forward to improve its external image and internal function is to embrace a bit more humor. EA could stand to acknowledge and make fun of the craziness of comparing the effectiveness of charities as disparate as a deworming campaign and a policy advocacy group, or the absurdity of outlining an extinction event brought about by a superintelligent AI.

I say these ideas are absurd not because I don’t believe in them; I have the utmost respect for rigorous charity evaluators like GiveWell and am convinced that AI is indeed the most important problem facing humanity. But I think that acknowledging the external optics of these ideas and, to a degree, joking about how crazy they may seem could make EA more palatable to people on the outside looking in.

There have already been solid examples of humor in the community. Criticism aside, Will MacAskill’s What We Owe the Future contains an excellent instance of levity. On page 15, MacAskill announces that he will depict all potential future generations using little stick figures, each representing 10 billion humans. The next three pages are filled with figures from top to bottom, and at the end MacAskill notes that the full version would actually fill 20,000 pages.

Some reviewers have not appreciated this stick-figure stunt, but I personally laughed out loud when I flipped through these pages. I think this is a perfect example of subtle humor that acknowledges the absurdity of longtermism while also conveying a critical understanding of the scope and scale of the far-off future. By using pictograms that look like the figures on a bathroom door sign, MacAskill adds playfulness to his analysis and softens the grandiosity inherent in declaring that you will depict all of humanity’s future generations. These pages successfully joke about the ambitions of longtermism while still taking the subject seriously.

Another example of EA humor can be found in Holden Karnofsky’s “The Most Important Century” blog series. For those unfamiliar, this is a series of posts arguing that the 21st century could be the most important in humanity’s history; in other words, a project only an EA would have the audacity to undertake.

A key part of Karnofsky’s argument is the potential for transformative AI to arrive this century. AI is, for EAs, a deadly serious topic. But talk of AI development has a tendency to get science fiction-y very quickly and seems (especially to people outside of EA) odd at best or overblown at worst. So, in a stroke of genius, Karnofsky defines his conception of transformative AI as “Process for Automating Scientific and Technological Advancement, or PASTA.”

In a mock-apologetic footnote, Karnofsky writes, “I’m sorry. But I do think the rest of the series will be slightly more fun to read this way.” And he’s absolutely right. PASTA embraces the absurdity of the transformative AI topic and makes the blog posts more enjoyable to read. Karnofsky even adds a “Hotline Bling”-inspired meme featuring the Terminator and the Flying Spaghetti Monster. While perhaps a tad cringey, this move is so unexpected as to be hilarious on balance. Quite frankly, EA could stand to see more memes like this.

The use of PASTA in no way diminishes Karnofsky’s arguments or makes his conclusions any less important. Rather, referring to AI with a funny acronym makes the whole discussion seem less pretentious. I think PASTA is a model of subtle humor that, if imitated elsewhere, would make EA more relatable to a much wider audience.

In terms of models outside of EA, John Oliver’s Last Week Tonight (hereafter: LWT) offers a template that could work well for EA topics. For those who haven’t seen it, Oliver does weekly deep-dive video segments on problems in the United States and around the world. Topics are wide-ranging, with the three most recent feature stories concerning the World Cup, the British monarchy, and election subversion.

LWT features rigorous research and reporting while also employing top-notch slapstick humor. Oliver manages to make every episode both enlightening and entertaining, shining a light on overlooked societal problems in an accessible and enjoyable way. LWT shows that joking about something doesn’t mean you are making light of it or not taking it seriously. Humor is just a way to make the bitter medicine of truth go down a little bit easier, enabling people to process information they otherwise couldn’t stand to hear.

Oliver’s approach has proven genuinely impactful. A Time article from 2015 coined the term “the John Oliver Effect” to describe several instances in which LWT episodes contributed to real societal change, including ending unfair bail requirements in New York City, causing the resignation of FIFA president Sepp Blatter, and convincing the FCC to adopt net neutrality regulations, among other cases. LWT’s impact has only grown in the past seven years, with humor remaining integral to Oliver’s continued commercial and political success.

I think LWT could be a model for future EA content. Incorporating humor into discussions of existential risk, for instance, would make the information more approachable, encouraging people to learn about heavy and heady topics. Humor also diminishes the pretension that can come with asserting that an issue is one of the most important problems facing humanity.

The LWT approach is just one half-baked idea for how EAs could incorporate more humor into their work. To be sure, Oliver’s in-your-face style will not work in many contexts or with many topics. Still, I think LWT demonstrates that leveraging levity can be a truly effective way to reach people and drive societal change.

In closing, I am not suggesting that EAs should become outright comedians and make a joke out of everything. And in any case, perhaps EA should wait a bit to let the FTX storm die down before getting too jocular. But moving forward, I think the more EA learns to laugh, at itself and in general, the less the public will view it as self-righteous or self-important. If EA can introduce a little more humor into its culture, it may be able to connect with more people and shape the world for the better.