EA is good, actually

The last year has been tough

The last year has been tough for EA.

FTX blew up in the most spectacular way, and SBF was found guilty of one of the biggest frauds in history. I was heartbroken to learn that someone I trusted hurt so many people, heartbroken for the people who lost their money, and heartbroken about the projects I thought would happen that no longer will. The media piled on, and on, and on.

The community has processed the shock in all sorts of ways — some more productive than others. Many have published thoughtful reflections. Many have tried to come up with ways to ensure that nothing like this ever happens again. Some people rallied, some looked for someone to blame, and we all felt betrayed.

I personally spent November–February working more than full-time on a secondment to Effective Ventures. Meanwhile, there were several other disappointments in the EA community. Like many people, I was tired. Then in April, I went on maternity leave and stepped away from the Forum and my work to spend time with my children (Earnie and Teddy) and to get to know my new baby Charley. I came back to an amazing team who continued running event after event in my absence.

In the last few months I attended my first events since FTX, and I wasn’t sure how I would feel. But hearing from serious, conscientious people who want to think hard about the world’s most pressing problems, I felt so grateful and inspired. I teared up watching Lizka, Arden, and Kuhan give the opening talk at EAG Boston, which tried to reinforce and improve important cultural norms around mistakes, scout mindset, deference, and how to interact in a world where AI risk is becoming more mainstream. I went home so motivated!

And then, OpenAI.

I’m still processing it and I don’t know what happened. Almost nobody does. I have spent far too much time searching for answers online. I’ve seen some thoughtful write-ups, and also many, many posts that criticize a version of EA that doesn’t match my experience. This has sometimes made me feel sad or defensive, wanting to reply to explain or argue. I haven’t actually done that, because I’m generally pretty shy about posting and I’m not sure how to engage. Whatever happened, it seems the results are likely bad for AI safety. Whatever happened, I think I’ve reached diminishing returns on my doomscrolling, and I’m ready to get back to work.

The last year has been hard and I want us to learn from our mistakes, but I don’t want us to over-update and decide EA is bad. I think EA is good!

Sometimes when people say EA, they’re referring to the ideas like “let’s try to do the most good” and “cause prioritization”. Other times, they’re referring to the community that’s clustered around these ideas. I want to defend both, though separately.

The EA community is good

I think there are plenty of issues with the community. I live in Detroit, so I can’t really speak to all of the different clusters of people who currently call themselves EA or “EA-adjacent”. I’m sure some of them have bad epistemics or are not trustworthy, and I don’t want to vouch for everyone. I also haven’t been part of that many other communities: I am a lawyer, I have been part of the civil rights community, and I engage with other online communities (mom groups, au pair host parents, etc.).

All that said, in my experience EA spaces (both online and in-person) have been significantly more dedicated than other communities I know to celebrating and creating a culture of collaborative truth-seeking and kindness. For example:

  • We have online posting norms that I’d love to see adopted by other online spaces I participate in. (I’ve mostly stopped posting in the mom groups or host parent groups because when I raise an issue for advice I usually get a swarm of validation rather than pushback or constructive advice, and I almost never post on Twitter.)

  • The civil rights legal community I worked in was full of lovely people, but in my experience it did not support or encourage differing views or upward feedback. By contrast, when I was at MIRI there was a very strong culture around saying “oops!”, and I’ve tried to incorporate that into my team at CEA, including through things like Watch Team Backup (a norm that encourages people to speak up if something doesn’t seem right, and helps people avoid feeling defensive when they make a mistake).

I’ve seen people claim that they are embarrassed to call themselves EA now. I’m not! I’ve said this before, but I’ve spent most of the last 14 years in the EA and rationality communities. I’ve met so many of my best friends here. Our children have played together.

While I haven’t run EAG admissions myself, I get the benefit of seeing applications from lots of people who aren’t recognized for their work. I’ve seen people who were determined to save the lives of total strangers, even if they weren’t public about their giving. I’ve seen people who spent their days working behind the scenes for the sake of people in future generations. There is a deep core of goodness here. We aren’t perfect, but we all want to make things better.

But separate from the community’s shortcomings and drama, and separate from whether someone wants to identify as EA, as EA-adjacent, or just to use the ideas to make a difference, there are core ideas worth protecting. As Scott says: “For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. It’s fun to debate whether some people/institutions should gain or lose status, and I participate in those debates myself, but they seem less important than these basic questions of how we should live and what our ethics should be.”

EA ideas are good

The problems we’re committed to addressing are no less pressing than they were a year ago. Global poverty, if anything, is exacerbated by global fertilizer and food shortages & price shocks. Factory farming is on the rise and may be breeding the next pandemic as we speak. AI is advancing faster than ever, while core alignment and safety challenges remain unsolved.

We chose these causes for good reasons.

People should be more impact-oriented than they are. EA helped take this simple but profound idea mainstream. We helped reframe classic ideas of obligation into heroic opportunities to do good. We have inspired tens of thousands of people around the world to donate 10% or more of their income to effective causes and/or to focus their careers on helping others in ways that really matter.

Choosing causes by the rubric of importance, tractability, and neglectedness continues to make sense.

Our commitment to evidence-based truth-seeking remains a real virtue. Show me another community in which celebrated, funded leaders voluntarily shut down high-visibility projects simply because they conclude there are better uses for the money. These ideas have helped save hundreds of thousands of lives and contributed to better living conditions for millions of non-human animals.

I was working on EA projects before they were called EA, and I have been through several phases of EA problems where it looked like the community was falling apart. We have made it through by trying to learn from our mistakes while not losing sight of the important and urgent problems we are trying to help solve. Self-improvement requires the capacity for honest self-criticism, and we have always had this in spades. But on the margin, if I have to choose what to do for the next year, I’m choosing to focus on making the world a better place for our kids and, hopefully, for their great-great-great-grandchildren.