EA is good, actually
The last year has been tough
The last year has been tough for EA.
FTX blew up in the most spectacular way and SBF has been found guilty of one of the biggest frauds in history. I was heartbroken to learn that someone I trusted hurt so many people, was heartbroken for the people who lost their money, and was heartbroken about the projects I thought would happen that no longer will. The media piled on, and on, and on.
The community has processed the shock in all sorts of ways — some more productive than others. Many have published thoughtful reflections. Many have tried to come up with ways to ensure that nothing like this will ever happen again. Some people rallied, some looked for someone to blame, and we all felt betrayed.
I personally spent November–February working more than full-time on a secondment to Effective Ventures. Meanwhile, there were several other disappointments in the EA community. Like many people, I was tired. Then in April, I went on maternity leave and stepped away from the Forum and my work to spend time with my children (Earnie and Teddy) and to get to know my new baby Charley. I came back to an amazing team who continued running event after event in my absence.
In the last few months I attended my first events since FTX, and I wasn’t sure how I would feel. But when I attended the events and heard from serious, conscientious people who want to think hard about the world’s most pressing problems, I felt so grateful and inspired. I teared up watching Lizka, Arden, and Kuhan give the opening talk at EAG Boston, which tries to reinforce and improve important cultural norms around mistakes, scout mindset, deference, and how to interact in a world where AI risk is becoming more mainstream. I went home so motivated!
And then, OpenAI.
I’m still processing it and I don’t know what happened. Almost nobody does. I have spent far too much time searching for answers online. I’ve seen some thoughtful write-ups and also many, many posts that criticize a version of EA that doesn’t match my experience. This has sometimes made me feel sad or defensive, wanting to reply to explain or argue. I haven’t actually done that because I’m generally pretty shy about posting and I’m not sure how to engage. Whatever happened, it seems the results are likely bad for AI safety. Whatever happened, I think I’ve reached diminishing returns on my doomscrolling, and I’m ready to get back to work.
The last year has been hard and I want us to learn from our mistakes, but I don’t want us to over-update and decide EA is bad. I think EA is good!
Sometimes when people say EA, they’re referring to the ideas like “let’s try to do the most good” and “cause prioritization”. Other times, they’re referring to the community that’s clustered around these ideas. I want to defend both, though separately.
The EA community is good
I think there are plenty of issues with the community. I live in Detroit and so I can’t really speak to all of the different clusters of people who currently call themselves EA or “EA-adjacent”. I’m sure some of them have bad epistemics or are not trustworthy and I don’t want to vouch for everyone. I also haven’t been part of that many other communities. I am a lawyer, I have been a part of the civil rights community, and I engage with other online communities (mom groups, au pair host parents, etc.).
All that said, my experience in EA spaces (both online and in-person) has been significantly more dedicated to celebrating and creating a culture of collaborative truth-seeking and kindness. For example:
We have online posting norms that I’d love to see adopted by other online spaces I participate in (I’ve mostly stopped posting in the mom groups or host parent groups because when I raise an issue for advice I usually get a swarm of validation rather than pushback or constructive advice, and I almost never post on Twitter).
The civil rights legal community I worked in was full of lovely people, but in my experience it did not support or encourage differing views or upward feedback. By contrast, when I was at MIRI there was a very strong culture around saying “oops!”, and I’ve tried to incorporate that into my team at CEA, including through things like Watch Team Backup (a norm that encourages people to speak up if something doesn’t seem right, and helps people avoid feeling defensive when they make a mistake).
I read someone claim that people are embarrassed to call themselves EA now. I’m not! I’ve said this before, but I’ve spent most of the last 14 years in the EA and rationality communities. I’ve met so many of my best friends here. Our children have played together.
While I haven’t run EAG admissions myself, I get the benefit of seeing applications from lots of people who aren’t recognized for their work. I’ve seen people who were determined to save the lives of total strangers, even if they weren’t public about their giving. I’ve seen people who spent their days working behind the scenes for the sake of people in future generations. There is a deep core of goodness here. We aren’t perfect, but we all want to make things better.
But separate from the community shortcomings and drama, and separate from whether someone identifies as EA, as EA-adjacent, or just wants to use the ideas to make a difference, there are core ideas that are worth protecting. As Scott says, “For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. It’s fun to debate whether some people/institutions should gain or lose status, and I participate in those debates myself, but they seem less important than these basic questions of how we should live and what our ethics should be.”
EA ideas are good
The problems we’re committed to addressing are no less pressing than they were a year ago. Global poverty, if anything, is exacerbated by global fertilizer and food shortages & price shocks. Factory farming is on the rise and may be breeding the next pandemic as we speak. AI is advancing faster than ever, while core alignment and safety challenges remain unsolved.
We chose these causes for good reasons.
People should be more impact-oriented than they are. EA helped take this simple but profound idea mainstream. We helped reframe classic ideas of obligation into heroic opportunities to do good. We have inspired tens of thousands of people around the world to donate 10% or more of their income to effective causes and/or to focus their careers on helping others in ways that really matter.
Choosing causes by the rubric of importance, tractability, and neglectedness continues to make sense.
Our commitment to evidence-based truth-seeking remains a real virtue. Show me another community in which celebrated, funded leaders voluntarily shut down high-visibility projects simply because they come to the conclusion that there are better uses for the money. These ideas have helped save hundreds of thousands of lives and contributed to better living conditions for millions of non-human animals.
I’ve been working on EA projects since before they were called EA, and I have been through several phases of EA problems where it has looked like the community is falling apart. We have made it through these by trying to learn from our mistakes while also not losing sight of the important and urgent problems we are trying to help solve. Self-improvement requires the capacity for honest self-criticism. We have always had this in spades. But on the margin, if I have to choose what to do for the next year, I’m choosing to focus on making the world a better place for our kids and, hopefully, for their great-great-great-grandchildren.
I can’t quite state enough how much I appreciate you writing this right now. You shared what I’m feeling and thinking way better than I feel I can. Thank you.
💕
I agree with this. Thanks, Amy, for all the work you and your team have done/are doing on the EA Global events, which are delivered to such a high standard and which, I think, contribute to a tonne of good!
Agreed wholeheartedly, and thank you for this kind, insightful writeup! I’m a new EA and not highly engaged (it’s hard to be when you live in a rural area), but I’ve been around EA-adjacent ideas since finding them through LessWrong and other forums ~12 years ago. Back then I was a student, and still quite skeptical of utilitarianism and consequentialism in ethics (which I now affirm). I accepted most EA ideas but didn’t feel like I could join the community if I couldn’t accept its main philosophical basis.
Seeing the impacts EA projects have in the real world, and seeing that they are backed by a rational community that carries actual moral force and is willing to respectfully challenge its own to live better, more effective lives, is what encouraged me to finally take that plunge, commit to effective altruism as a life path, and start getting involved in the EA community. And I made this decision a few days after the OpenAI news came out!
All communities, to some extent, have bad actors in them—it’s just a fact of human existence, unfortunately. The fact that the EA community is actually willing to take concrete action on our bad actors, removing them from the community and limiting the damage they can do, is a refreshing change from other communities in which I have been involved.
The FTX scandal was horrible for its victims, and immoral period, but it is not effective altruism. Just because someone is involved in the EA community, and identifies as EA, doesn’t mean they do effective altruism. I want to be someone who does EA, lives the principles and ideas, works to make the world a better place both now and in the future. For me, identifying as EA is secondary to that.
Here’s wishing you all the best on the road ahead. This community is what we make it, and we need to keep working to build on the good community that is already here. We shouldn’t let our harshest critics discourage us from our moral duties and hopeful visions.
That’s true, but in totaling up the benefits and costs of EA, we have to consider it. I think the test is roughly whether the wrongdoer was successful in using EA to facilitate their bad deeds, or was materially motivated by their exposure to EA. I think the answer for FTX was yes—SBF obtained early funding from EA-aligned sources, as well as attracting critical early recruits (some of them turning into co-conspirators) and gaining valuable PR benefits from his association with EA. (One could argue that he was also motivated by EA, but I’m not confident that I believe that.)
In the same way, I would weigh mistreatment of children facilitated by association with the Catholic Church or Scouting in assessing those movements, even though child abuse is not the practice of Catholicism or of Scouting. In contrast, I wouldn’t generally weigh the unfacilitated/unmotivated acts—good or bad—of effective altruists, Catholics, or Scout volunteers in assessing the movements to which they belong.
Thanks for sharing, Amy!
The core of EA is indeed a net positive for humanity. And yet, as with many outfits before it, intra-organizational conflicts and power-seeking behavior can damage the entire collective.
In my opinion, within EA, the utilitarian long-termist emphasis, powered by SBF’s ill-gotten gains, has damaged the entire philosophy’s image. The original emphasis on doing good while pursuing an evidence-based approach seems dwarfed by an obsession with x-risk, an obsession that lacks common sense, let alone a firm epistemic foundation.
How can we do good without taking x-risk into account? If all sentient life on Earth* is destroyed, goodness becomes impossible because there’s no one left to be good to.
*Some existential risks, like takeover by an unsafe, superintelligent AGI, may extend beyond Earth on a cosmic time scale due to sub-light space travel.
In my view, we ought to show humility about our ability to accurately forecast risk from any one domain beyond the five-year window. There’s little evidence to suggest anyone is better than random chance at making accurate forecasts beyond a certain time horizon.
At the core of SBF’s grift was the notion that stealing money from Canadian pensioners was justified if that money was spent reducing the odds of an x-event. After all, the end of humanity in the next century would eliminate trillions of potential lives, so some short-term suffering today is easily tolerated. Simple utilitarian logic would dictate that we should sacrifice well-being today if we can prove that those resources have a positive expected value.
I think anyone making an extraordinary claim needs equally extraordinary evidence to back it. That doesn’t mean x-risk isn’t theoretically possible.
Let’s put it this way—if I said there was a chance that life on Earth could be wiped out by an asteroid, few would argue against it since we know the base rate of asteroids hitting Earth is non-zero. We could argue about the odds of getting hit by an asteroid in any given year, but we wouldn’t argue over the very notion. And we could similarly argue over the right level of funding to scan deep space for potential Earth-destroyers, though we wouldn’t argue over the merits of the enterprise in the first place.
That is very different from my claiming with confidence that there is a 25% chance humanity perishes from an asteroid in the next 20 years and, along with this claim, recommending that we stop all infrastructure projects globally and direct resources to interstellar lifeboats. You’d rightly ask for concrete evidence in the face of such a claim.
The latter is what I hear from AGI alarmists.
I like humility. I wish AI advocates had more of it too. I agree that forecasting risk beyond five years is hard. It is the burden of advocates to demonstrate that what they want to do carries acceptable risks of harm over the 10-to-100-year period, not the skeptics’ burden to prove non-safety or non-beneficence.
Exactly, the burden of proof lies with those who make the claim.
I hope EA is able to get back to the basics of doing the most real-world good with limited resources rather than the utilitarian nonsense of saving trillions of theoretical future humans.
It’s not utilitarian nonsense to think about large numbers of loved ones. There are trillions of fish in the oceans, and we have the chance to make their lives so much better!
https://reducing-suffering.org/how-many-wild-animals-are-there/#Fish
Agreed! My comment was aimed at the absurd conclusions one reaches when weighing the tradeoffs we make today against trillions of unborn humans. That logic leads to extreme positions.
I think the EA community can, should, and will be judged by how we deal with bad behaviour like fraud, discrimination, abuse, and cultishness within the community.
Who knew what about Sam Bankman-Fried’s crimes, and when? As I understand it, an investigation is still underway and, as far as I know, nobody who enabled or was associated with SBF has yet stepped down from their leadership positions in EA organizations. I’m not necessarily saying anyone should, but I’m not sure I see enough of a reckoning or enough accountability with regard to the FTX/Alameda fraud.
Has the EA community done enough to rebuke Nick Bostrom’s racism? The reaction seems dishearteningly mixed.
What will the EA community ultimately do about the allegations of abuse surrounding Nonlinear, once the organization posts its much-awaited response to those allegations? This is something to watch.
There are disturbing accounts of Leverage Research and, to a lesser extent, CFAR functioning much like cults. That’s pretty weird. How many communities have two, or one and a half, cults pop up inside them? What are the structural reasons this might be happening? Has anything been done that might prevent another EA-adjacent cult from arising again?
I’m not trying to be negative. I’m just trying to give constructive suggestions about what would improve EA’s reputation. I think there are a lot of lovely people and organizations in the EA sphere. But we will — and should — be judged based on how we deal with the minority of bad actors.
Just a quick point of order:
I think Will resigning from his position on the EV UK board and Nick resigning from both the UK and US boards would count for this.
I’m not making a claim here about whether these were the ‘right’ outcomes or whether it’s ‘enough’, but there have been consequences, including at ‘leading’ EA organisations.
I was unaware of these resignations! Why did Will resign? Was it because of his association with SBF? Will doesn’t say why he resigned in the source you linked. He links to a post that’s extremely long and I couldn’t immediately find a statement.
The stated reason is the same as Nick’s: since the FTX collapse he’s been recused from too much board business for staying on the board to make sense.