In 2023, 80% of CEA’s budget came from OP’s GCRCB team. This creates an obvious incentive for CEA to prioritize what the GCRCB team prioritizes.
As its name suggests, the GCRCB team has an overt focus on Global Catastrophic Risks. Here’s how OP’s website describes this team:
We want to increase the number of people who aim to prevent catastrophic events, and help them to achieve their goals.
We believe that scope-sensitive giving often means focusing on the reduction of global catastrophic risks — those which could endanger billions of people. We support organizations and projects that connect and support people who want to work on these issues, with a special focus on biosecurity and risks from advanced AI. In doing so, we hope to grow and empower the community of people focused on addressing threats to humanity and protecting the future of human civilization.
The work we fund in this area is primarily focused on identifying and supporting people who are or could eventually become helpful partners, critics, and grantees.
This team was formerly known as “Effective Altruism Community Growth (Longtermism).”
CEA has also received a much smaller amount of funding from OP’s “Effective Altruism (Global Health and Wellbeing)” team. From what I can tell, the GHW team focuses mainly on meta charities doing global poverty and animal welfare work (often via fundraising for effective charities in those fields). The OP website notes:
“This focus area uses the lens of our global health and wellbeing portfolio, just as our global catastrophic risks capacity building area uses the lens of our GCR portfolio… Our funding so far has focused on [grantees that] Raise funds for highly effective charities, Enable people to have a greater impact with their careers, and found and incubate new charities working on important and neglected interventions.”
There is an enormous difference between these teams in terms of their historical and ongoing impact on EA funding and incentives. The GCRCB team has granted over $400 million since 2016, including over $70 million to CEA and over $25 million to 80k. Compare that to the GHW team, which launched “in July 2022. In its first 12 months, the program had a budget of $10 million.”
So basically there’s been a ton of funding for a long time for EA community building that prioritizes AI/Bio/other GCR work, and a vastly smaller amount of funding that only became available recently for EA community building that uses a global poverty/animal welfare lens. And, as your question suggests, this dynamic is not at all well understood.
Just to clarify, I agree that EA should not have been expected to detect or predict FTX’s fraud, and explicitly stated that[1]. The point of my post is that other mistakes were likely made, we should be trying to learn from those mistakes, and there are worrisome indications that EA leadership is not interested in that learning process and may actually be inhibiting it.
[1] “I believe it is incredibly unlikely that anyone in EA leadership was aware of, or should have anticipated, FTX’s massive fraud.”