I’ve been contemplating writing a post about my side of the issue. I wasn’t particularly close, but did get a chance to talk to some of the people involved.
Here’s my rough take, at this point:
1. I don’t think any EA group outside of FTX would take responsibility for having done a lot ($60k+ worth) of due diligence and investigation of FTX. My impression is that OP considered this not to be their job, and CEA was not at all in a position to do it (too biased, as it was being funded by FTX). In general, I think that our community doesn’t have strong measures in place to investigate funders. For example, I doubt that EA orgs have allocated $60k+ to investigate Dustin Moskovitz (and I imagine he might complain if others did!). My overall impression is that this was just a large gap that the EA bureaucracy failed to cover. I similarly think that the “EA bureaucracy” is much weaker / less powerful than many imagine it being, and expect that there are several gaps like this. Note that OP/CEA/80k/etc. are fairly limited organizations with specific agendas and areas of ownership.
2. I think there were some orange/red flags around, but that it would have taken some real investigation to figure out how dangerous FTX was. I’m uncertain how difficult it would have been to notice that fraud or something similar was happening (I previously assumed this would be near impossible, but am less sure now, after discussions with one EA in finance). I think that the evidence/flags around at the time were probably not enough to easily justify dramatically different actions without investigation—other than the potential action of doing a lengthy investigation—but again, doing one would have been really tough, given the actors involved.
Note that actually pulling off a significant investigation, and then taking corresponding actions, against an actor as powerful as SBF, would be very tough and require a great deal of financial independence.
3. My impression is that being a board member at CEA was incredibly stressful/intense in the months following the FTX collapse. My quick guess is that most of the departures from the board were driven by things like “I just don’t want to have to deal with this anymore” rather than by particular disagreements with the organizations. I didn’t get the impression that Rebecca’s viewpoints/criticisms were very common among other board members/execs, though I’d be curious to get their takes.
4. I think that OP / CEA board members haven’t particularly focused on / cared about being open and transparent with the EA community. Part of the immediate reason, I assume, is that lawyers recommended against speaking up at the time—but even without that, it’s kind of telling how little discussion there has been in the last year or so.
I suggest reading Dustin Moskovitz’s comments for some specific examples. Basically, I think that many people in authority (though to be honest, basically anyone who’s not a major EA poster/commenter) find “posting to the EA forum and responding to comments” to be pretty taxing/intense, and don’t do it much.
Remember that OP staff members are mainly accountable to their managers, not the EA community or others. CEA is mostly funded by OP, so it is similarly accountable to high-level OP people. (Here, “accountable” means “being employed/paid by.”)
5. In terms of power, I think there’s a pretty huge power gap between the funders and the rest of the EA community. I don’t think that OP really regards themselves as responsible for or accountable to the EA community. My impression is that they fund EA efforts opportunistically, in situations where it seems to help both parties, but don’t want to be seen as having any long-term obligations or such. We don’t really have strong non-OP funding sources to fund things like “serious investigations into what happened.” Personally, I find this situation highly frustrating, and think it gets under-appreciated.
6. My rough impression is that from the standpoint of OP / CEA leaders, there’s not a great mystery around the FTX situation, and they also don’t see it happening again. So I think there’s not that much interest in a deep investigation.
So, in summary, my take is less, “there was some conspiracy where a few organizations did malicious things,” and more, “the EA bureaucracy has some significant weaknesses that were highlighted here.”
Note: Some of my thinking on this comes from my time at the reform group. We spent some time coming up with a list of potential reform projects, including having better investigative abilities. My impression is that there generally hasn’t been much concern/interest in this space.
I’ll respond to your other points in a separate comment later, but for the sake of clarity I want to give a dedicated response to your summary:
> My take is less, “there was some conspiracy where a few organizations did malicious things,” and more, “the EA bureaucracy has some significant weaknesses that were highlighted here.”
I very much agree that “the EA bureaucracy has some significant weaknesses that were highlighted here” is the right framing and takeaway.
My concern (which I believe is shared by other proponents of an independent investigation) is that these weaknesses have not been, and are not on track to be, properly diagnosed and fixed.
I think plenty of EA leaders made mistakes with respect to FTX, but I don’t think there was any malicious conspiracy (except of course for the FTX/Alameda people who were directly involved in the fraud). For the most part, I think people behaved in line with their incentives (which is generally how we should expect people to act).
The problem is that we don’t have an understanding of how and why those incentives led to mistakes, and we haven’t changed the community’s incentive structures in a way that will prevent those same sorts of mistakes going forward. And I’m concerned that meaningful parts of EA leadership might be inhibiting that learning process in various ways. I’d feel better about the whole situation if there had been some public communications around specific things that have been done to improve the efficacy of the EA bureaucracy, including a clear delineation of what things different parts of that bureaucracy are and are not responsible for.
> we haven’t changed the community’s incentive structures in a way that will prevent those same sorts of mistakes going forward
I’m curious what your model is of the “community”—how would it significantly change on this issue?
My model is that the “community” doesn’t really have much power directly, at this point. OP has power, and to the extent that they fund certain groups (at this point, when funding is so centralized), CEA and a few other groups have power.
I could see these specific organizations doing reforms, if/when they want to. I could also see some future where the “EA community” bands together to fund their own, independent, work. I’m not sure what other options there are.
Right now, my impression is that OP and these other top EA groups feel like they just have a lot going on, and aren’t well positioned to do other significant reforms/changes.
> My model is that the “community” doesn’t really have much power directly, at this point. OP has power, and to the extent that they fund certain groups (at this point, when funding is so centralized), CEA and a few other groups have power.
I more or less agree with this. Though I think some of CEA’s power derives not only from having OP funding, but also from the type of work CEA does (e.g. deciding who attends and talks at EAG). And other orgs and individuals have power related to reputation, quality of work, and ability to connect people with resources (money, jobs, etc.).
Regarding how different parts of the community might be able to implement changes, it might be helpful to think about “top-down” vs. “bottom-up” reforms.
Top-down reforms would be initiated by the orgs/people that already have power. The problem, as you note, is that “OP and these other top EA groups feel like they just have a lot going on, and aren’t well positioned to do other significant reforms/changes.” (There may also be an issue whereby people with power don’t like to give it up.) But some changes are already in the works, most notably the EV breakup. This creates lots of opportunities to fix past problems, e.g. around board composition since there will be a lot more boards in the post-breakup world. Examples I’d like to see include:
EA ombudsperson/people on CEA’s board and/or an advisory panel with representatives from different parts of the community. (There used to be an advisory panel of this sort, but from what I understand they were consulted for practically, or perhaps literally, nothing.)
Reduced reliance on OP’s GCRCB program as the overwhelmingly dominant funder of EA orgs. I’d like it even more if we could find a way to reduce reliance on OP overall as a funder, but that would require finding new money (hard) or making do with less money (bad). But even if OP shifted to funding CEA and other key EA orgs from a roughly even mix of its Global Catastrophic Risks and its Global Health and Wellbeing teams, that would be an enormous improvement IMO.
The fact that key EA orgs (including ones responsible for functions on behalf of the community) are overwhelmingly funded by a program which has priorities that only align with a subset of the community is IMO the most problematic incentive structure in EA. Compounding this problem, I think awareness of this dynamic is generally quite limited (people think of CEA as being funded by OP, not by OP’s GCRCB program), and appreciation of its implications even more so.
Vastly expanding the universe of people who serve on the boards of organizations that have power, and hopefully including more community representation on those boards.
Creating, implementing, and sharing good organizational policies around COI, donor due diligence, whistleblower protections, etc. (Note that this is supposedly in process.[1])
Bottom-up reforms would be initiated by lay-EAs, the folks who make up the vast majority of the community. The obstacles to bottom-up reforms are finding ways to fund and coordinate them; almost by definition, these people aren’t organized.
Examples I’d like to see include:
People starting dedicated projects focused on improving EA governance (broadly defined)
This could also involve a contest to identify (and incentivize the identification of) the best ideas
Establishment of some sort of coalition to facilitate coordination between local groups. I think “groups as a whole” could serve as a decentralized locus of power that could serve as a counterbalance to the existing centralized power bases. But right now, I don’t get the impression that there are good ways for groups to coordinate.
EAIF focusing on and/or earmarking some percentage of grantmaking towards improving EA governance (broadly defined). As mentioned earlier, lack of funding is a big obstacle for bottom-up reforms, so the EAIF (~$20m in grants since start of 2020) could be a huge help.
Individual EAs acting empowered to improve governance (broadly defined), e.g. publicly voicing support for various reforms, calling out problems they see, incorporating governance issues into their giving decisions, serving on boards, etc.
In December, Zach Robinson wrote: “EV also started working on structural improvements shortly after FTX’s collapse and continued to do so alongside the investigation. Over the past year, we have implemented structural governance and oversight improvements, including restructuring the way the two EV charities work together, updating and improving key corporate policies and procedures at both charities, increasing the rigor of donor due diligence, and staffing up the in-house legal departments. Nevertheless, good governance and oversight is not a goal that can ever be definitively ‘completed’, and we’ll continue to iterate and improve. We plan to open source those improvements where feasible so the whole EA ecosystem can learn from EV’s challenges and benefit from the work we’ve done.”
Open sourcing these improvements would be terrific, though to the best of my knowledge this hasn’t actually happened yet, which is disappointing. Though it’s possible this stuff has been shared and I’ve just missed it.
Can you say more about why the distinction between “Open Philanthropy” and “Open Philanthropy GCRCB team” matters? What subset of the community does this GCRCB team align with vs. not? I’ve never heard this before.
In 2023, 80% of CEA’s budget came from OP’s GCRCB team. This creates an obvious incentive for CEA to prioritize the stuff the GCRCB team prioritizes.
As its name suggests, the GCRCB team has an overt focus on Global Catastrophic Risks. Here’s how OP’s website describes this team:
> We want to increase the number of people who aim to prevent catastrophic events, and help them to achieve their goals.
> We believe that scope-sensitive giving often means focusing on the reduction of global catastrophic risks — those which could endanger billions of people. We support organizations and projects that connect and support people who want to work on these issues, with a special focus on biosecurity and risks from advanced AI. In doing so, we hope to grow and empower the community of people focused on addressing threats to humanity and protecting the future of human civilization.
> The work we fund in this area is primarily focused on identifying and supporting people who are or could eventually become helpful partners, critics, and grantees.
This team was formerly known as “Effective Altruism Community Growth (Longtermism).”
CEA has also received a much smaller amount of funding from OP’s “Effective Altruism (Global Health and Wellbeing)” team. From what I can tell, the GHW team basically focuses on meta charities doing global poverty type and animal welfare work (often via fundraising for effective charities in those fields). The OP website notes:
“This focus area uses the lens of our global health and wellbeing portfolio, just as our global catastrophic risks capacity building area uses the lens of our GCR portfolio… Our funding so far has focused on [grantees that] Raise funds for highly effective charities, Enable people to have a greater impact with their careers, and found and incubate new charities working on important and neglected interventions.”
There is an enormous difference between these teams in terms of their historical and ongoing impact on EA funding and incentives. The GCRCB team has granted over $400 million since 2016, including over $70 million to CEA and over $25 million to 80k. Compare that to the GHW program, which launched “in July 2022. In its first 12 months, the program had a budget of $10 million.”
So basically there’s been a ton of funding for a long time for EA community building that prioritizes AI/Bio/other GCR work, and a vastly smaller amount of funding that only became available recently for EA community building that uses a global poverty/animal welfare lens. And, as your question suggests, this dynamic is not at all well understood.
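For a rough sense of scale, here’s a back-of-envelope comparison using only the figures above. Note that the eight-year averaging and the assumption that GHW’s budget stayed near its first-year $10 million are my simplifications, not reported numbers:

```python
# Back-of-envelope comparison of the two OP programs' annual funding rates.
# Simplifying assumptions: GCRCB's $400M+ since 2016 is spread evenly over
# ~8 years (2016-2023), and GHW continued at roughly its first-year budget.

gcrcb_total = 400e6   # total GCRCB grants since 2016 (a lower bound)
gcrcb_years = 8       # 2016 through 2023
ghw_annual = 10e6     # GHW budget in its first 12 months (from July 2022)

gcrcb_annual = gcrcb_total / gcrcb_years
print(f"GCRCB: ~${gcrcb_annual / 1e6:.0f}M/year")          # ~$50M/year
print(f"GHW:   ~${ghw_annual / 1e6:.0f}M/year")            # ~$10M/year
print(f"Annual ratio: ~{gcrcb_annual / ghw_annual:.0f}x")  # ~5x
```

On these assumptions, GCRCB has out-funded GHW by roughly 5x per year, and by far more than that cumulatively.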
I think the correct interpretation of this is that OP GHW doesn’t think general community building for its cause areas is cost effective, which seems quite plausible to me. [Edit: note I’m saying community-building in general, not just the EA community specifically—so under this view, the skewing of the EA community is less relevant. My baseline assumption is that any sort of community-building in developed countries isn’t an efficient use of money, so you need quite a strong case for increased impact for it to be worthwhile.].
The risk, I think, is that this becomes a self-fulfilling prophecy where:
1. Prominent EA institutions get funded mostly from OP-GCRCB money
2. Those institutions then prioritise GCRs[1] more
3. The EA community gets more focused on GCRs, by either deferring to these institutions or evaporative cooling of less GCR/longtermist EAs
4. Due to the increased GCR focus of EA, GHW/AW funders think that funding prominent EA institutions is not cost-effective for their goals
5. Go to step 1
[1] Using this as a general term for AI x-risk, longtermism, etc.
They also have a much smaller budget (as indicated by total spend per year). You can see a direct comparison of total funding in this post I wrote: https://forum.effectivealtruism.org/posts/nnTQaLpBfy2znG5vm/the-flow-of-funding-in-ea-movement-building#Overall_picture
I agree it’s likely they have a smaller budget, but equating budget with total spend per year (rather than saying that one is an indication of the other) is slightly begging the question—any gap between the two may reflect relevant cost-effectiveness analyses (CEAs).
Fair point, I couldn’t find a link to point to the budget, but:
“We launched this program in July 2022. In its first 12 months, the program had a budget of $10 million.”
From their website—https://www.openphilanthropy.org/focus/ea-global-health-and-wellbeing/
I don’t think they had dramatically more money in 2023, and (without checking the numbers again to save time) I am pretty sure they mostly maxed out their budget both years.
That may well have been OP’s thinking and they may have been correct about the relative cost effectiveness of community building in GCR vs. GHW. But that doesn’t change the fact that this funding strategy had massive (and IMO problematic) implications for the incentive structure of the entire EA community.
I think it should be fairly uncontroversial that the best way to align the incentives of organizations like CEA with the views and values of the broader community would be if they were funded by organizations/program areas that made decisions using the lens of EA, not subsets of EA like GCR or GHW. OP is free to prioritize whatever it wants, including prioritizing things ahead of aligning CEA’s incentives with those of the EA community. But as things stand, significant misalignment of incentives exists, and I think it’s important to acknowledge and spread awareness of that situation.
A name change would be a good start.
By analogy, suppose there were a Center for Medical Studies that was funded ~80% by a group interested in just cardiology. Influenced by the resultant incentives, the CMS hires a bunch of cardiologists, pushes medical students toward cardiology residencies, and devotes an entire instance of its flagship Medical Research Global conference to the exclusive study of topics in cardiology. All those things are fine, but this org shouldn’t use a name that implies that it takes a more general and balanced perspective on the field of medical studies, and should make very very clear that it doesn’t speak for the medical community as a whole.
> funded by organizations/program areas that made decisions using the lens of EA
I wouldn’t be surprised if a similar thing occurred—those orgs/programs decide that it isn’t that cost-effective to do GHW community-building. I could see it going another way, but my baseline assumption is that any sort of community-building in developed countries isn’t an efficient use of money, so you need quite a strong case for increased impact for it to be worthwhile.
I dunno, I think a funder that had a goal and mindset of funding EA community building could just do stuff like fund cause-agnostic EAGs and the maintenance of a cause-agnostic effectivealtruism.org, and not really worry about things like the relative cost-effectiveness of GCR community building vs. GHW community building.
It seems that everyone in EA / EA-adjacent circles who is not OP or EVF needs to be wary to some extent. If no one is on the lookout for these sorts of situations and no one is going to be indemnifying many EA individuals and entities, then other people/entities need to clearly understand that and take appropriate action to protect their own interests in the future.
All this sounds like a step back from a higher-trust environment in certain respects. For instance, it’s certainly appropriate for OP to “fund EA efforts opportunistically, in situations where it seems to help both parties, [without wanting] to be seen as having any long-term obligations or such.” That seems more like a transactional relationship. People in transactional relationships do not generally defer to their counterpart(ies) concerning the common good, count on them to be looking out for their own needs, and so on.
It’s possible that an “opportunistic[]” approach that is not “responsible for . . . the EA community” is the right strategy for OP to pursue. But a more transactional/opportunistic approach to the EA community carries costs in efficiency, personal/smaller-institutional risk tolerance, morale, and so forth.
Agreed!
I also imagine that these groups would largely agree. Like, if one were to ask OP/EVF, “do you think the EA community would be better off developing infrastructure so it doesn’t have to rely so much on you two?”, I could imagine them being quite positive about this.
(That said, I imagine they might be less enthusiastic about certain actual implementations of this, especially ones that might get in the way of their other plans.)
> 1. I don’t think any EA group outside of FTX would take responsibility for having done a lot ($60k+ worth) of due diligence and investigation of FTX. My impression is that OP considered this not to be their job, and CEA was not at all in a position to do it (too biased, as it was being funded by FTX). In general, I think that our community doesn’t have strong measures in place to investigate funders. For example, I doubt that EA orgs have allocated $60k+ to investigate Dustin Moskovitz (and I imagine he might complain if others did!). My overall impression is that this was just a large gap that the EA bureaucracy failed to cover. I similarly think that the “EA bureaucracy” is much weaker / less powerful than many imagine it being, and expect that there are several gaps like this. Note that OP/CEA/80k/etc. are fairly limited organizations with specific agendas and areas of ownership.
I’m very sympathetic to the idea that OP is not responsible for anything in this case. But CEA/EV should have done at least the due diligence required by their official policies developed in the aftermath of the Ben Delo affair. I think it’s reasonable for the community to ask whether or not that actually happened. Also, multiple media outlets have reported that CEA did do an investigation after the Alameda dispute. So it would be nice to know if that actually happened and what it found.
I don’t think the comparison about investigating Dustin is particularly apt, as he didn’t have all the complaints/red flags that SBF did. CEA received credible warnings from multiple sources about a CEA board member, and I’d like to think that warrants some sort of action. Which raises another question: if CEA received credible serious concerns about a current board member, what sort of response would CEA’s current policies dictate?
Re: gaps, yes, there are lots of gaps, and the FTX affair exposed some of them. Designing organizational and governance structures that will fix those gaps should be a priority, but I haven’t seen credible evidence that this is happening. So my default assumption is that these gaps will continue to cause problems.
> 2. I think there were some orange/red flags around, but that it would have taken some real investigation to figure out how dangerous FTX was. I’m uncertain how difficult it would have been to notice that fraud or something similar was happening (I previously assumed this would be near impossible, but am less sure now, after discussions with one EA in finance). I think that the evidence/flags around at the time were probably not enough to easily justify dramatically different actions without investigation—other than the potential action of doing a lengthy investigation—but again, doing one would have been really tough, given the actors involved.
> Note that actually pulling off a significant investigation, and then taking corresponding actions, against an actor as powerful as SBF, would be very tough and require a great deal of financial independence.
I very much agree that we shouldn’t be holding EA leaders/orgs/community to a standard of “we should have known FTX was a huge fraud”. I mentioned this in my post, but want to reiterate it here. I feel like this is a point where discussions about EA/FTX often get derailed. I don’t believe the people calling for an independent investigation are doing so because they think EA knew/should have known that FTX was a fraud; most of us have said that explicitly.
That said, given what was known at the time, I think it’s pretty reasonable to think that it would have been smart to do some things differently on the margin, e.g. 80k putting SBF on less of a pedestal. A post-mortem could help identify those things and provide insights on how to do better going forward.
> 3. My impression is that being a board member at CEA was incredibly stressful/intense in the months following the FTX collapse. My quick guess is that most of the departures from the board were driven by things like “I just don’t want to have to deal with this anymore” rather than by particular disagreements with the organizations. I didn’t get the impression that Rebecca’s viewpoints/criticisms were very common among other board members/execs, though I’d be curious to get their takes.
This seems like a very important issue. I think one big problem is that other board members/execs are disincentivized to voice concerns they might have, and this is one of the things an independent investigation could help with. Learning that several of the other board members had concerns similar to Rebecca’s, or that none did, would be very informative, and an investigation could share that sort of finding publicly without compromising any individual’s privacy.
> 4. I think that OP / CEA board members haven’t particularly focused on / cared about being open and transparent with the EA community. Part of the immediate reason, I assume, is that lawyers recommended against speaking up at the time—but even without that, it’s kind of telling how little discussion there has been in the last year or so.
> I suggest reading Dustin Moskovitz’s comments for some specific examples. Basically, I think that many people in authority (though to be honest, basically anyone who’s not a major EA poster/commenter) find “posting to the EA forum and responding to comments” to be pretty taxing/intense, and don’t do it much.
> Remember that OP staff members are mainly accountable to their managers, not the EA community or others. CEA is mostly funded by OP, so it is similarly accountable to high-level OP people. (Here, “accountable” means “being employed/paid by.”)
Pretty much agree with everything you wrote here. Though I want to emphasize that I think this is a pretty awful outcome, and could be improved with better governance choices such as more community representation, and less OP representation, on CEA’s board.
If OP doesn’t want to be accountable to the EA community, I think that’s suboptimal though their prerogative. But if CEA is going to take responsibility for community functions (e.g. community health, running effectivealtruism.org, etc.) there should be accountability mechanisms in place.
I also want to flag that an independent investigation would be a way for people in authority to get their ideas (at least on this topic) out in a less taxing and/or less publicly identifiable way than forum posting/commenting.
> 5. In terms of power, I think there’s a pretty huge power gap between the funders and the rest of the EA community. I don’t think that OP really regards themselves as responsible for or accountable to the EA community. My impression is that they fund EA efforts opportunistically, in situations where it seems to help both parties, but don’t want to be seen as having any long-term obligations or such. We don’t really have strong non-OP funding sources to fund things like “serious investigations into what happened.” Personally, I find this situation highly frustrating, and think it gets under-appreciated.
Very well put!
> 6. My rough impression is that from the standpoint of OP / CEA leaders, there’s not a great mystery around the FTX situation, and they also don’t see it happening again. So I think there’s not that much interest in a deep investigation.
I think Zvi put it well: “a lot of top EA leaders ‘think we know what happened.’ Well, if they know, then they should tell us, because I do not know. I mean, I can guess, but they are not going to like my guess. There is the claim that none of this is about protecting EA’s reputation, you can decide whether that claim is credible.”
What was EV’s official policy post-Ben Delo?
As of February 2021:
> Here’s an update from CEA’s operations team, which has been working on updating our practices for handling donations. This also applies to other organizations that are legally within CEA (80,000 Hours, Giving What We Can, Forethought Foundation, and EA Funds).
> We are working with our lawyers to devise and implement an overarching policy for due diligence on all of our donors and donations going forward.
> We’ve engaged a third party who now conducts KYC (know your client) due diligence research on all major donors (>$20K a year).
> We have established a working relationship with TRM who conduct compliance and back-tracing for all crypto donations.
I honestly doubt that this process would have, or should have, flagged anything about SBF. But I can imagine it helping in other cases, and I think it’s important for CEA to actually be following its stated procedures.
I hope that the “overarching policy for due diligence on all of our donors” that was put together post-Delo in 2021 was well designed. But it’s worth noting that Zach also discussed “increasing the rigor of donor due diligence” in 2023. Maybe the 2023 improvements took the process from good to great. Maybe they suggest that the 2021 policies weren’t very good. It’d be great for the new and improved policy, and how it differs from the previous policy, to be shared (as Zach has suggested it will be) so other orgs can leverage it and to help the entire community understand what specific improvements have been made post-FTX.
> I don’t think the comparison about investigating Dustin is particularly apt, as he didn’t have all the complaints/red flags that SBF did.
And, if we are talking about 2024, there’s another reason it doesn’t seem like a great comparison to me. Researching catastrophic risks (to one’s movement or otherwise) is generally only compelling to the extent that you can mitigate the likelihood and/or effect of those risks. Given the predominance of a single funder, investigating certain risks posed by that funder may not lead to actionable information to reduce risk no matter what the facts are.[1] At some level of vulnerability, the risk becomes akin to the risk of a massive life-extinguishing asteroid crashing into Earth in the next week; I’m just as dead if I know about it a week in advance rather than seconds in advance.
[1] Of course, certain ethical duties would still exist.
I think it depends what sort of risks we are talking about. The more likely Dustin is to turn out to be perpetrating a fraud (which I think is very unlikely!) the more the marginal person should be earning to give. And the more projects should be taking approaches that conserve runway at the cost of making slower progress toward their goals.
> I think it depends what sort of risks we are talking about.
Agree—I don’t think the fatalistic view applies to all Dustin-related risks, just enough to make him a suboptimal comparison here.
To take an FTX-like situation as an example, I doubt many orgs could avoid bankruptcy if they had liability for 4-6 years’ clawback of prior OP grants, and it’s not clear that getting months to years’ worth of advance notice and attempted mitigation would materially reduce the odds of bankruptcy. (As you note, this is extraordinarily unlikely!)
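To make the clawback arithmetic concrete, here’s a minimal sketch with entirely hypothetical numbers (the grant size, clawback window, and reserve level are my illustrative assumptions, not any real org’s finances):

```python
# Illustrative only: hypothetical figures, not any real org's finances.
# The point: clawback liability scales with years of prior grants, so it
# can dwarf any plausible reserve, and advance warning doesn't change
# the balance-sheet arithmetic.

annual_grant = 1.5e6   # hypothetical yearly grant from the dominant funder
clawback_years = 5     # middle of the 4-6 year range mentioned above
reserves = 1.0e6       # hypothetical reserves/runway on hand

exposure = annual_grant * clawback_years
print(f"Clawback exposure: ${exposure / 1e6:.1f}M vs. reserves: ${reserves / 1e6:.1f}M")
if exposure > reserves:
    print("Insolvent on clawback, regardless of how early the warning came.")
```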
Encouraging more people to EtG would be mitigation for the movement as a whole, but its effectiveness would be dependent on (1) the catastrophic fraud actually existing, (2) you having enough reason to believe that to recommend action to other EAs but not enough to go to the media and/or cops and get traction,[1] (3) you persuading the would-be EtGers that circumstances warranted them choosing this path, and (4) your advocacy not indirectly causing prompt public discovery and collapse of the fraud. After all, the value would be knowing of the risk in advance to take mitigating action sufficiently in advance of public discovery. Understanding the true risk a few weeks to months in advance of everyone else isn’t likely to help much at all. Those seem like difficult conditions to meet.
[1] Reporting, but not getting traction from external watchdogs, is possible (cf. Madoff). I have not thought through whether having enough reason to advise other EAs, but not enough to report externally, is possible.