I’d love to see Oliver Habryka get a forum to discuss some of his criticisms of EA, as has been suggested on Facebook.
From the side of EA, largely CEA, and from the side of the rationality community, largely CFAR, Leverage faced efforts to be shoved out of both within the space of a couple of years. Both EA and CFAR thus couldn’t have then, and couldn’t now, say or do more to disown and disavow Leverage’s practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever…
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with—to put it bluntly—the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside.
While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage.
Circa 2018 CEA was literally supporting Leverage/Paradigm on an EA community building strategy event. In August 2018 (right in the middle of the 2017-2019 period at Leverage that Zoe Curzi described in her post), CEA supported and participated in an “EA Summit” that was incubated by Paradigm Academy (intimately associated with Leverage). “Three CEA staff members attended the conference” and the keynote was delivered by a senior CEA staff member (Kerry Vaughan). Tara MacAulay, who was CEO of CEA until stepping down less than a year before the summit to co-found Alameda Research, personally helped fund the summit.
At the time, “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community.” To address those concerns, Kerry committed to “address this in a separate post in the near future.” This commitment was subsequently dropped with no explanation other than “We decided not to work on this post at this time.”
This whole affair was reminiscent of CEA’s actions around the 2016 Pareto Fellowship, a CEA program where ~20 fellows lived in the Leverage house (which they weren’t told about beforehand), “training was mostly based on Leverage ideas”, and “some of the content was taught by Leverage staff and some by CEA staff who were very ‘in Leverage’s orbit’.” When CEA was fundraising at the end of that year, a community member mentioned that they’d heard rumors about a lack of professionalism at Pareto. CEA staff replied, on multiple occasions, that “a detailed review of the Pareto Fellowship is forthcoming.” This review was never produced.
Several years later, details emerged about Pareto’s interview process (which nearly 500 applicants went through) that confirmed the rumors about unprofessional behavior. One participant described it as “one of the strangest, most uncomfortable experiences I’ve had over several years of being involved in EA… It seemed like unscientific, crackpot psychology… it felt extremely cultish… The experience left me feeling humiliated and manipulated.”
I’ll also note that CEA eventually added a section to its mistakes page about Leverage, but not until 2022, and only after Zoe had published her posts and a commenter on Less Wrong explicitly asked why the mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. The mistakes page now acknowledges other aspects of the Leverage/CEA relationship, including that Leverage had “a table at the careers fair at EA Global several times.” Notably, CEA has never publicly stated that working with Leverage was a mistake or that Leverage is problematic in any way.
The problems at Leverage were Leverage’s fault, not CEA’s. But CEA could have, and should have, done more to distance EA from Leverage.
Very interesting, thanks so much for doing this!
I dunno, I think a funder that had a goal and mindset of funding EA community building could just do stuff like fund cause-agnostic EAGs and maintenance of a cause-agnostic effectivealtruism.org, and not really worry about things like the relative cost-effectiveness of GCR community building vs. GHW community building.
Some Prisoner’s Dilemma dynamics are at play here, but there are some important differences (at least relative to the standard PD setup).
The PD setup presupposes guilt, which really isn’t appropriate in this case. An investigation should be trying to follow the facts wherever they lead. It’s perfectly plausible that, for example, an investigation could find that reasonable actions were taken after the Slack warning, that there were good reasons for not publicly discussing the existence or specifics of those actions, and that there really isn’t much to learn from the Slack incident. I personally think other findings are more likely, but the whole rationale for an independent investigation is that people shouldn’t have to speculate about questions we can answer empirically.
People who aren’t “guilty” could “defect” and do so in a way where they couldn’t be identified. For example, take someone from the EA leaders Slack group who nobody would expect to be responsible for following up about the SBF warnings posted in that group. That person could provide investigators a) a list of leaders in the group who could reasonably be expected to follow up and b) which of those people acknowledged seeing the Slack warnings. They could do so without revealing their identity. The person who discussed the Slack warnings with the New Yorker reporter basically followed this template.
Re: your comment that “if other prisoners strongly oppose cooperation, they may find a way to collectively punish those who do defect”, this presumably doesn’t apply to people who have already “defected”. For instance, if Tara has a paper trail of the allegations she raised during the Alameda dispute and shared that with investigators, I doubt that would burn any more bridges with EA leadership than she’s already burned.
I agree this would be a big challenge. A few thoughts…
An independent investigation would probably make some people more likely to share what they know. It could credibly offer them anonymity while still granting proper weight to their specific situation and access to information (unlike posting something via a burner account, which would be anonymous but less credible). I imagine contributing to a formal investigation would feel more comfortable to a lot of people than weighing in on forum discussions like this one.
People might be incentivized to participate out of a desire not to have the investigation publicly report “person X declined to participate”. I don’t think publicly reporting that would be appropriate in all cases where someone declined to participate, but I would support that in cases where the investigation had strong reasons to believe the lack of participation stemmed from someone wanting to obscure their own problematic behavior. (I don’t claim to know exactly where to draw the line for this sort of thing).
To encourage participation, I think it would be good to have CEA play a role in facilitating and/or endorsing (though maybe not conducting) the investigation. While this would compromise its independence to some degree, that would probably be worth it to provide a sort of “official stamp of approval”. That said, I would still hope other steps would be taken to help mitigate that compromise of independence.
As others have noted, some people would likely view participation as the right thing to do.
Have you directly asked these people if they’re interested (in the headhunting task)? It’s sort of a lot to just put something like this on someone’s plate (and it doesn’t feel to me like a-thing-they’ve-implicitly-signed-up-for-by-taking-their-role).
I have not. While nobody in EA leadership has weighed in on this explicitly, the general vibe I get is “we don’t need an investigation, and in any case it’d be hard to conduct and we’d need to fund it somehow.” So I’m focusing on arguing the need for an investigation, because without that the other points are moot. And my assumption is that if we build sufficient consensus on the need for an investigation, we could sort out the other issues. If leaders think an investigation is warranted but the logistical problems are insurmountable, they should make that case and then we can get to work on seeing if we can actually solve those logistical problems.
surely the investigation should have remit to add questions as it goes if they’re warranted by information it’s turned up?
Yeah, absolutely. What I had in mind when I wrote this was this excerpt from an outstanding comment from Jason on the Mintz investigation; I’d hope these ideas could help inform the structure of a future investigation:
How The Investigation Could Have Actually Rebuilt Lost Trust and Confidence
There was a more transparent / credible way to do this. EVF could have released, in advance, an appropriate range of specific questions upon which the external investigator was being asked to make findings of fact—as well as a set of possible responses (on a scale of “investigation rules this out with very high confidence” to “investigation shows this is almost certain”). For example—and these would probably have several subquestions each—one could announce in advance that the following questions were in scope and that the investigator had committed to providing specific answers:
Did anyone associated with EVF ever raise concerns about SBF being engaged in fraudulent activity? Did they ever receive any such concerns?
Did anyone associated with EVF discourage, threaten, or seek to silence any person who had concerns about illegal, unethical, or fraudulent conduct by SBF? (cf. the “Will basically threatened Tara” report).
When viewed against the generally-accepted norms for donor vetting in nonprofits, was anyone associated with EVF negligent, grossly negligent, or reckless in evaluating SBF’s suitability as a donor, failing to raise concerns about his suitability, or choosing not to conduct further investigation?
That kind of pre-commitment would have updated my faith in the process, and my confidence that the investigation reached all important topics. If EVF chose not to release the answers to those questions, it would have known that we could easily draw the appropriate inferences. Under those circumstances—but not the actual circumstances—I would view willingness to investigate as a valuable signal.
Here’s an update from CEA’s operations team, which has been working on updating our practices for handling donations. This also applies to other organizations that are legally within CEA (80,000 Hours, Giving What We Can, Forethought Foundation, and EA Funds).
“We are working with our lawyers to devise and implement an overarching policy for due diligence on all of our donors and donations going forward.
We’ve engaged a third party who now conducts KYC (know your client) due diligence research on all major donors (>$20K a year).
We have established a working relationship with TRM who conduct compliance and back-tracing for all crypto donations.”
I honestly doubt that this process would have, or should have, flagged anything about SBF. But I can imagine it helping in other cases, and I think it’s important for CEA to actually be following its stated procedures.
I hope that the “overarching policy for due diligence on all of our donors” that was put together post-Delo in 2021 was well designed. But it’s worth noting that Zach also discussed “increasing the rigor of donor due diligence” in 2023. Maybe the 2023 improvements took the process from good to great. Maybe they suggest that the 2021 policies weren’t very good. It’d be great for the new and improved policy, and how it differs from the previous policy, to be shared (as Zach has suggested it will be) so other orgs can leverage it and so the entire community can understand what specific improvements have been made post-FTX.
That may well have been OP’s thinking and they may have been correct about the relative cost effectiveness of community building in GCR vs. GHW. But that doesn’t change the fact that this funding strategy had massive (and IMO problematic) implications for the incentive structure of the entire EA community.
I think it should be fairly uncontroversial that the best way to align the incentives of organizations like CEA with the views and values of the broader community would be if they were funded by organizations/program areas that made decisions using the lens of EA, not subsets of EA like GCR or GHW. OP is free to prioritize whatever it wants, including prioritizing things ahead of aligning CEA’s incentives with those of the EA community. But as things stand significant misalignment of incentives exists, and I think it’s important to acknowledge and spread awareness of that situation.
Just to clarify, I agree that EA should not have been expected to detect or predict FTX’s fraud, and explicitly stated that[1]. The point of my post is that other mistakes were likely made, we should be trying to learn from those mistakes, and there are worrisome indications that EA leadership is not interested in that learning process and may actually be inhibiting it.
[1] “I believe it is incredibly unlikely that anyone in EA leadership was aware of, or should have anticipated, FTX’s massive fraud.”
In 2023, 80% of CEA’s budget came from OP’s GCRCB team. This creates an obvious incentive for CEA to prioritize the stuff the GCRCB team prioritizes.
As its name suggests, the GCRCB team has an overt focus on Global Catastrophic Risks. Here’s how OP’s website describes this team:
We want to increase the number of people who aim to prevent catastrophic events, and help them to achieve their goals.
We believe that scope-sensitive giving often means focusing on the reduction of global catastrophic risks — those which could endanger billions of people. We support organizations and projects that connect and support people who want to work on these issues, with a special focus on biosecurity and risks from advanced AI. In doing so, we hope to grow and empower the community of people focused on addressing threats to humanity and protecting the future of human civilization.
The work we fund in this area is primarily focused on identifying and supporting people who are or could eventually become helpful partners, critics, and grantees.
This team was formerly known as “Effective Altruism Community Growth (Longtermism).”
CEA has also received a much smaller amount of funding from OP’s “Effective Altruism (Global Health and Wellbeing)” team. From what I can tell, the GHW team basically focuses on meta charities doing global poverty and animal welfare work (often via fundraising for effective charities in those fields). The OP website notes:
“This focus area uses the lens of our global health and wellbeing portfolio, just as our global catastrophic risks capacity building area uses the lens of our GCR portfolio… Our funding so far has focused on [grantees that] Raise funds for highly effective charities, Enable people to have a greater impact with their careers, and found and incubate new charities working on important and neglected interventions.”
There is an enormous difference between these teams in terms of their historical and ongoing impact on EA funding and incentives. The GCRCB team has granted over $400 million since 2016, including over $70 million to CEA and over $25 million to 80k. Compare that to the GHW team, which launched “in July 2022. In its first 12 months, the program had a budget of $10 million.”
So basically there’s been a ton of funding for a long time for EA community building that prioritizes AI/Bio/other GCR work, and a vastly smaller amount of funding that only became available recently for EA community building that uses a global poverty/animal welfare lens. And, as your question suggests, this dynamic is not at all well understood.
1. I don’t think any EA group outside of FTX would take responsibility for having done a lot ($60k+ worth) of due diligence and investigation of FTX. My impression is that OP considered this as not their job, and CEA was not at all in a position to do this (too biased, was getting funded by FTX). In general, I think that our community doesn’t have strong measures in place to investigate funders. For example, I doubt that EA orgs have allocated $60k+ to investigate Dustin Moskovitz (and I imagine he might complain if others did!).
My overall impression was that this was just a large gap that the EA bureaucracy failed at. I similarly think that the “EA bureaucracy” is much weaker / less powerful than I think many imagine it being, and expect that there are several gaps like this. Note that OP/CEA/80k/etc. are fairly limited organizations with specific agendas and areas of ownership.

I’m very sympathetic to the idea that OP is not responsible for anything in this case. But CEA/EV should have done at least the due diligence that fit their official policies developed in the aftermath of the Ben Delo affair. I think it’s reasonable for the community to ask whether or not that actually happened. Also, multiple media outlets have reported that CEA did do an investigation after the Alameda dispute. So it would be nice to know if that actually happened and what it found.
I don’t think the comparison about investigating Dustin is particularly apt, as he didn’t have all the complaints/red flags that SBF did. CEA received credible warnings from multiple sources about a CEA board member, and I’d like to think that warrants some sort of action. Which raises another question: if CEA received credible serious concerns about a current board member, what sort of response would CEA’s current policies dictate?
Re: gaps, yes, there are lots of gaps, and the FTX affair exposed some of them. Designing organizational and governance structures that will fix those gaps should be a priority, but I haven’t seen credible evidence that this is happening. So my default assumption is that these gaps will continue to cause problems.
2. I think there were some orange/red flags around, but that it would have taken some real investigation to figure out how dangerous FTX was. I have uncertainty in how difficult it would have been to notice that fraud or similar were happening (I previously assumed this would be near impossible, but am less sure now, after discussions with one EA in finance). I think that the evidence / flags around then were probably not enough to easily justify dramatically different actions at the time, without investigation—other than the potential action of doing a lengthy investigation—but again, that doing one would have been really tough, given the actors involved.

Note that actually pulling off a significant investigation, and then taking corresponding actions, against an actor as powerful as SBF, would be very tough and require a great deal of financial independence.
I very much agree that we shouldn’t be holding EA leaders/orgs/community to a standard of “we should have known FTX was a huge fraud”. I mentioned this in my post, but want to reiterate it here. I feel like this is a point where discussions about EA/FTX often get derailed. I don’t believe the people calling for an independent investigation are doing so because they think EA knew/should have known that FTX was a fraud; most of us have said that explicitly.
That said, given what was known at the time, I think it’s pretty reasonable to think that it would have been smart to do some things differently on the margin, e.g. 80k putting SBF on less of a pedestal. A post-mortem could help identify those things and provide insights on how to do better going forward.
3. My impression is that being a board member at CEA was incredibly stressful/intense, in the following months after the FTX collapse. My quick guess is that most of the fallout from the board would have been things like, “I just don’t want to have to deal with this anymore” rather than particular disagreements with the organizations. I didn’t get the impression that Rebecca’s viewpoints/criticisms were very common for other board members/execs, though I’d be curious to get their takes.
This seems like a very important issue. I think one big problem is that other board members/execs are disincentivized to voice concerns they might have, and this is one of the things an independent investigation could help with. Learning that several, or none of, the other board members had concerns similar to Rebecca’s would be very informative, and an investigation could share that sort of finding publicly without compromising any individual’s privacy.
4. I think that OP / CEA board members haven’t particularly focused on / cared about being open and transparent with the EA community. Some of the immediate reason here was that I assume lawyers recommended against speaking up then—but even without that, it’s kind of telling how little discussion there has been in the last year or so.
I suggest reading Dustin Moskovitz’s comments for some specific examples. Basically, I think that many people in authority (though to be honest, basically anyone who’s not a major EA poster/commenter) find “posting to the EA forum and responding to comments” to be pretty taxing/intense, and don’t do it much.

Remember that OP staff members are mainly accountable to their managers, not the EA community or others. CEA is mostly funded by OP, so is basically similarly accountable to high-level OP people. (“Accountable” here means “being employed/paid by”.)
Pretty much agree with everything you wrote here. Though I want to emphasize that I think this is a pretty awful outcome, and could be improved with better governance choices such as more community representation, and less OP representation, on CEA’s board.

If OP doesn’t want to be accountable to the EA community, I think that’s suboptimal though their prerogative. But if CEA is going to take responsibility for community functions (e.g. community health, running effectivealtruism.org, etc.) there should be accountability mechanisms in place.
I also want to flag that an independent investigation would be a way for people in authority to get their ideas (at least on this topic) out in a less taxing and/or less publicly identifiable way than forum posting/commenting.
5. In terms of power, I think there’s a pretty huge power gap between the funders and the rest of the EA community. I don’t think that OP really regards themselves as responsible for or accountable to the EA community. My impression is that they fund EA efforts opportunistically, in situations where it seems to help both parties, but don’t want to be seen as having any long-term obligations or such. We don’t really have strong non-OP funding sources to fund things like “serious investigations into what happened.” Personally, I find this situation highly frustrating, and think it gets under-appreciated.
Very well put!
6. My rough impression is that from the standpoint of OP / CEA leaders, there’s not a great mystery around the FTX situation, and they also don’t see it happening again. So I think there’s not that much interest here into a deep investigation.
I think Zvi put it well: “a lot of top EA leaders ‘think we know what happened.’ Well, if they know, then they should tell us, because I do not know. I mean, I can guess, but they are not going to like my guess. There is the claim that none of this is about protecting EA’s reputation, you can decide whether that claim is credible.”
My model is that the “community” doesn’t really have much power directly, at this point. OP has power, and to the extent that they fund certain groups (at this point, when funding is so centralized), CEA and a few other groups have power.
I more or less agree with this. Though I think some of CEA’s power derives not only from having OP funding, but also the type of work CEA does (e.g. deciding who attends and talks at EAG). And other orgs and individuals have power related to reputation, quality of work, and ability to connect people with resources (money, jobs, etc).
Regarding how different parts of the community might be able to implement changes, it might be helpful to think about “top-down” vs. “bottom-up” reforms.
Top-down reforms would be initiated by the orgs/people that already have power. The problem, as you note, is that “OP and these other top EA groups feel like they just have a lot going on, and aren’t well positioned to do other significant reforms/changes.” (There may also be an issue whereby people with power don’t like to give it up.) But some changes are already in the works, most notably the EV breakup. This creates lots of opportunities to fix past problems, e.g. around board composition since there will be a lot more boards in the post-breakup world. Examples I’d like to see include:
EA ombudsperson/people on CEA’s board and/or an advisory panel with representatives from different parts of the community. (There used to be an advisory panel of this sort, but from what I understand it was consulted on practically, or perhaps literally, nothing.)
Reduced reliance on OP’s GCRCB program as the overwhelmingly dominant funder of EA orgs. I’d like it even more if we could find a way to reduce reliance on OP overall as a funder, but that would require finding new money (hard) or making do with less money (bad). But even if OP shifted to funding CEA and other key EA orgs from a roughly even mix of its Global Catastrophic Risks and its Global Health and Wellbeing teams, that would be an enormous improvement IMO.
The fact that key EA orgs (including ones responsible for functions on behalf of the community) are overwhelmingly funded by a program which has priorities that only align with a subset of the community is IMO the most problematic incentive structure in EA. Compounding this problem, I think awareness of this dynamic is generally quite limited (people think of CEA as being funded by OP, not by OP’s GCRCB program), and appreciation of its implications even more so.
Vastly expanding the universe of people who serve on the boards of organizations that have power, and hopefully including more community representation on those boards.
Creating, implementing, and sharing good organizational policies around COI, donor due diligence, whistleblower protections, etc. (Note that this is supposedly in process.[1])
Bottom-up reforms would be initiated by lay-EAs, the folks who make up the vast majority of the community. The obstacles to bottom-up reforms are finding ways to fund and coordinate them; almost by definition these people aren’t organized.
Examples I’d like to see include:
People starting dedicated projects focused on improving EA governance (broadly defined)
This could also involve a contest to identify (and incentivize the identification of) the best ideas
Establishment of some sort of coalition to facilitate coordination between local groups. I think “groups as a whole” could serve as a decentralized locus of power and a counterbalance to the existing centralized power bases. But right now, I don’t get the impression that there are good ways for groups to coordinate.
EAIF focusing on and/or earmarking some percentage of grantmaking towards improving EA governance (broadly defined). As mentioned earlier, lack of funding is a big obstacle for bottom-up reforms, so the EAIF (~$20m in grants since the start of 2020) could be a huge help.
Individual EAs acting empowered to improve governance (broadly defined), e.g. publicly voicing support for various reforms, calling out problems they see, incorporating governance issues into their giving decisions, serving on boards, etc.
[1] In December, Zach Robinson wrote: “EV also started working on structural improvements shortly after FTX’s collapse and continued to do so alongside the investigation. Over the past year, we have implemented structural governance and oversight improvements, including restructuring the way the two EV charities work together, updating and improving key corporate policies and procedures at both charities, increasing the rigor of donor due diligence, and staffing up the in-house legal departments. Nevertheless, good governance and oversight is not a goal that can ever be definitively ‘completed’, and we’ll continue to iterate and improve. We plan to open source those improvements where feasible so the whole EA ecosystem can learn from EV’s challenges and benefit from the work we’ve done.”
Open sourcing these improvements would be terrific, though to the best of my knowledge this hasn’t actually happened yet, which is disappointing. Though it’s possible this stuff has been shared and I’ve just missed it.
I basically agree with this take. I think an investigation conducted from within the EA community (by someone(s) with a bit of distance from FTX) makes a lot more sense than Mintz v2. Ideally, the questions this investigation would seek to answer would be laid out and published ahead of time. Would also be good to pre-publish the principles that would determine what information would be redacted or kept confidential from public communication around findings.
This is then kind of a headhunting task; but who would take responsibility for that?
If we had one or more ombudspeople or explicit community representation on the CEA board (which I really wish we did), this would be a great role for them. As things stand, my low-conviction take is that this would be a reasonable thing for the new non-OP connected EV board members to take on, or perhaps the community health team. I have some reservations about having CEA involved in this, but also give a lot of weight to Rebecca saying “CEA is a logical place to house” an investigation.
Personally, I’d consider Rethink Priorities to be kind of the default choice to do an investigation; I’ve seen others toss their name around too. It’d be nice to have some process for generating other candidates (e.g. community health coming up with a few) and then some method of finding which of the options had the most community buy-in (e.g. ranked choice voting among everyone who has filled out the EA survey sometime in the last three years; I don’t think this would be an ideal methodology but it’s at least loosely in the ballpark of ways of finding an investigator that the community would find credible).
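For what it’s worth, here’s a minimal sketch (in Python) of the instant-runoff tallying that a ranked-choice vote like this could use. Everything here is hypothetical: the candidate names, the ballots, and the choice of instant runoff as the specific tallying rule.

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked-choice ballots by instant runoff.

    ballots: list of rankings, each ordered most-preferred first.
    Ties for elimination are broken arbitrarily here; a real
    election would need an explicit tie-breaking rule.
    """
    ballots = [list(b) for b in ballots if b]
    while True:
        # Each ballot counts toward its highest-ranked remaining candidate.
        tallies = Counter(ballot[0] for ballot in ballots)
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total:  # strict majority reached
            return leader
        # Eliminate the candidate with the fewest first-choice votes,
        # then drop exhausted ballots (all of their choices eliminated).
        eliminated = min(tallies, key=tallies.get)
        ballots = [[c for c in ballot if c != eliminated] for ballot in ballots]
        ballots = [ballot for ballot in ballots if ballot]

# Entirely hypothetical ballots ranking hypothetical candidate investigators:
ballots = [
    ["Rethink Priorities", "Independent law firm", "Community health team"],
    ["Independent law firm", "Rethink Priorities"],
    ["Rethink Priorities", "Community health team"],
    ["Community health team", "Independent law firm", "Rethink Priorities"],
    ["Independent law firm", "Community health team"],
]
print(instant_runoff(ballots))  # -> "Independent law firm"
```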
I’ll respond to your other points in a separate comment later, but for the sake of clarity I want to give a dedicated response to your summary:
My take is less, “there was some conspiracy where a few organizations did malicious things,” and more, “the EA bureaucracy has some significant weaknesses that were highlighted here.”
I very much agree that “the EA bureaucracy has some significant weaknesses that were highlighted here” is the right framing and takeaway.
My concern (which I believe is shared by other proponents of an independent investigation) is that these weaknesses have not, and are not on track to be, properly diagnosed and fixed.
I think plenty of EA leaders made mistakes with respect to FTX, but I don’t think there was any malicious conspiracy (except of course for the FTX/Alameda people who were directly involved in the fraud). For the most part, I think people behaved in line with their incentives (which is generally how we should expect people to act).
The problem is that we don’t have an understanding of how and why those incentives led to mistakes, and we haven’t changed the community’s incentive structures in a way that will prevent those same sorts of mistakes going forward. And I’m concerned that meaningful parts of EA leadership might be inhibiting that learning process in various ways. I’d feel better about the whole situation if there had been some public communications around specific things that have been done to improve the efficacy of the EA bureaucracy, including a clear delineation of what things different parts of that bureaucracy are and are not responsible for.
One way you could engage would be to share your thoughts on when, generally speaking, you think an independent investigation would be warranted. You wouldn’t have to go into any specific details about this particular incident, you could discuss this in terms of high level principles and considerations that you think should guide the decision.
I’ve written a post adding my own call for an independent investigation, which also outlines new information in support of that position. Specifically, my post documents important issues where EA leaders have not been forthcoming in their communications, troublesome discrepancies between leaders’ communications and credible media reports, and claims that leaders have made about post-FTX reforms that appear misleading.
Indeed. And if EA leaders do believe that the issue is closed or that an investigation would be superfluous (which seems to be a common, if not the default, leadership position), they should make the case for that position explicitly and publicly. As things stand, the clearest articulation I’ve seen as to why there hasn’t been an independent investigation comes from Rob Bensinger’s account of what an unidentified “EA who was involved in EA’s response to the FTX implosion” told him based on information that dated from ~April 2023 and “might be out of date”.
There are currently key aspects of EA infrastructure that aren’t being run well, and I’d love to see EAIF fund improvements. For example, it could fund things like the operation of effectivealtruism.org or the EA Newsletter. There are several important problems with the way these projects are currently being managed by CEA.
Content does not reflect the community’s cause prioritization (a longstanding issue). And there’s no transparency about this. An FAQ on effectivealtruism.org mentions that “CEA created this website to help explain and spread the ideas of effective altruism.” But there’s no mention of the fact that the site’s cause prioritization is influenced by factors including the cause prioritization of CEA’s (explicitly GCR-focused) main funder (providing ~80% of CEA’s funding).
These projects get lost among CEA’s numerous priorities. For instance, “for several years promoting [effectivealtruism.org], including through search engine optimization, was not a priority for us. Prior to 2022, the website was updated infrequently, giving an inaccurate impression of the community and its ideas as they changed over time.” This lack of attention also led to serious oversights like Global Poverty (the community’s top priority at the time) not being represented on the homepage for an extended period. Similarly, Lizka recently wrote that “the monthly EA Newsletter seems quite valuable, and I had many ideas for how to improve it that I wanted to investigate or test.” But due to competing priorities, “I never prioritized doing a serious Newsletter-improvement project. (And by the time I was actually putting it together every month, I’d have very little time or brain space to experiment.)”
There doesn’t seem to be much, if any, accountability for ensuring these projects are operated well. These projects are a relatively small part of CEA’s portfolio, CEA is just one part of EV, and EV is undergoing huge changes. So it wouldn’t be shocking if nobody was paying close attention. And perhaps because of that, the limited public data we have available on both effectivealtruism.org and the EA newsletter doesn’t look great. Per CEA’s dashboard (which last updated these figures in June), after years of steady growth the newsletter’s subscriber count has been falling modestly since FTX collapsed. And traffic to ea.org’s “introduction page”, which is where the first two links on the homepage are designed to direct people, is the lowest it has been in at least 7 years and continues to drift downward.
I think all these problems could be improved if EAIF funded these projects, either by providing earmarked funding (and accountability) to CEA or by finding applicants to take these projects over.
To be clear, these aren’t the only “infrastructure” projects that I’d like to see EAIF fund. Other examples include the EA Survey (which IMO is already being done well but would likely appreciate EAIF funding) and conducting an ongoing analysis of community growth at various stages of the growth funnel (e.g. by updating and/or expanding this work).