Red Teaming CEA’s Community Building Work

Staff from CEA, GWWC, and EA Funds reviewed drafts of this report and provided helpful feedback. I’m particularly grateful for Max Dalton’s thoughtful engagement and openness to discussing CEA’s mistakes. HollyK provided extraordinarily beneficial editing and critiques; I highly recommend other EAs take advantage of her offer to provide editing and feedback. I also appreciate the contributions of other anonymous reviewers. In all cases, I remained anonymous to those providing feedback. Any mistakes are mine.

Introduction

In this submission for the Red Teaming contest, I take a detailed look at CEA’s community building work over the years. One of my major findings is that program evaluations, especially public ones, have historically been rare. I offer evidence that this has resulted in problems persisting longer than they needed to and lessons not being learned as widely or quickly as they could have been.

By sharing my in-depth evaluation publicly, I hope to improve the quality of information underlying current and future community building projects. I also offer specific suggestions for CEA and other community members based on my findings. With a better understanding of problematic patterns that have repeated, our community can identify ways to break those patterns and ideally create new, positive patterns to take their place.

In assessing CEA’s work, I evaluated each of its community building projects individually. To the best of my knowledge this is a significantly more thorough evaluation than has been done to date. For each project, I’ve provided a brief background and a list of problems. In the spirit of Red Teaming, my focus is on the problems. CEA’s community building work has of course had many significant benefits as well. However, those benefits are generally better understood by the broader community and offer fewer learning opportunities.

While conducting this analysis, I’ve tried to follow a suggestion found on the “Mistakes” page of CEA’s website (which includes a non-comprehensive list of mistakes the organization has made with a focus on those that have impacted outside stakeholders): “When you evaluate us as an organization, we recommend using this page, but also looking directly at what we’ve produced, rather than just taking our word for things.” Many, but far from all, of the problems I discuss are mentioned on that page; in those cases I often quote CEA’s characterization of the problems. Based on the evidence I’ve collected, my impression is that the Mistakes page (and other places where CEA discusses its mistakes) generally understates the number, degree, and duration of CEA’s mistakes (so much so that I suggest the Mistakes page be radically overhauled or eliminated completely).

I’ve listed CEA’s projects in chronological order, starting with the oldest projects. While this means that some projects that no longer exist (and were run by long-departed management teams) are discussed before projects that are currently impacting the EA community, this approach helps illustrate how CEA’s community building work has, and hasn’t, evolved over time. I argue there has been a pattern of lessons that could have been learned from early projects persisting longer than they should have (even if the lessons seem to have eventually been learned in many cases). The chronological structure helps illustrate this point. Readers are of course welcome to use the table of contents to skip to the projects they want to learn about (or to focus on the Synthesis of Findings and Suggestions sections rather than the lengthy program evaluations).

I found consistent patterns while conducting my “minimal trust investigations” of CEA’s community building work, which I elaborate on later. In short:

  • CEA has regularly lacked the staff needed to execute its ambitious goals, leading to missed deadlines and underdelivering on commitments.

  • Meaningful project evaluations (especially public ones) have been rare, due to lack of capacity, lack of prioritization, and often a lack of necessary data.

  • Without meaningful evaluation, mistakes have been repeated across time and projects.

  • When discussing its mistakes publicly, CEA has typically understated their frequency, degree, and duration; more generally CEA has lacked transparency around its community building programs.

  • In many cases, the EA community has borne negative externalities resulting from these mistakes.

  • CEA’s current management team (in place since 2019) has made significant progress in reducing problems. However, current management continues to deprioritize public program evaluations, raising questions of whether these improvements are sustainable and whether the lessons that led to the improvements have been learned beyond CEA.

In interpreting these findings, and my broader analysis, I hope readers will bear in mind the following caveats.

  • Many/​most of the problems discussed did not take place on the watch of current management. CEA has undergone significant management turnover, with 5 Executive Directors/​CEOs between 2016 and 2019. Over the last 3.5 years, CEA has had stable leadership.

  • It is natural for an organization, especially an ambitious and maturing one, to exhibit problems. I don’t mean to be critical by pointing out that problems existed, though I do think criticism is warranted around the fact that those problems weren’t learned from as much or as quickly as possible.

  • While I generally believe that CEA has underutilized public program evaluations (historically and currently), I commend CEA for its support for the Red Team contest and the open and critical discourse the contest has encouraged.

  • My analysis is largely limited to public information (typically the EA Forum and CEA’s website), which is a shortcoming. Valuable information that I did not have access to includes (but is not limited to) internal CEA data or program evaluations, private discussions, and Slack channels.

  • I had to make judgment calls on what to include in this analysis and which projects constitute “community building”. I tried to strike a balance between including projects that provide learning opportunities and not making this analysis longer than it already is.

My hope is that this analysis leads to stronger EA Community Building projects and a stronger EA community. As EA attracts more people, more funders, and more attention, synthesizing and implementing past lessons is more important than ever.

Synthesis of findings

In this section, I’ll summarize the evidence supporting my assertion that problems in CEA’s community building work (often caused by lack of capacity) persisted longer than they needed to, in large part due to insufficient program evaluation. To do so, I’ll provide a case study on how this pattern played out across several specific projects. Then, I’ll offer a more comprehensive table synthesizing how the problems manifested across all the projects I looked at, the extent to which the problems are ongoing, and suggestions for addressing problems.

Case Study

EA Ventures (2016), EA Grants (2017-2020), and Community Building Grants (2018-present) collectively provide an excellent demonstration of the patterns I’ve observed. Due to their similarities (each was meant to inject funding into the EA community and EA Grants was explicitly framed as a successor to EAV) and timing (roughly one year in between each project launch), one might have expected lessons to have been incorporated over time. Instead, problems persisted.

EAV failed to distribute meaningful sums into the community, with lack of capacity appearing to play a role. By the time EAV was shuttered in late 2016, CEA had identified spreading its staff too thin as one of its “more-significant mistakes.” Despite multiple community requests, CEA never conducted a post-mortem on EAV. As such, it is not clear whether CEA ever identified EAV’s transparency problems, such as not providing the community with accurate information about the project’s status. The efficacy of the few grants EAV did make was never evaluated.

These problems persisted into EA Grants and CBG.

  • Both projects were significantly impacted by a lack of capacity

  • Each project granted much less money into the community than intended (~$2 million granted in 2018-19 vs. ~$6 million intended). Aggressive targets were announced for 2019 despite 2018 grantmaking falling well short of targets.

  • Each provided a series of unrealistic timelines for when applications would be open (starting in early 2018 for EA Grants and extending through early 2021 for CBG).

  • Each program missed publicly communicated timelines about performing a program evaluation. Neither program has published a program evaluation capturing its largest problems, nor published an assessment of its grantmaking.

  • When publicly discussing the mistakes of both projects, CEA has omitted some of their largest problems and understated the mistakes that were mentioned.

  • The problems both projects experienced (particularly the shortfall in grantmaking and the missed timelines) negatively impacted the broader community.

CEA’s current management team has made changes to reduce problematic aspects of these projects. EA Grants was shuttered in 2020, and the CBG program’s scope was radically reduced in mid-2021, easing capacity issues and making it easier to issue accurate communications.

However, earlier improvements were clearly possible. If EA Grants and CBG had properly incorporated lessons from EAV, or even their own early years, significant problems could have been avoided.

Patterns observed across projects

The following table contains data on which of CEA’s programs fit the patterns I’ve described, the current status of those patterns, and suggestions for improvements. It shows that while early projects exhibited serious problems, in recent years CEA has apparently increased its staff capacity, focused on fewer projects, and more regularly met its public commitments (though its record is still not perfect). Less progress has been made on public evaluations of its programs or full acknowledgement of its mistakes.

Readers can click links for more details on how these patterns manifested in specific projects and my suggestions for going forward.

| Pattern | Evidence of Issue | Current Status | Related Suggestions |
|---|---|---|---|
| CEA has regularly lacked the staff needed to execute its ambitious goals, leading to missed deadlines and underdelivering on commitments. | Lack of capacity led to problems in: GWWC; EA Grants; online Groups platform; EA Ventures; EAGx organizer support; EA Funds; CBG program | Meaningful progress has been made since 2019. Significant CEA staff growth, greater focus, and spinoffs of GWWC and EA Funds (and the upcoming Ops spinoff) seem to have helped. Lack of capacity still seems to impact group work, and EA Funds’ work post spinoff from CEA. | Spin off operations; Communicate details of CEA’s improvements; Place more value on experience; Embrace more redundancy in community building; Use targeted pilot programs; Engage with governance questions; Other community builders should prioritize areas neglected by CEA |
| Meaningful project evaluations (especially public ones) have been rare, due to lack of capacity, lack of prioritization, and often a lack of necessary data. | Program evaluations not published for: EA Ventures; Pareto Fellowship; GWWC (last impact report published in 2014); CBG program; group support work. Lack of public grant evaluation for: EA Ventures; EA Grants; EA Funds; CBG | CEA appears to have made progress in conducting internal evaluations. Current management is prioritizing accountability to its board and major funders, so public evaluations remain scarce. | Publish internal EA Grants evaluation; Prioritize new GWWC impact report; Run experimental group support evaluation; Invest in information architecture (esp. grant database); Publish summary of group support learnings; Hire dedicated evaluation staff and publish evaluations; Invest in community-wide metrics |
| Without meaningful evaluation, mistakes have been repeated across time and projects. | Illustrated by the breadth across projects and the duration of the other high-level problems | Under CEA’s current management, problems seem less persistent and a variety of positive steps have been taken to address longstanding issues. Some problems have recurred under current management, particularly around group support and cause representation (though improvements to the latter should be forthcoming). | Hire dedicated evaluation staff and publish evaluations; Have a meaningful mistakes page; Engage with governance questions |
| When discussing its mistakes publicly, CEA has typically understated their frequency, degree, and duration; more generally, CEA has lacked transparency around its community building programs. | Publicly understated problems: GWWC; EA Global marketing; Pareto Fellowship; group support; EA Grants; EA Funds. General lack of transparency: communication of CEA strategy re: cause representation; Pareto Fellowship; EA Ventures; EA Grants; EA Funds | CEA’s Mistakes page, while not meant to be comprehensive, does not include some of CEA’s most significant mistakes and continues to understate some of the listed problems. CEA seems to have improved transparency to its board and major funders, but much less progress has been made on transparency to the community. | Engage with governance questions; Public dashboard of external commitments; Have a meaningful mistakes page; Explicitly and accurately communicate CEA’s strategy; Invest in information infrastructure |
| In many cases, the EA community has borne negative externalities resulting from these mistakes. | Negative impact on the community from: management of EA.org; cause representation in EA content; EA Ventures; CBG program; online group platform; EA Grants; Community Health | Past mistakes may still be having flow-through impacts on the community. Mistakes have been less frequent over the last few years, but the most significant ongoing one (around cause representation) may also negatively impact the community. | Greater emphasis on experience; Embrace redundancy in community building; Other community builders should prioritize areas neglected by CEA; Publish learnings from group support work; Engage with governance questions; Use targeted pilot programs |
| CEA’s current management team (in place since 2019) has made significant progress in reducing problems. | Evidence of improvements: reduced frequency/recurrence of problems; significant CEA staff growth; CEA freeing capacity and focus by spinning off projects | Current management is not prioritizing public program evaluations, raising questions of whether the improvements of the last few years are sustainable and whether the lessons that led to them have been learned beyond CEA. | Invest in community-wide metrics; Embrace redundancy in community building; Invest in information infrastructure; Have a meaningful Mistakes page; Publish learnings from group support work; Communicate details of CEA’s improvements |

In the following section, I elaborate on my suggestions for addressing the problematic patterns I’ve observed. Some suggestions I propose are for CEA specifically, while others are intended for other parts of the EA community or even the community as a whole. In all cases, my intention is to promote a stronger and more effective EA community.

Suggestions

Note

As its title indicates, this section contains suggestions. While for the sake of brevity I may write that certain parties “should” do something, it would be more accurate to preface each suggestion with “Based on my analysis, I believe…” My goal is to offer ideas, not imperatives.

Suggestions for CEA

CEA should hire dedicated evaluation staff and prioritize sharing evaluations publicly

CEA’s program evaluations could be significantly improved with dedicated staff and a commitment to sharing evaluations publicly.

CEA has routinely failed to evaluate its community building projects or significantly delayed evaluations relative to timelines that had been shared publicly (this applies to Pareto Fellowship, EA Ventures, EA Grants, CBG, GWWC, and EA Funds). When CEA has offered an explanation, it has typically been that the evaluation took longer than expected, there was no available staff to work on the evaluation, or that other work had been prioritized instead.

A simple way to facilitate internal evaluation would be to hire dedicated staff (ideally a team rather than an individual) to work on Metrics, Evaluation, and Learning (MEL). The MEL lead role should be framed as a long-term position to promote stability.

Having staff focused on MEL would break the pattern of overwhelmed staff trying to juggle both program management and evaluation of the same program. It would also allow for greater specialization, more continuity and consistency in evaluation techniques, and the design of new projects in ways that facilitate subsequent evaluation. And it would be consistent with the ethos of EA: one of the main ideas listed on effectivealtruism.org is the notion that “We should evaluate the work that charities do, and value transparency and good evidence.”

CEA’s leadership tells me that they’ve been doing some internal program evaluations, but that they prioritize transparency with funders and board members rather than sharing evaluations publicly.[1] This is one of my major cruxes with CEA. I’d hope that dedicated MEL staff would encourage CEA to share more evaluations publicly, which I view as critical for three reasons.

First, public evaluations promote learning. They provide community members with information to update their worldviews, which in turn allows them to operate more effectively. They can also help CEA learn as community members may generate new insights if given access to data. (For example, CEA tells me my analysis has uncovered issues they weren’t aware of, despite using only public information).

Second, public evaluations would promote better governance and accountability. Because CEA explicitly performs functions on behalf of the EA community, its accountability should not be limited to funders and board members. CEA should provide the broader community with information to assess whether it is effectively executing the projects it runs on the community’s behalf.

Third, CEA is highly influential in the EA community. If CEA deprioritizes public evaluations, this behavior could become embedded in EA culture. That would remove valuable feedback loops from the community and raise concerns of hypocrisy since EAs encourage evaluations of other nonprofits.

CEA should publish a post describing the process and benefits of its expansion and professionalization

Other organizations and community members could learn valuable lessons from CEA’s progress in growing and professionalizing its staff.

My analysis shows that CEA has made significant progress in reducing the problems in its community building work. Much of this progress appears attributable to stability in the current management team and increased staff capacity and professionalization (including better board oversight). Many of CEA’s roles are now filled by staff with meaningful experience in similar roles outside of CEA, which has not always been the case.

While my report describes problems that have likely been more frequent and severe than commonly understood, the flip side of that coin is that alleviating those problems has been more beneficial than commonly understood. By publishing details of the benefits and process of its improvements, CEA could help other organizations avoid unnecessary pitfalls and leverage CEA’s experience. Helpful topics could include warning signs that professionalization is needed, tips for finding experienced candidates[2], advice on which aspects of professionalization are most important (e.g. leadership stability vs. experienced staff vs. engaged board), and which roles are most important to professionalize. The EA community has no shortage of young but growing organizations, and the whole community will benefit if they can develop by learning from, rather than repeating, the mistakes of others.

CEA should clearly and explicitly communicate its strategy

Someone engaging with CEA’s Strategy page (or other forums through which CEA communicates its strategy) should come away with a clear understanding of what CEA is, and isn’t, prioritizing.

CEA’s biggest historical problems in this area have been when CEA has managed community resources (e.g. effectivealtruism.org) and favored causes supported by CEA leadership at the expense of causes favored by the broader community. I’m optimistic CEA’s policies (and transparency around those policies) will improve going forward: CEA has shared a draft of a potential public post about this with me (which “was motivated (at least in timing)” by my comments on this topic). If CEA publishes that post and acts consistently with it, I would interpret that as a significant improvement on the status quo. (I have not listed this suggestion as “in progress” since it is unclear whether CEA will publish this post).

CEA should publish what it has learned about group support work and invest in structured evaluation

Funders, group leaders, group members, and EA entrepreneurs would all benefit from learning from CEA’s extensive experience in group support, and would learn even more from a more rigorous assessment of group work going forward.

CEA should synthesize and publicly share the data it has collected and the lessons it has learned from this work. This should be done in one or more standalone posts (past data has been shared in posts covering a variety of subjects making it hard to find). As part of this work, CEA should clarify what responsibilities it is taking on regarding group support (this list has undergone routine and significant changes over the years) and what opportunities it sees for others to perform valuable group work.

While CEA tells me it has shared its lessons with relevant stakeholders, I don’t believe these efforts have been sufficient. As a telling example, the head of One for the World (a group-based community building organization) has been quite vocal about wanting more information about CEA’s group support work. And if public data were easily accessible, the range of people who might engage in group work would likely be much larger than the people CEA currently shares data with.

Synthesizing and distributing lessons would be a good start, but more rigorous analysis going forward is sorely needed. Excellent ideas for how to conduct experimental or quasi-experimental evaluation have already been circulated and generated positive feedback. Now these ideas need buy-in from key stakeholders like CEA, and to be funded and executed.

CEA should have a meaningful mistakes page, or no mistakes page

CEA’s Mistakes page should give readers an accurate understanding of the nature, magnitude, and duration of its major mistakes.

When the Red Teaming contest was launched, Joshua Monrad noted:

“In the absence of action, critiques don’t do much. In fact, they can be worse than nothing, insofar as they create an appearance of receptiveness to criticism despite no actual action being taken. Indeed, when doing things like writing this post or hosting sessions on critiques at EA conferences, I am sometimes concerned that I could contribute to an impression that things are happening where they aren’t.”

My impression of CEA’s Mistakes page, which I’ve referenced numerous times, is that it has been “worse than nothing.”[3] It has routinely omitted major problems (such as the failure of EA Grants, CBG, and EA Ventures to grant the amounts intended), significantly downplayed the problems that are discussed (such as the impact of under-resourcing GWWC and missed commitments around EA Grants and CBG), and regularly suggested problems had been resolved when that was not the case (such as originally claiming that running too many projects was only a problem through 2016 and that EA Funds not providing regular updates and not providing accurate financial data were only problems through 2019). If CEA is going to have a Mistakes page, it should accurately reflect the organization’s mistakes. If CEA is unable or unwilling to do so, it would be better to remove the page entirely.

CEA should consider creating a public dashboard of its commitments to others

A public record of CEA’s external commitments would be a valuable accountability mechanism.

Missed deadlines and commitments have been a recurring problem in CEA’s community building work, often creating difficulties for people, organizations, and funders trying to make plans around CEA’s activities. The prevalence of these missed commitments suggests a lack of accountability. CEA’s understatement of those missed commitments (such as in the EA Grants and CBG programs) suggests it is sometimes unaware of its commitments.

A public dashboard listing CEA’s commitments to the community could help in both regards. It would help the community, and CEA’s management, keep CEA accountable. Simply creating a dashboard wouldn’t ensure that every commitment is kept, but it would encourage CEA to either keep commitments or communicate that they wouldn’t be met as soon as possible.

CEA should consider using targeted pilot programs

Before running projects that serve the entire community, CEA should consider piloting them with narrow groups.

CEA faces a balancing act in its community building work. On one hand, it seems natural for that work to support the entire community, or at least for the entire community to be eligible to receive support. On the other hand, CEA may believe that certain subsets of the community would be particularly easy, or high value, to support.

Based on my analysis, I think CEA should strongly consider piloting new community building programs with narrow populations it expects to have the highest benefits and/​or lowest costs. I’m ambivalent about this suggestion, as I think it’s very valuable for CEA’s services to be widely accessible. However, the track records of the EA Grants and Community Building Grants programs show the merits of a narrow pilot approach.

In each case, the entire community was originally eligible for the program and CEA attempted to sustain this arrangement. But open eligibility strained CEA’s capacity, and both programs ended up narrowing their scopes significantly, via a referral round for EA Grants and a limited list of priority locations for the CBG program. If these programs had been piloted with the reduced scope they ultimately arrived at, applicant and staff time could have been used more efficiently. And if/​when CEA subsequently decided to expand eligibility, it would have done so from a more informed place.

CEA should publish its internal evaluations of EA Grants

Other grantmakers (including individual donors) could learn from the assessments of EA Grants’ grantmaking that CEA internally produced but has not shared.

Nicole Ross (former head of EA Grants) conducted an initial grant review. Publishing a summary of her findings would dramatically improve the community’s knowledge about the program, and would likely provide valuable lessons for other grantmakers. The limited number of grants made by EA Grants relative to other grantmakers (and Ross’ prior work) should make this a tractable exercise, and one that could inform other efforts at retrospective grant evaluations (a topic with significant community interest and growing importance as EA has access to more funding).

While I would love to see an in-depth analysis published, even simple information would be quite informative. How many grants (in terms of number of grants and dollars granted) fell into Ross’ high-level categories (“quite exciting”, “promising”, “lacked the information I needed to make an impact judgment”, and “raised some concerns”)? How were grants split across cause areas? Was there any relationship between the initial assessment and cause area? Did grants made through the referral round seem more, less, or similarly promising compared to grants made through other rounds?

Ideally, a grant assessment would include an analysis of whether the operational mechanics of EA Grants impacted grant quality. Two areas that seem particularly important in this regard are:

Suggestions for the EA Community

The EA community should seriously engage with governance questions

The EA community should prioritize explicit conversations about how governance should work in EA.

First, there should be an explicit conversation about the roles CEA executes on behalf of the broader EA community (managing effectivealtruism.org seems like an obvious example) and the responsibilities and accountability CEA should have in these cases (e.g. should the community have representation on CEA’s board?). This has been attempted in the past, but those attempts have not led to a lasting and transparent solution (in part due to turnover in CEA’s leadership).

Second, an explicit and transparent conversation about how governance should work in EA would be immensely valuable. What (if any) input should the community have on organizational priorities? What constitutes the community for these purposes? What mechanisms should be in place to promote good behavior amongst individuals and organizations? How should accountability be promoted? With its interest in “going meta”, the EA community should be well suited to engage with these questions.

The EA community would be well served by a governance model significantly better resourced and more transparent than the current model. Nonprofits are traditionally governed by boards, but boards might not be the best model for EA. As Holden Karnofsky observes, “boards are weird”, and the current board structures appear under-resourced relative to their oversight responsibilities.

For example, CEA UK’s board has six members, each with significant other responsibilities. The board’s responsibilities include not only overseeing CEA’s wide and growing portfolio of programs, but also overseeing the organizations that are legally a part of CEA (e.g. 80,000 Hours, Giving What We Can, EA Funds, and the Forethought Foundation). The planned spin-off of CEA’s operations department, which supports these disparate organizations, provides an excellent opportunity to rethink governance structures.

EAs should invest in community-wide metrics

The EA community, which places a high value on evidence and data, should invest more in self-measurement.

In my opinion, the best model would be for one organization to have explicit responsibility, and commensurate resources, for developing and measuring community-wide metrics. Ideally this effort would aggregate disparate data sources which have typically been owned by different parties, looked at individually[6], and which have their own weaknesses (e.g. the EA Survey is a rich and valuable data set but is prone to selection bias).

Community-wide metrics should seek to answer critical questions (and raise new ones) such as: How big is the EA community? What factors cause people to join the community? How many people drop out of EA? What drives that attrition? What causes people to stay engaged? What can we learn from people who are value-aligned with EA but are not actively involved in the community? Attempts have been made to answer some of these questions, but they have generally been limited by a lack of data sharing across organizations (and possibly by a lack of analytical capacity).

While CEA is obviously an important source of data (e.g. web analytics on effectivealtruism.org), I doubt it is the right organization to own these efforts. Rethink Priorities, which has experience investigating important community questions, would be a natural candidate; other teams (including newly formed ones) should also be considered.

The EA community should embrace humility

The prevalence of mistakes I’ve observed underscores the value of adopting a humble attitude.

My analysis clearly shows that EA projects can be difficult to execute well. Having a smart, dedicated, value-aligned team with impeccable academic credentials does not guarantee success. CEA’s track record of taking on too many projects at a time, with negative effects for the rest of the community, is an example of the problems that can arise from overconfidence.

Two specific areas where I think EAs could benefit from greater humility are:

  • Greater willingness to acknowledge the scope and impact of mistakes the EA community has made. I found it both telling and worrisome that a discussion of a perceived slowdown in EA’s growth largely ignored the significant role mistakes played. This strikes me as a very dangerous blind spot.

  • Embracing public project evaluation and feedback loops more generally. Evaluating projects or grants comes with a cost, but I think the EA community is too quick to simply assume its own work is operating as intended. Without feedback loops, it’s hard to perform any kind of work well. I worry that many EAs undervalue and underutilize these feedback loops, and that as EA shifts towards longtermism those loops will become longer and rarer. I’m also concerned that EA will lose credibility if it promotes the evaluation of other charities while applying a different standard to EA organizations.

EA funders and entrepreneurs should prioritize areas unserved by CEA’s community building work

CEA has limited capacity and focus, making it imperative that other actors in the community complement the areas CEA is prioritizing.[7]

An example of this happening in practice is the EA Infrastructure Fund accepting applications for paid group leaders outside CEA’s narrow list of priority locations, though without up-to-date grant reports it isn’t clear to what extent the EAIF is truly filling this gap.

To my mind, the area most overlooked by existing community building work is mid-, late-, and post-career EAs, i.e. everyone except students and early career professionals. CEA is explicitly not focusing on this group, and 80k’s work is also oriented toward younger EAs. This isn’t the only area where the EA community is “thin”, but I’d argue it is easily the most important: every EA either already is, or will eventually be, older than the age groups currently being prioritized. Ignoring these demographics is a retention problem waiting to happen.

I would love to see the EAIF and other funders circulate a request for proposals for community building projects serving older EAs (and/​or other underserved areas) and dedicate a meaningful amount of funding for those areas. This would have an added benefit of increasing the number of EAs with significant work experience.

EAs should invest in publicly available information infrastructure/​architecture

Information that would lead to a better understanding of the EA community is often difficult (or impossible) for community members to access.

A particularly valuable piece of infrastructure would be a grants database facilitating analysis across grantmakers and grantmaking programs. Various community efforts (e.g. here and here) have made some progress in aggregating grant data, aided by the excellent grant databases Open Phil and FTX already provide. Other grantmaking programs past and present (e.g. EA Funds, EA Grants, and Community Building Grants) do not offer public data amenable to easy analysis. For example, EA Funds’ grantmaking is listed on the relevant webpages, but this information is both out of date and formatted in a way that makes analysis impossible without extensive data entry. (My understanding is that EA Funds will soon provide a grants database which should be a significant improvement on the status quo).

A unified grants database, or separate but compatible databases across grantmakers, would provide a better understanding of where resources are (and aren’t) allocated. It would also make it easier to assess whether grants (or grantmaking programs[8]) are having their intended purpose. In analyzing CEA’s work, I found little in the way of post-grant evaluation despite consistent community interest (“retrospective grant evaluations” was the most upvoted submission in the Future Fund’s project ideas competition.)
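To make “data amenable to easy analysis” concrete, below is a minimal sketch of the kind of standardized, machine-readable record such a database might contain. The `GrantRecord` fields and the `total_by_cause` helper are hypothetical illustrations for this post, not an existing schema or API.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class GrantRecord:
    """One grant, in a shape any grantmaker could publish (hypothetical schema)."""
    grantmaker: str                   # e.g. "EA Funds" (illustrative values)
    program: str                      # e.g. "EA Infrastructure Fund"
    grantee: str
    amount_usd: float
    date_awarded: date
    cause_area: str                   # e.g. "Global health", "Biosecurity"
    purpose: str                      # short free-text description
    report_url: Optional[str] = None  # link to any public writeup, if one exists


def total_by_cause(records: list[GrantRecord]) -> dict[str, float]:
    """Sum grant dollars by cause area across all grantmakers in `records`."""
    totals: dict[str, float] = {}
    for r in records:
        totals[r.cause_area] = totals.get(r.cause_area, 0.0) + r.amount_usd
    return totals
```

If grantmakers published records in a shared shape like this (or in separate but compatible formats), cross-grantmaker questions such as total funding by cause area or trends over time would reduce to simple queries rather than extensive manual data entry.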

Another valuable data resource would be exportable and searchable databases with information on groups and individuals. This would make it easy to learn how many groups exist, where they are located, how many members they have, and which have paid organizers. It would also provide a better understanding of how many EAs are involved in the community, where they are located, and how they engage with EA.[9]

EA should embrace more redundancy in community building efforts

EA has developed to a point where having multiple people/​organizations working in the same area is a feature, not a bug.

In this report I’ve identified numerous examples of community building projects failing to meet their goals. Just looking at CEA’s grantmaking projects, EA Ventures, EA Grants, EA Funds, and CBGs have all at times delivered less funding than expected to the EA community.

When multiple organizations provide similar functions, it’s not too problematic if one fails to deliver. Historically, though, the EA community has favored specialization over competition. Perhaps this was appropriate, given that EA was young (and often underfunded). But with a more mature EA community, more funding in place, and higher stakes if a critical function isn’t being met, EAs should on the margins be open to more competition/​redundancy. Specialization and competition both have their merits, but EA is currently too reliant on the former.

A related concern is that when multiple candidates could work on a project, EA funders are too quick to assume that CEA is the best option; this seems to be one reason why the EA Hub was recently shuttered.[10] But a better understanding of CEA’s spotty track record in actually executing the community building projects it undertakes (which I hope this report encourages) would lead to different conclusions.

EA organizations and individuals should on the margins place greater value on experience and ability to execute

My research suggests EA organizations and individuals undervalue relevant job experience.

I’ve collected evidence of numerous problems, many of which have persisted long after having been originally identified. These problems likely had a variety of causes, yet I’m confident none can be attributed to CEA staff being insufficiently value-aligned. In many cases, it seems quite reasonable to think that if CEA staff had more domain specific expertise (such as project management experience to help keep the number and scope of projects realistic), many of these problems could have been avoided or mitigated. I’ve also observed major improvements when CEA landed on a stable management team and started to professionalize in earnest, particularly in its operations work.

This is a complex issue (this excellent piece, which I largely agree with, does a good job of exploring some of the nuances). But I think on the margins EA organizations generally undervalue domain specific experience, and EA job applicants are generally too averse to acquiring such experience at jobs outside of EA (and overly incentivized to pursue direct work).

Suggestions currently being implemented

CEA should consider spinning off its operations team (in progress)

Spinning off its operations team (currently in progress) will allow CEA to focus on its core priorities.

Historically, one of CEA’s biggest problems has been trying to do too many things and spreading the organization too thin. This made me concerned that by taking on operations management for a variety of disparate organizations[11], CEA was repeating past mistakes. In early drafts, I suggested CEA spin off its sizable (20 FTE) operations team.

When CEA reviewed these drafts, I was very pleased to learn that this spin-off was already in progress. CEA will announce additional details in the future. I view this as a positive sign that past mistakes are being learned from, which I don’t believe has always been the case. As mentioned elsewhere, I hope CEA uses the spin-off as an opportunity to think about optimal governance models for the affected organizations.

GWWC should prioritize publishing an updated impact report (in progress)

GWWC’s last impact report is extremely out of date, and producing a new one will provide useful information for understanding GWWC and the EA community as a whole.

GWWC has told me they are currently working on an updated report. This will provide helpful information on GWWC’s impact as an organization, on the value of a new pledger (a useful data point for assessing the value of a new EA), and on attrition rates for past pledgers.

Project Evaluations

Giving What We Can

Background

Giving What We Can is a worldwide community of effective givers founded in 2009. GWWC promotes a lifetime 10% giving pledge, as well as a “Try Giving” program for those who find the pledge too large a commitment. The GWWC community has over 7,000 members.

In July 2020, Luke Freeman was hired to lead GWWC, and in December 2020 CEA announced that GWWC would “operate independently of CEA”, though “CEA provides operational support and they are legally part of the same entity.”

Understaffing and underinvesting in GWWC

CEA significantly under-invested in GWWC, causing the EA community to be smaller than it could have been and effective charities to receive less money and achieve less impact.

CEA’s Mistakes page acknowledges under-investing in GWWC, but suggests that only minor problems resulted from it. It ignores the more substantive problem: that a lack of capacity likely contributed to slower growth than GWWC would have otherwise experienced, with knock-on implications for the rest of the EA community.

Lack of transparency into cause of slowdown in pledge takers

CEA has offered conflicting explanations for a slowdown in GWWC pledge taking.

A September 2018 post by Rob Wiblin found CEA’s deprioritization of GWWC slowed GWWC’s membership growth by 25-70% depending on what one assumes about the previous trajectory. The head of GWWC at the time commented on that post, and did not object to blaming GWWC’s deprioritization in early 2017 for the slowdown.

However, CEA’s public communications throughout 2017 made no mention of deprioritizing GWWC and instead suggested it was actively being worked on.[12] And CEA’s 2017 Review explicitly attributed GWWC’s slowdown to “our change from emphasizing recruitment of new members to emphasizing the Pledge as a serious lifetime commitment to be thoroughly considered.”

Either deprioritizing GWWC or emphasizing the gravity of the pledge (or a combination of the two) could plausibly have caused a slowdown. However, these explanations have radically different implications for other community building efforts. CEA’s failure to be transparent about the primary cause of the slowdown meant that important lessons about community building were missed.

Lack of program evaluation

GWWC’s most recent impact report is extremely out of date, having been published in 2015[13] based on donations from 2009-2014.

However, the findings of that report inform much more recent decisions. For instance, the webpage for the EA Infrastructure Fund used to cite the report’s finding of a 6:1 leverage ratio, and in discussing CEA’s plans for 2021, CEO Max Dalton cited the report’s estimated lifetime value of a pledge of $73,000. Much has changed since the report was written,[14] making it disappointing that out-of-date data is still being relied on.

Another important data point with broad community relevance is the attrition rate of GWWC pledgetakers, which I don’t believe CEA/​GWWC has studied since 2015. Rethink Priorities, as part of its work on the EA Survey, examined the issue in 2019 and found “~40% of self-reported GWWC members are not reporting donation data that is consistent with keeping their pledge—far more pledgers than GWWC originally reported based on data ending in 2014.” [NB: the original estimate was ~6%.] This analysis was repeated in 2021, with results that were “a bit more pessimistic.”

Given how relevant this data is to understanding EA retention, GWWC’s failure to conduct a more recent and more thorough analysis is a missed learning opportunity. Fortunately, GWWC is apparently currently working on updating its impact assessment.

Problematic marketing of the GWWC pledge

CEA has at times marketed the GWWC pledge in inappropriate ways.

CEA’s Mistakes page acknowledges that from 2014-2017 “We encouraged student groups to run pledge drives which sometimes resulted in people taking the Pledge at a young age without being encouraged to seriously think it through as a long-term commitment. Some of our communications also presented the Pledge as something to be taken quickly rather than carefully considered.”

Content Creation and Curation (2015-present)

Background

CEA is responsible for creating and curating EA content in a variety of contexts, including cases where CEA is effectively managing community resources. Examples include EA Global (which CEA has been running since 2015), the EA Handbook (CEA produced the 2nd and 3rd editions), and effectivealtruism.org (which CEA has been operating since 2015). EA Global and effectivealtruism.org are arguably two of EA’s most prominent platforms.

These projects are operated across disparate teams at CEA, but I’m aggregating them for simplicity and brevity.

Problems

Lack of transparency around CEA’s strategy

CEA has lacked transparency around its strategy for representing different cause areas.

CEA’s staff and leadership have for quite some time favored longtermist causes, more so than the community at large. Content that CEA has created and curated has often skewed heavily toward longtermist causes (see the Problematic representation in EA content section for more details). This strategy has not always been made clear, and in a December 2019 comment thread Max Dalton (CEA’s Executive Director) acknowledged that “I think that CEA has a history of pushing longtermism in somewhat underhand ways… Given this background, I think it’s reasonable to be suspicious of CEA’s cause prioritisation.”

Dalton’s most detailed descriptions of his thinking on this topic have come in obscure comment threads (here and here), in which he describes a general desire to promote principles rather than causes, but “where we have to decide a content split (e.g. for EA Global or the Handbook), I want CEA to represent the range of expert views on cause prioritization. I still don’t think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%).” (He recently retracted this comment, noting that he now thinks the 70%-80% figure should be closer to 60%.)

As one EA noted at the time, the lack of transparency around this decision was incommensurate with its considerable importance.[15] This topic warrants an explicit and public discussion and explanation of CEA’s policy, and should not be relegated to comment threads on only marginally related posts. I find it notable that someone reading CEA’s strategy page at the time these comments were written would likely come away with a very different understanding of CEA’s approach.[16]

I’m happy to report that Dalton has shared with me a draft of a post he may publish on this topic. I hope he does choose to publish it, as I think it would represent a significant improvement in CEA’s transparency. While I disagree with some details of the draft (for instance, I share concerns others have previously voiced about various biases inherent in deferring to cause prioritization experts) I’m glad to see CEA listening to community concerns and considering more transparency about its strategy.

Problematic representation in EA content

CEA has repeatedly used community forums to promote its own views on cause prioritization rather than community views.

CEA’s Mistakes page notes “we’ve carried out projects that we presented as broadly EA that in fact overrepresented some views or cause areas that CEA favored. We should have either worked harder to make these projects genuinely representative, or have communicated that they were not representative.” The page provides several specific examples that I’ve listed below, along with additional context where relevant.

  • “EA Global is meant to cover a broad range of topics of interest to the effective altruism community, but in 2015 and 2016 we did not provide strong content at EA Global from the area of animal advocacy…This made some community members who focus on animal advocacy feel unvalued.”

    • NB: Another significant reason members who value animal advocacy felt unvalued is that factory-farmed meat was served at EA Global 2015. This post describes the situation, which troubled many people, as this Facebook discussion makes clear. Since then, CEA has only provided vegetarian (and mostly vegan) food at EA Global.

  • “In 2018, we published the second edition of the Effective Altruism Handbook, which emphasized our longtermist view of cause prioritization, contained little information about why many EAs prioritize global health and animal advocacy, and focused on risks from AI to a much greater extent than any other cause. This caused some community members to feel that CEA was dismissive of the causes they valued.”

    • NB: In response to negative feedback on the EA Forum (feedback was even more critical on Facebook), Max Dalton (author of the second edition handbook and current Executive Director of CEA) announced plans to add several articles in the short-term; these do not appear to have ever been added. The release of the 3rd Edition of the Handbook was then delayed due to CEA’s change in leadership.

  • “Since 2016, we have held the EA Leaders Forum, initially intended as a space for core community members to coordinate and discuss strategy. The format and the number of attendees have changed over time, and in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview. While the name is less significant than the structure of the event itself, we should not have continued calling it the “EA Leaders Forum” after it no longer involved a representative group of EA community leaders.”

CEA’s Mistakes page omits several other (including more recent) problems with representativeness:

  • The In-Depth EA Program’s discussion topics include biorisk and AI but nothing on animals or poverty. This content is framed as EA content (rather than CEA’s organizational views).

  • Starting in late 2017 and extending to March 2021 (when I called this issue to CEA’s attention), the Reading List on the effectivealtruism.org homepage included content on a variety of longtermist causes but not Global Poverty (which was the community’s most popular cause at the time per the EA Survey).

  • I’ve argued that the 3rd Edition of the EA Handbook has a skewed cause representation (though not as bad as the 2nd Edition). The 4th Edition, recently soft-launched, looks like a significant improvement on the 3rd Edition.

  • For several years, effectivealtruism.org/resources (one of the main pages on that site) heavily prioritized longtermist content relative to global health and animal welfare. For instance, the “Promising Causes” section listed two articles on AI and one on biosecurity before mentioning animal welfare or global health; moreover, that section came after a section dedicated to “The Long-Term Future”. This page was updated in early 2022, and is now more representative.

Inattention to, and lack of representation on, effectivealtruism.org

CEA manages effectivealtruism.org (the top search result for “effective altruism”) on behalf of the community, but has only recently made it a priority.

For example, from late 2017 to early 2022, the homepage had only very minimal changes.[17] In early 2022, CEA revamped the site, including a major redesign of the homepage and navigation, and published a new intro essay.

CEA also hasn’t shared information about effectivealtruism.org that could be helpful to the rest of the community. Examples include traffic to the site, which pages receive the most (and least) visitors, and the donations that result when the site refers visitors to various donation platforms (EA Funds, GiveWell, etc.)

Lack of EA Global admissions inbox monitoring

In 2021, CEA ignored admissions-related emails for five weeks. As described on CEA’s Mistakes page:

“In the lead-up to EA Global: Reconnect (March 21-22, 2021), we set up an inbox for admissions-related communications. However, the team member who was responsible for the inbox failed to check it. The mistake was discovered after 36 days, a week before the event. While we responded to every message, many people waited a long time before hearing back, some of them having sent follow-up emails when they didn’t receive a timely response.”

Poor communication with EAGx organizers

EAGx organizers have been hampered by CEA’s unresponsiveness and lack of professionalism.

CEA’s Mistakes page acknowledges: “At times, our communication with EAGx organizers has been slow or absent, sometimes impeding their work. For example, in 2016 EAGxBerkeley organizers described unresponsiveness from our staff as a low point in their experience as event organizers.”

Feedback from the 2016 EAGxBerkeley organizers indeed flagged unresponsiveness as a major problem (“There were multiple instances where Roxanne did not respond to our messages for days, and almost every time [she] said [EA Outreach, part of CEA] would do something for us, that thing would not be done by the time they said it would be done.”) However, they also described broader problems including CEA creating artificial constraints[18], a lack of CEA capacity[19], and a general lack of oversight.[20]

Given the scope of these communication issues with the 2016 EAGx organizers, it’s troubling that CEA describes these problems as persisting through 2019.[21] Since late 2021, CEA has had someone working full-time on the EAGx program, and CEA tells me that the degree of support and satisfaction is generally much higher than it was.

Problematic marketing of EAG 2016

CEA’s marketing of EAG in 2016 received substantial community criticism for violating community standards and values.

Some of that criticism related to the frequency of emails; one EA reported “during the final month or so I got up to three emails per day inviting me to EA Global.” Other criticism related to marketing that seemed “dishonest” and/​or “dodgy”. Community comments include (all emphasis added):

  • “Dishonest elements in the marketing beforehand seemed destructive to long-term coordination… I switched from ‘trust everyone at CEA except...’ to ‘distrust everyone at CEA except...’, which is a wasteful position to have to take… dodgy emails convinced approximately −1 of the 12 people I nominated to attend, and now some of my friends who were interested in EA associate it with deception.” (source)

  • “I confess I find these practices pretty shady, and I am unpleasantly surprised that EAG made what I view to be a fairly large error of judgement on appropriate marketing tactics.” (source)

  • “I didn’t end up nominating anybody because I’d rather reach out to people myself. The “via EAG” thing makes me really relieved that I made this choice and will prevent me from nominating people in the future. I’m actually a bit surprised at the strength of my reaction but this would’ve felt like a major violation to me. I really dislike the idea of feeling accountable for words that I didn’t endorse.. After your explanation the practice still does seem (very) deceptive to me.” (source)

  • “I’d recommend all EAs avoid in the future:

    • Sending emails ‘from’ other people. Friends I recommended received emails with ‘from’ name ‘Kit Surname via EAG’. Given that I did not create the content of these emails, this seemed somewhat creepy, and harmed outreach.

    • Untruths, e.g. fake deadlines, ‘we trust Kit’s judgement’, ‘I was looking through our attendee database’, etc. (My vanity fooled me for a solid few seconds, by the way!)” (source)

Poor communication around EAG events

CEA has been unclear about EA Global admissions criteria and dates, leading to community frustration and missed attendance.

CEA’s Mistakes page acknowledges: “As EA Global admissions criteria have changed over time, we have not always communicated these changes adequately to the community, leading to confusion and disappointment.” This confusion seems to have started after EAG 2016 (which courted a large audience via aggressive marketing), as subsequent events were more selective. CEA also notes “In the years since, there has continued to be disagreement and confusion about the admissions process, some of it based on other mistakes we’ve made.”

Other confusion has been driven by a failure to announce conference dates in a timely fashion. When CEA announced high level plans (but not dates) for EAG 2017 in December 2016, one EA noted “the sooner you can nail specific dates, the better!” because “that has always been a huge hurdle for me in past years and why I’ve been unable to attend prior conferences.” In February 2017, two other EAs requested the dates and described how not knowing them was interfering with their plans and ability to attend. While a separate post announcing dates was released in early March 2017, the February comments weren’t responded to until late March.

Pareto Fellowship (2016)

Project background

The Pareto Fellowship took place in the summer of 2016. Per its website, it was meant to provide “training, room and board in the San Francisco Bay, project incubation, and career connections for Fellows to pursue initiatives that help others in a tremendous way.” While the Pareto Fellowship was “sponsored by CEA and run by two US-based CEA staff… the location and a significant amount of the programming were provided by Leverage Research /​ Paradigm Academy.”

In 2021, one of the Fellows described the program as follows:

There were ~20 Fellows, mostly undergrad-aged with one younger and a few older.

[Fellows] stayed in Leverage house for ~3 months in summer 2016 and did various trainings followed by doing a project with mentorship to apply things learnt from trainings.

Training was mostly based on Leverage ideas but also included fast-forward versions of CFAR workshop, 80k workshop. Some of the content was taught by Leverage staff and some by CEA staff who were very ‘in Leverage’s orbit’.

In December 2016, CEA announced the discontinuation of the Pareto Fellowship.

Problems

Severe lack of professionalism

Various aspects of the fellowship were disturbing to participants, including an interview process described as “extremely culty” and “supremely disconcerting”.

“Nearly 500” people applied to the fellowship and “several hundred” semi-finalists were interviewed; two years later anonymous accounts described the interview process as problematic (emphasis added):

“I was interviewed by Peter Buckley and Tyler Alterman [NB: Pareto co-founders] when I applied for the Pareto fellowship. It was one of the strangest, most uncomfortable experiences I’ve had over several years of being involved in EA. I’m posting this from notes I took right after the call, so I am confident that I remember this accurately.

The first question asked about what I would do if Peter Singer presented me with a great argument for doing an effective thing that’s socially unacceptable. The argument was left as an unspecified black box.

Next, for about 25 minutes, they taught me the technique of “belief reporting”. (See some information here and here). They made me try it out live on the call, for example by making me do “sentence completion”. This made me feel extremely uncomfortable. It seemed like unscientific, crackpot psychology. It was the sort of thing you’d expect from a New Age group or Scientology.

In the second part of the interview (30 minutes?), I was asked to verbalise what my system one believes will happen in the future of humanity. They asked me to just speak freely without thinking, even if it sounds incoherent. Again it felt extremely cultish. I expected this to last max 5 minutes and to form the basis for a subsequent discussion. But they let me ramble on for what felt like an eternity, and there were zero follow up questions. The interview ended immediately.

The experience left me feeling humiliated and manipulated.”

Responding to this comment, another EA corroborated important elements (emphasis added):

I had an interview with them under the same circumstances and also had the belief reporting trial. (I forget if I had the Peter Singer question.) I can confirm that it was supremely disconcerting.

At the very least, it’s insensitive—they were asking for a huge amount of vulnerability and trust in a situation where we both knew I was trying to impress them in a professional context. I sort of understand why that exercise might have seemed like a good idea, but I really hope nobody does this in interviews anymore.

Questionable aspects of Leverage/​Paradigm’s norms and behavior extended beyond the interview process. In some circles it is “common knowledge” that in the Leverage/​Paradigm community, “using psychological techniques to experiment on one another, and on the “sociology” of the group itself, was a main purpose of [Leverage]. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one’s belief structure, and experimental group dynamics.”

Responding to that characterization, one fellow recounted that “The Pareto program felt like it had substantial components of this type of social/​psychological experimentation, but participants were not aware of this in advance and did not give informed consent. Some (maybe most?) Pareto fellows, including me, were not even aware that Leverage was involved in any way in running the program until they arrived, and found out they were going to be staying in the Leverage house.” This apparently affected how fellows perceived the program.[22]

The Pareto Fellowship was unprofessional in more mundane ways as well. CEA now reports that “A staff member leading the program appeared to plan a romantic relationship with a fellow during the program.” CEA is unaware of any harm that resulted, but acknowledges that “due to the power dynamics involved” this was “unwise” and “may have made some participants uncomfortable.” CEA’s characterization of the mistake understates its gravity, as it omits the context that the parties were living in the same house (which also served as the program location) and that the fellow was not aware of this arrangement beforehand (and presumably did not have alternative lodging options if they were uncomfortable).

Lack of program evaluation, and missed commitments to conduct one

CEA explicitly committed to publishing a program review, but did not deliver one.

Various EAs requested that CEA conduct a post-mortem of the Pareto Fellowship[23] to help the community learn from it. In its 2017 Fundraising Document CEA promised “a detailed review of the Pareto Fellowship is forthcoming.” In response to a comment that “multiple friends who applied to the Pareto Fellowship felt like it was quite unprofessionally run” CEA staff reiterated that an evaluation was “forthcoming”, but it was never published.

Failure to acknowledge degree or existence of problems

A recurring theme has been CEA’s disinclination to acknowledge the extent (or even the existence) of the Pareto Fellowship’s problems.

CEA has never publicly acknowledged that it committed to, but did not deliver, a program evaluation. The problems with the interview process were posted on the Forum in summer of 2018, but CEA did not add them to its Mistakes page until 2020.

The observations I quoted earlier from a former fellow were comments on a September 2021 LessWrong post describing problematic “common knowledge” about Leverage. Julia Wise from CEA’s community health team responded that “CEA regards it as one of our mistakes that the Pareto Fellowship was a CEA program, but our senior management didn’t provide enough oversight of how the program was being run” and apologized to “participants or applicants who found it misleading or harmful in some way.”

Roughly two weeks later, a former Leverage member published a disturbing account of her experiences during and after being a part of that community (including highly problematic handling of power dynamics), prompting a question in the aforementioned thread about why CEA’s Mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. Wise responded that “we’re working on a couple of updates to the mistakes page, including about this.” In January 2022, the Mistakes page was updated to include references to Leverage’s role in the Pareto Fellowship, the attempted romantic relationship (the first public mention of said relationship), and a section providing “background on CEA’s relationship with Leverage Research.”

Lack of program transparency

Very little public information about the Pareto Fellowship exists.

CEA announced that there were 18 Pareto Fellows and that “the program created 14-15 significant plan changes”. But neither the identities of the fellows nor the nature of their projects (each intended as “far more than an internship”) was ever published.

EA Ventures (2015)

Project background

EA Ventures (EAV) was launched in February 2015 as “a project of CEA’s Effective Altruism Outreach initiative.” Its goal was “to test the theory that we can stimulate the creation of new high impact organizations by simply signaling that funding is available.”

Projects looking for funding were invited to “apply and go through a systematic evaluation process” after which the EAV team would “introduce projects that pass the evaluation to our network of individual and institutional funders and help find new funders if needed.” The EA Ventures homepage listed over 20 funders, including major funders like Jaan Tallinn and Luke Ding.

EAV went on to release a list of projects and project areas that they would “like to fund” and received over 70 applications before their first deadline. By the end of 2015, EAV had “received around 100 applications and had helped move around $100,000 to effective organizations”. However, as discussed below, minimal information is available about these grants.

The project was closed in 2016.

[Note: updated in July 2023 to reflect that EAV was launched in 2015, not 2016.]

Problems

Granting negligible funds relative to expectations and resources invested in project

EAV failed to meaningfully connect projects with funding.

The first project it funded was EA Policy Analytics, which received $19,000. A paper by Roman V. Yampolskiy thanks EA Ventures (among others) for “partially funding his work on AI Safety.” I’ve found no other records of specific grants that EAV facilitated, nor any post-grant evaluations of grants that were made.

The ~$100,000 distributed pales in comparison to the resources invested in EAV. Four staff members were involved in the project (though they had other responsibilities as well). These staff needed to spend significant time building relationships with funders, developing an advisory board, planning the project, and evaluating the ~100 applications that were submitted. Each of those applications also required time and energy from the applicants. The evaluation process itself seems to have been very time consuming to develop and implement, especially relative to the amount of money ultimately distributed.[24]

Lack of program evaluation

Despite repeated requests from the EA community (e.g. here, here, and here), a proper post-mortem on EAV has never been published.

When piecemeal evaluations have surfaced, they’ve offered conflicting evidence as to why EAV failed. In a 2017 comment thread, EAV co-founder Kerry Vaughan wrote: “We shut down EA Ventures because 1) the number of exciting new projects was smaller than we expected; 2) funder interest in new projects was smaller than expected and 3) opportunity cost increased significantly as other projects at CEA started to show stronger results.”

Points 1 and 2 suggest that a lack of projects worth funding was the problem. However, in a June 2015 project update Vaughan wrote that both the number and quality of applications exceeded his expectations.[25] In late 2015, the EAV team again indicated that they thought the project was going well and warranted further investment.[26]

In June 2015, Vaughan also noted that the team was “mostly bottlenecked on time currently” due to competing projects (related to the third point he later raised) and expressed interest in finding someone to help with evaluations. Three people commented offering to help; there is no evidence the EAV team responded to those offers.

In 2017, Vaughan also suggested that “Part of the problem is that the best projects are often able to raise money on their own without an intermediary to help them. So, even if there are exciting projects in EA, they might not need our help.” That explanation seems quite different from the original three reasons he supplied; it also seems like it would be easy to substantiate by listing specific high-quality projects that applied to EAV but were funded by others instead.

I (and others) suspect the main reason EAV failed is that it did not actually have committed funding in place. At an EA Global 2017 panel called “Celebrating Failed Projects”, Vaughan confirmed that this played a major role, saying a “specific lesson” he’d learned from EAV was “if you’re doing a project that gives money to people, you need to have that money in your bank account first.”

Lack of transparency and communication regarding program status

Due to EAV’s lack of transparency, community members made decisions based on faulty information.

I know of one EAV applicant who was told in early 2016 that EAV would not be funding projects for the foreseeable future. Public messaging did not make this clear. In an August 2016 EA Global talk, Vaughan discussed EAV as an ongoing project. As late as October 2016, the EAV website did not indicate the project was stalled and CEA’s 2016 Annual Review made no mention of EAV. There were never any posts on the EA Forum or the EA Facebook group indicating that EAV had closed.

The first acknowledgement of the closure I’ve seen was the aforementioned February 2017 Forum comment. Since the EA community was not informed that EAV was not providing material funding to projects, nor when the project shut down, community members were left to operate with faulty information about the availability of funding for projects.[27]

Group support (2016-present)

Background

CEA’s earliest group support came through GWWC chapters (though legally part of CEA, GWWC operated independently until 2016) and EA Build (a long-defunct CEA project that supported groups). Since GWWC is discussed separately and I found no meaningful information about EA Build, this section focuses on CEA’s group support after 2016.

Since then, CEA has provided support for local and university groups in a variety of ways. These include providing financial support for group expenses and (starting in 2018 through the Community Building Grant or CBG program) salaries for select group organizers.

CEA also helps produce the EA Group Organizers newsletter, has run retreats for group leaders, and provides a variety of online and human resources to help groups operate. In 2021 CEA started running a group accelerator and virtual programs, significantly narrowed which groups were eligible for the CBG program, and discontinued its Campus Specialist program.

Problems

Poor communication and missed commitments (and minimizing these mistakes)

CEA has routinely missed deadlines and other commitments related to group support, making it hard for other community members to plan and operate effectively.

CEA’s Mistakes page acknowledges a single instance of missing a deadline for opening CBG applications[28]; this missed deadline was actually part of a pattern.

In November 2018, CEA announced plans to run a round of CBG applications in “summer 2019”. In October 2019, CEA acknowledged missing that deadline and announced plans for “rolling applications” rather than scheduled application rounds. This new process lasted less than a year: in July 2020 CEA provided an update that “We’ll temporarily stop accepting new applications to EA Community Building Grants from the 28th of August” and announced plans to re-open applications “around January 2021.” This deadline wasn’t met, and when an update was finally provided in March 2021, the only guidance given was “we will give an update on our plans for opening applications by June 1st.” CEA met this deadline by announcing in May 2021 that “The Community Building Grants (CBG) programme will be narrowing its scope to support groups in certain key locations and universities.”

This poor communication made it hard for groups and group organizers to make meaningful plans related to the CBG program, especially for groups outside the “key locations and universities” that CEA ultimately decided to support. Even other funders were apparently confused: the EAIF at one point referred groups seeking funding to the CBG program until I pointed out that CBG applications had been closed for months and did not appear likely to reopen soon.

CEA’s Mistakes page does not discuss other aspects of its group support work that also experienced “poor communication and missed deadlines”. For instance, evaluation of the CBG program was routinely delayed relative to timelines that were publicly communicated. Also, CEA’s efforts to deliver a Groups Platform were delayed numerous times and commitments to deliver updates on the status of that platform were not met.

Missed commitments around group platform

CEA’s repeated missed commitments and poor communication around delivering an online platform for groups interfered with other community builders’ efforts.

CEA’s 2017 review acknowledged that “the launch of the EA Groups platform has been delayed multiple times while we have been building the capacity in our Tech Team.” That post also discussed plans to roll the platform out in January 2018.

However, in late March 2018 the leader of that project posted that the project would be delayed at least another six weeks, and laid out the reasons and implications of that decision:

I (CEA) won’t be developing the EA Groups Platform over the next six weeks. After 6 weeks it’s likely (eg. 75%) that we’ll resume working on this, but we’ll reassess the priority of this relative to other EA group support projects at the time.

Currently the groups list is incomplete, and the majority of groups don’t have management permissions for their groups, and are not notified of people signing up to their group mailing list via the platform. Because the half-finished state seems to be worse than nothing, I’ll be taking the list down until work on this is resumed.

The primary reasons for delaying work on the platform are a) other EA group support projects (most notably the EA Community Grants process) taking priority and b) changing views of the value of the platform (I now think the platform as a stand-alone piece of infrastructure will be less valuable, than previously, and that a large part of the value will be having this integrated with other group support infrastructure such as funding applications, affiliation etc.).

I made a few mistakes in working on this project:

1) Consistently underestimating the time required to take the groups platform to a usable state.

2) Failing to communicate the progress and status of the project.

The combination of the above has meant that:

1) People signing up to groups members lists haven’t been put into contact with the respective groups.

2) People have been consistently waiting for the platform’s functionality, which hasn’t been forthcoming. Plausibly this has also negatively interfered with LEANs efforts with managing the effective altruism hub.

I apologise for the above. I’m hesitant to promise better calibration on time estimates for completion of group support projects in future, I’ll make sure to communicate with group leaders about the status of similar projects in future, so that if a decision is made to deprioritise a particular project, group leaders will know as soon as possible.

I’ll post an overview of CEA’s current EA group support priorities within the next two weeks.

No overview of CEA’s group priorities was published in the following two weeks (or anytime soon thereafter). To the best of my knowledge, CEA did not launch an online Groups platform until several years later.[29] It seems more than “plausible” that this “negatively interfered with LEANs efforts” as there is clear evidence that LEAN’s strategy assumed CEA would produce the platform.

Granting significantly less money than planned through CBG program

CEA’s grantmaking through the CBG program fell well short of plans.

CEA’s 2018 review announced 2019 plans for “a regranting budget of $3.7M for EA Grants and EA Community Building Grants.” While there was no specific budget provided for the CBG program, in January 2020 CEA acknowledged “we spent significantly less on EA Grants and CBGs in 2018 than the $3.7M announced in the 2018 fundraising post.” CEA later reported spending ~$875,000 on CBGs in 2019 (and just ~$200,000 on EA Grants).

Understaffing/​Underestimating necessary work

CEA’s staffing of group work has not been commensurate with its goals, leading to missed commitments and problems for other community members.

Capacity problems include:

  • Lack of capacity was cited as the reason why CBG applications remained closed longer than expected in early 2021

  • “Consistently underestimating the time required to take the groups platform to a usable state” led to the Groups Platform being put on hold for years, even after its launch had already “been delayed multiple times while we have been building the capacity in our Tech Team.”

  • In 2017 CEA “began a beta version of an EA Conversations platform to facilitate conversations between EAs but discontinued work on it despite initial success, largely because of competing time demands.”

  • Underestimating the required work was among the reasons why a timeline for an impact evaluation of the CBG program was not met

  • CEA’s Mistakes page notes “Many CBG recipients expected to receive more non-monetary support (e.g. coaching or professional development) than we were able to provide with our limited staff capacity. We think this is because we were too optimistic about our capacity in private communication with recipients.”

  • In spring of 2022, the leader of CEA’s groups work reflected “[Over the last year] I think I tried to provide services for too many different types of group leaders at the same time: focus university groups, early stage university groups, and city/​national groups… This meant that I didn’t spend as much on-the-ground time at focus universities as I think was needed to develop excellent products… I didn’t generate enough slack for our team for experimentation. Demand for basic support services at focus universities more than tripled… This meant that our team was growing just to keep up with services that group leaders expected to receive from us, stretching our team to capacity. This left little time for reflection, experimentation, and pivoting.”

CEA’s capacity constraints have in many cases not been clearly communicated to the rest of the EA community, making it hard for others to make informed decisions.

Lack of transparent program evaluation for CBG program

CEA has not published an impact review of the CBG program, despite discussing plans and timelines to do so on multiple occasions.

In November 2018, CEA announced plans to “complete an impact review” for the CBG program “in the first half of 2019”. In response to a late-January 2019 question about when the evaluation would be conducted, CEA pushed this deadline back modestly, writing: “the impact evaluation will take place in the summer of 2019, likely around August.”

In May 2019, the November announcement was updated to note “We now expect that the impact review will not be completed in the first half of 2019”; I don’t believe this update was communicated elsewhere. A July 2019 post described an intention to “complete a deeper review of the programme’s progress by the end of this year.” In October 2019, a second response to the question posed in late January indicated that the evaluation remained incomplete and that publishing impact information had been deprioritized.[30]

The most relevant data I’ve seen for evaluating the CBG program came in CEA’s 2020 review (after nearly three years of operation); even with that data I find it hard to assess how well the CBG program is performing, either in absolute terms or relative to other group support work CEA could prioritize instead. Other community builders doing group support work also appear uninformed about the CBG program’s impact.

Poor metric collection and program evaluation

CEA has invested significant amounts of time, money, and energy into group support but has published little in the way of actionable insights to inform other community builders.

Despite having extensively researched the publicly available information about CEA’s group support work, I find it very difficult to gauge the effectiveness of the work, and especially difficult to know which of CEA’s various programs have been most impactful.[31] I’d feel largely at a loss if I were allocating human or financial resources.

A recent comment from Rossa O’Keeffe-O’Donovan (echoing longstanding community concerns[32]) provides an excellent summary of the situation and is consistent with views I’ve heard from other community members:

It’s bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of “strong” evidence of the impact of various types of community building /​ outreach, in particular local/​student groups. I’d like to see more by way of baking self-evaluation into the design of community building efforts, and think we’d be in a much better epistemic place if this was at the forefront of efforts to professionalise community building efforts 5+ years ago.

By “strong” I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods—i.e. not necessarily RCTs where these aren’t practical (though it would be great to see some of these where they are!), but some sort of “difference in difference” style analysis, or before-after comparisons. For example, how do groups’ key performance stats (e.g. EA’s ‘produced’, donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full/​part time salaried group organiser?

CEA has been aware of these problems for a long time, having acknowledged in 2017: “Our work on local groups was at times insufficiently focused. In some cases, we tried several approaches, but not long enough to properly assess whether they had succeeded or not.”

Five years later, CEA still struggles to design programs in a way that is conducive to evaluation. In April 2022, the leader of CEA’s group team reflected:

I think we tried to build services for group leaders that had long feedback loops (e.g. hiring for Campus Centres is a 6 month process, developing and designing metrics for groups involves at least a semester to see if the metric is helpful + lots of time communicating). We could have tested these services faster, conducted group leader interviews to shorten these feedback loops, and potentially even chosen to provide services that even had quicker feedback.

EA Grants (2017-2020)

Project background

EA Grants was a “successor” to EA Ventures. The program was launched in June 2017 with a goal of providing funding “to help individuals work on promising projects.” The initial grant round had a budget of £500,000 (~$650,000); unlike EAV this funding was secured ahead of time. The launch post indicated that “if we feel that we have been able to use money well through this project, we will allocate new funds to it in 2018.”

The initial grant round attracted 722 applicants, and ended up providing ~$480,000 to 21 grantees.[33] More details about that round can be found in this writeup, including a list of grantees.

CEA also ran a referral round in early 2018, and another application-based round starting in October 2018. The referral round distributed ~$850,000 and EA Grants distributed another $200,000 in 2019 and early 2020. Public information about where this money went is extremely limited; I’ve summarized the information I’ve seen here.

In November 2019, CEA published a report detailing some of the numerous problems the program had experienced, and noted that “we don’t think it’s likely that we’ll open a new round for EA Grants.” In April 2020, another post confirmed that “EA Grants is no longer considering new grantmaking.”

Problems

Chronic operational problems

Poor record keeping and organization led to missed commitments and bad experiences for grantees and applicants.

After joining CEA in December 2018 to run EA Grants, Nicole Ross published an update in November of 2019 describing serious and widespread operational shortcomings:

We did not maintain well-organized records of individuals applying for grants, grant applications under evaluation, and records of approved or rejected applications. We sometimes verbally promised grants without full documentation in our system. As a result, it was difficult for us to keep track of outstanding commitments, and of which individuals were waiting to hear back from CEA…

A lack of appropriate operational infrastructure and processes resulted in some grant payments taking longer than expected. This lack of grantmaking operational systems, combined with the lack of consolidated records, led to delays of around a year between an individual being promised a grant and receiving their payment in at least one case.[34] We are aware of cases where this contributed to difficult financial or career situations for recipients.

The post did not discuss why these operational problems were not observed during the initial grant round, or why subsequent rounds were launched if they were observed.

Granting significantly less money than planned

EA Grants distributed much less money than intended, falling short of CEA’s grantmaking targets by millions of dollars.

In failing to grant as much as intended (an issue CEA never publicly discussed until I raised it), EA Grants bore an unfortunate resemblance to its predecessor EA Ventures. In December 2017, CEA announced plans to reopen EA Grants in 2018 with rolling applications and a significantly increased budget of £2,000,000 (~$2.6 million) or more. However, the 2018 referral round only granted ~$850,000.[35]

This problem extended into 2019, when CEA’s plans included a combined budget for EA Grants and the Community Building Grants program of $3.7 million. While a planned split between those programs was not given, this budget clearly implied a significant budget for EA Grants. Yet the program granted less than $200,000 in 2019 and early 2020 (while the CBG program granted ~$875,000 in 2019).

Repeated inaccurate communications about timelines and scope

Throughout 2018, the community was repeatedly, but incorrectly, led to believe that EA Grants would re-open soon, with rolling applications and very large grant capacity.

CEA’s original description of these mistakes significantly understated their severity (emphasis added):

In February 2018 we stated in a comment on the EA Forum that we planned to re-open the EA Grants program by the end of that month. Shortly afterwards, we realized that we had underestimated how much work would be involved in running the open round of Grants. We did not issue a public update to our timeline. We opened public applications for EA Grants in September 2018.

It’s true CEA offered an extremely over-optimistic timeline for EA Grants in February 2018; however, the assertion that this was the last such public update is demonstrably false. CEA provided at least four other public updates[36], which were generally overly optimistic about when applications would open and/​or how much money would be granted. (After I pointed this out, CEA updated the copy on its Mistakes page.)

Unrealistic assumptions about staff capacity

CEA repeatedly made unrealistic commitments about EA Grants despite having minimal staff working on the project.

In December 2017, CEA announced plans to scale the EA Grants program significantly in 2018, with a budget of $2.6 million (up from ~$480,000 in 2017)[37] and a “plan to accept applications year-round with quick reviews and responses for urgent applications and quarterly reviews for less urgent applications.” Despite these ambitious goals, the project didn’t have anyone working on it full time until December 2018, and the search for this employee didn’t even begin until July 2018.

EA Grants did receive some part-time resources. But those seem to have gone to the referral round that started in January 2018, leaving even less capacity to work on the larger program that had been promised. Compounding capacity problems, CEA “elected to devote staff time to launching EA Community Building Grants and to launching our individual outreach retreats (such as our Operations Forum) instead of devoting that time to reopening EA Grants for public applications.”

Given the lack of dedicated capacity for EA Grants, the fact that CEA supposedly identified this issue quickly, and the fact that CEA knew dedicated capacity was not coming soon, it is rather baffling that throughout 2018 CEA continued to issue optimistic timelines for re-opening the project. For example:

  • On February 11, CEA discussed plans to open the program by the end of the month with rolling applications; in actuality, staff were working on the referral round that had started in January and on the CBG program, which would launch less than two weeks later.

  • In April, CEA announced plans to reopen EA Grants by the end of June, but didn’t start looking for full-time staff until July.

  • Just one month before finally opening a round of EA Grants in September 2018 (albeit a round with a cap on the number of applications it could process and that granted less than $200,000), CEA was describing plans for “Re-launching EA Grant applications to the public with a £2,000,000 budget and a rolling application.”

At no time did CEA have adequate staff to execute these plans. The significant operational problems EA Grants exhibited suggest that there was not even sufficient capacity to properly execute the referral round, launched in January 2018 as a “stop-gap… so [CEA] could run the project with less staff capacity.”

Indeed, when Nicole Ross was finally hired to work on the project full-time, her diagnosis of the “issues of the program” pointed directly to capacity issues.[38]

Lack of transparency leading to faulty community assumptions

CEA’s faulty communications and missed commitments around EA Grants led other actors to operate on false assumptions.

Given CEA’s frequent communications that a large round of EA Grants was around the corner, it’s not surprising that many people in the EA community operated under that distorted assumption. References to a multimillion dollar EA Grants round can be found in lengthy (>100 comments) Facebook discussions about EA funding mechanisms and Less Wrong discussions about funding opportunities. New projects seeking to change the funding landscape seemed to operate under the assumption that EA Grants was a going concern (e.g. here and here). Other EAs simply expressed confusion about whether and how EA Grants was operating. There are also accounts of donors viewing projects not funded by EA Grants as not worth funding based on that signal, when in reality EA Grants was only open via referrals at the time.

Even Nick Beckstead[39] was unaware of what was going on with EA Grants. When stepping down from managing two EA Funds in spring of 2018, he encouraged “those seeking funding for small projects to seek support via EA Grants… EA Grants will have more time to vet such projects.” At the time, EA Grants was only open via the referral round, did not accept open applications for another five months, and was eight months away from hiring full-time dedicated staff.

The community’s confusion was presumably exacerbated because the EA Grants website was never updated after the first round of grants closed. Perhaps it was for the best that various unexecuted plans to reopen the program in 2018 were not listed on the site. However, it seems unambiguously problematic that even when applications finally re-opened in September 2018, the site was not updated, so visitors still saw a message saying applications were closed. Likewise, the site was never updated to reflect that CEA was unable to make grants for educational purposes, and therefore the site provided examples of “welcome applications” that included unfundable projects (even after a community member pointed this error out).

Lack of post-grant assessment

Despite EA Grants distributing roughly $1.5 million, there has been no public assessment of the efficacy of those grants.

When community members have inquired about evaluating these grants, CEA has replied that evaluation would happen in the future but that it was not clear which individual would be responsible for it. (Examples of these exchanges can be found here and here.) As a result, little is known about how many grants achieved their intended purpose.

In November 2019, CEA’s Nicole Ross flagged a “lack of post-grant assessment” as one of the program’s problems. She indicated that improvements had been made: “Since joining, I have developed a consistent process for evaluating grants upon completion and a process for periodically monitoring progress on grants. CEA is planning further improvements to this process next year.” She also committed to a specific deliverable: “I’m working on a writeup of the grants I’ve evaluated since I joined in December. Once I’ve finished the writeup, I will post it to the Forum and CEA’s blog, and link to it in this post.”

Unfortunately, that writeup was never published. Thus the closest we have to an evaluation is Ross’ extremely brief summary: “upon my initial review, it had a mixed track record. Some grants seemed quite exciting, some seemed promising, others lacked the information I needed to make an impact judgment, and others raised some concerns.”

Since only grantees from the initial 2017 round have been published, it’s hard for community members to conduct their own evaluations. A quick look at the 2017 grantees seems consistent with Ross’ assessment of a “mixed track record.” Some grants, including the largest grant to LessWrong 2.0, clearly seem to have achieved their goals. But for other grants, like “The development of a rationality, agency, and EA-values education program for promising 12-14 year olds”, it’s not obvious that the goal was met.

The lack of grant assessment, or even public information about which grants were made, is especially disappointing given that an express goal of EA Grants was to produce “information value” that the rest of the community could learn from.[40] Ideally, the community could learn about the efficacy of different grants, as well as the process used to make the grants in the first place. The EA Grants process had potential weaknesses (e.g. “Many applicants had proposals for studies and charities [CEA] felt under-qualified to assess”) so understanding whether those weaknesses impacted the quality of grants seems particularly important.

Even descriptive (rather than evaluative) information about the grants has been scarce. Most grantees and grant sizes have not been shared publicly. And even when grants from the first round were published, CEA’s discussion of that grant round neglected to mention those grants were extraordinarily concentrated in certain cause areas.[41]

Insufficient program evaluation

In public discussions of EA Grants’ mistakes, CEA has failed to notice and/​or mention some of the program’s most severe problems.

CEA has occasionally discussed EA Grants’ problems. While these efforts provided some valuable lessons about the program (including some cited in this analysis), they missed some of the program’s biggest problems, including several of those discussed above.

EA Funds (2017-present)

Background

EA Funds is a platform that allows donors to delegate their giving decisions to experts. Funds operate in four cause areas: Global Health and Development, Animal Welfare, EA Infrastructure (EAIF), and the Long-Term Future (LTFF). EA Funds has raised roughly $50 million for those four funds since launching in 2017. EA Funds also facilitates tax-deductible giving for US and UK donors to various organizations.

Originally, the funds were managed by individuals, but in 2018 CEA adopted Fund Management Teams. In July 2020, Jonas Vollmer was hired to lead EA Funds, and in December 2020 CEA announced that EA Funds would “operate independently of CEA” though “CEA provides operational support and they are legally part of the same entity.”

Caveats

This section describes problems both before and after EA Funds’ 2020 spin-off from CEA. While CEA bears responsibility for problems before the spin-off, the new management team is responsible for subsequent problems (except to the extent that CEA was responsible for developing the spin-off plans).

It should also be noted that EA Funds is in the midst of some material changes. In May 2022, GWWC announced that “the donation specific functionality of funds.effectivealtruism.org will be retired and redirected to GWWC’s version of the donation platform” and “EA Funds will continue to manage the grantmaking activities of their four Funds and will at some point post an update about their plans moving forward and this includes some of the reasoning for this restructure decision.” This restructuring will likely impact, and hopefully resolve, some of the ongoing problems I discuss below.

Problems

Failure to provide regular updates

EA Funds has struggled to provide updates on grant activity and other developments, and these problems continue as of this writing despite CEA’s claims to have resolved them.

When EA Funds went live in February 2017, CEA announced plans “to send quarterly updates to all EA Funds donors detailing the total size of the fund and details of any grants made in the period. We will also publish grant reports on the EA Funds website.”

However, these plans weren’t executed. CEA’s Mistakes page acknowledges that “we have not always provided regular updates to donors informing them about how their donations have been used.” That page indicates that this problem only lasted through 2019, and was addressed because “We now send email updates to a fund’s donors each time that fund disburses grants.” That has not been my experience as a donor to EA Funds, as the emails I’ve received about where my donations have gone have been sporadic at best. And EA Funds’ “failure to provide regular updates” seems ongoing.

There was a period during which Fund management teams published detailed grant reports on the EA Forum and the webpages for each Fund; this practice added valuable transparency into the grant process.[42] But these reports have become much less frequent, and transparency around this has been problematic.[43] The Global Health and Development Fund is the only fund that has posted a grant report describing grants made in 2022 (a January grant).

Besides grant reports, other communication one would expect from a donation platform has also been absent. While EA Funds emailed out solicitations in late December 2019 and late November 2020, no such email was sent during giving season 2021. EA Funds has also passed on other opportunities to email donors, such as announcing changes to the fund management teams or the spin-off of EA Funds, or simply sending occasional solicitations. These practices could easily have suppressed donations. My understanding is that in light of the recent restructuring, these sorts of communications will be GWWC’s responsibility going forward.

Slow grant disbursement

The lack of grant activity during EA Funds’ early operations raised concerns from the community. In December 2017 (EA Funds’ first giving season), one EA asked if the platform was still operating. A month later, Henry Stanley wrote a post titled “EA Funds hands out money very infrequently—should we be worried?” which highlighted large pools of ungranted money and the infrequency of grants, then in April published another post with suggestions for improving EA Funds.

CEA staff responded to the April post, suggesting improvements would be forthcoming shortly:

Many of the things Henry points out seem valid, and we are working on addressing these and improving the Funds in a number ways. We are building a Funds ‘dashboard’ to show balances in near real time, looking into the best ways of not holding the balances in cash, and thinking about other ways to get more value out of the platform.

We expect to publish a post with more detail on our approach in the next couple of weeks. Feel free to reach out to me personally if you wish to discuss or provide input on the process.

However, as described in a July 2018 post titled “The EA Community and Long-Term Future Funds Lack Transparency and Accountability”, CEA never published this update. This post observed:

Whether it’s been privately or on the Effective Altruism Forum, ranging from a couple weeks to a few months, estimates from the CEA regarding updates from the EA Funds cannot be relied upon. According to data publicly available on the EA Funds website, each of the Long-Term Future and EA Community Funds have made a single grant: ~$14k to the Berkeley Existential Risk Initiative, and ~$83k to EA Sweden, respectively. As of April 2018, over $1 million total is available for grants from the Long-Term Future Fund, and almost $600k from the EA Community Fund.

In October 2018, CEA announced new fund management teams and introduced a three-time-per-year granting schedule. These changes seem to have addressed the community’s concerns about slow grant disbursement, and CEA’s Mistakes page indicates that this problem only existed from 2017-18.[44] From my correspondence with EA Funds staff, my understanding is that funds are maintaining large but reasonable balances given their grantmaking activity; however, without up-to-date and reliable data on grantmaking and cash balances this is difficult to verify.

Failure to provide accurate data about fund financials

Various attempts to provide the public with information about the financial situation of EA Funds have been plagued by data quality issues.

In August 2018, after EA Funds had been operating for a year and a half, CEA noted that one of the “main things that community members have asked for” was “easy visibility of current Fund balances.” However, an attempt to remedy that in October did not work as planned, meaning fund balances were often outdated from 2016 to 2019.

The dashboards EA Funds currently provides also contain bad data. The “payout amounts by fund” chart is obviously out of date, with multiple funds showing no payouts for over a year. I’m not clear on whether the fund balance dashboard on the same page is accurate, as some of the data seems suspicious, or at least inconsistent with the donations dashboard.[45]

Operational problems

EA Funds exhibited a variety of operational problems, particularly prior to being spun out of CEA.

These missteps include:

  • “A bug in our message queue system meant that some payment instructions were processed twice. Due to poor timing (an audit, followed by a team retreat), the bug was not discovered for several days, leading to around 20 donors being charged for their donations twice.” (source)

  • “We failed to keep the EA Funds website up to date, meaning that many users were unsure how their money was being used.” (source)

  • “A delay in implementing some of the recurring payment processing logic in EA Funds meant that users who created recurring payments before May did not have their subscriptions processed.” (source)

  • Per a late 2019 update: “Most of the content on EA Funds (especially that describing individual Funds) hadn’t been substantially updated since its inception in early 2017. The structure was somewhat difficult to follow, and wasn’t particularly friendly to donors who were sympathetic to the aims of EA Funds, but had less familiarity with the effective altruism community (and the assumed knowledge that entails). At the beginning of December we conducted a major restructure/​rewrite of the Funds pages...”

  • The original user experience on EA Funds contradicted normal fundraising practices and likely suppressed donations as a result. If someone (e.g. a new prospective donor coming from effectivealtruism.org) landed on the main page and clicked the prominent “donate now” button, they were asked to enter their email to create an account before learning about any of the funds or entering a donation amount.

Inadequate staffing

EA Funds has historically had very little staff capacity relative to the scope of the project and the plans communicated to the EA community and public at large.

While CEA’s Mistakes page does not mention understaffing as an issue for EA Funds, CEA has previously acknowledged this problem. In August 2018, Marek Duda wrote: “We are now dedicating the resources required to improve and build on the early success of the platform (though we recognize this has not been the case throughout the timeline of the project).”

In December 2017, it was unclear to community members whether EA Funds was even operational. Yet in early 2018, CEA “made the choice to deprioritize active work on EA Funds in favour of devoting more staff resources to other projects (where active work includes technical work to improve the user experience, operations work to e.g. bring on new grantee organizations or to check in on a regular basis with Fund managers).” This decision to deprioritize EA Funds likely contributed to community dissatisfaction with how the funds were managed (particularly around grant frequency and transparency), which was voiced with increasing frequency starting in January 2018. Notably, when CEA laid out its plans for 2018 in December of the previous year, there was no mention of deprioritizing EA Funds in any way; instead, CEA communicated plans for many substantive improvements to the platform.

The staff working on EA Funds has also turned over considerably, which has presumably exacerbated capacity issues. EA Funds was originally run by Kerry Vaughan when it launched in 2017. Marek Duda (2018) and Sam Deere (2019) then took over responsibility, followed by Jonas Vollmer who was hired in 2020 when EA Funds started operating independently of CEA. In June 2022, Caleb Parikh became the “Interim Project Lead.”

CEA dedicated minimal staff resources to EA Funds while it was in charge of the platform[46], and Fund Managers have also been capacity constrained. When Nick Beckstead stepped down as manager of the EAIF and LTFF (prompted by community complaints about the infrequency of his grantmaking), he noted time constraints had contributed to this issue: “The original premise of me as a fund manager was… that this wouldn’t require a substantial additional time investment on my part.” Beckstead explained that for him “additional time would be better spent on grantmaking at Open Phil or other Open Phil work”, and that “I believe it will be more impactful and more satisfying to our community to have people managing the EA Community and Long-term Future Funds who can’t make most of their grants through other means, and for whom it would make more sense to devote more substantial attention to the management of these funds.” At least one other fund manager has also stepped down due to limited capacity.

The Fund Management Team model, announced in August 2018, has added Fund Manager capacity. But a lack of Manager capacity is still causing problems. In December 2020 one manager reported finding it problematic that “we currently lack the capacity to be more proactively engaged with our grantees.” And in April 2022, another manager explained that grant reports were significantly delayed “because the number of grants we’re making has increased substantially so we’re pretty limited on grantmaker capacity right now.”[47]

Lack of post-grant assessment

EA Funds has received roughly $50 million in donations and has made hundreds of grants, but has never published any post-grant assessments.

EA Funds has also failed to publish any descriptive analysis, such as a meaningful categorization of its grants.

The infrastructure around grant data likely contributes to this lack of post-grant assessment. While Open Philanthropy provides a searchable database of grants it has made, it is difficult and time-consuming to collect data on EA Funds’ grant history. Records of grants are available, but each grant round for each fund is a separate document. Anyone wishing to analyze grants would need to hand enter data about each grant into a spreadsheet. I believe EA Funds plans to release a grant database in the future, which would significantly facilitate analysis of their grantmaking.

Overly positive descriptions of project

While responsible for EA Funds, CEA regularly portrayed the platform in an overly positive light in public communications.

Some of these examples have been mentioned in other sections of this analysis; however, I think aggregating them here provides a helpful perspective.

  • Mischaracterizing community feedback, as in an update CEA published a few months after launch, which originally stated “Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.” The post was later updated to remove this statement after receiving strong pushback from several EAs who observed that CEA had received substantial criticism in other areas (examples: here and here).

  • Regularly setting expectations of up-to-date grant reporting (e.g. here and here) that has not materialized

  • Marketing EA Funds more aggressively than had been communicated to others, e.g. by quickly making EA Funds the main donation option on effectivealtruism.org and GWWC’s “recommendation for most donors”. As one EA put it “I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradiction of what I’d been told earlier, privately and publicly—that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months.”

  • Changing the evaluation bar for money moved through EA Funds to frame this metric as a success. In August 2018, CEA described the project-to-date as “To a large extent… very successful. To date we have regranted (or are currently in the process of regranting) more than $5 million of EA-aligned money to around 50 organizations.” Yet in February 2017 when the project was launched, CEA stated “We will consider this project a success if… the amount we have raised for the funds in the first quarter exceeds $1M.” If CEA had merely hit its $1 million target in the initial quarter and then experienced zero growth (which would presumably have been very disappointing), by August 2018 that pace would have implied roughly $6 million in donations, or more than $7 million if one accounts for giving season.

  • Ignoring counterfactuals in discussions of money moved. There are many reasons to think that much of the money moved through EA Funds would still have been donated to effective charities if EA Funds didn’t exist. These reasons include: the “vast majority” of early donations coming from people already familiar with EA[48], EA Funds’ displacement of the GWWC Trust which was moving significant amounts of money and growing extremely quickly when it was shuttered[49], and EA Funds replacing other effective options on sites like GWWC and effectivealtruism.org.

  • Publicly stating plans for integrating a workplace giving option into EA Funds, which has not happened. “Automation of payroll giving” was mentioned in plans for 2018 and again in August of that year, but has not been implemented.

Community Health (2017-present)

Background

CEA has taken an active role in the health of the broader EA community in a variety of ways over the years. CEA’s Guiding Principles were published in 2017, and since early 2018 Julia Wise has served as a contact person whom community members can reach out to. The “community health” team has been built out over the years and includes five people at the time of writing.

Historically, much of CEA’s community health work has been reactive, responding to concerns raised by individuals or groups in areas such as interpersonal conflicts, online conflicts, personal or mental health problems, diversity, equity, and inclusion, and community health practices. Since late 2021, CEA has been shifting toward more proactive work (e.g. writing content, anticipating future problems or risks, launching the EA Librarian project).

Caveats

Public information about CEA’s community health work is often unavailable, as many concerns are raised and addressed confidentially. It is also difficult to determine counterfactuals around what would have happened if CEA had not been active in this area. These factors make evaluating CEA’s community health work difficult, and I encourage readers to bear in mind that my analysis may have suffered as a result.

Problems

Deprioritizing mid-career (and older) EAs

Community building efforts focus on young EAs, leaving other age groups neglected.

At EA Global 2017, Julia Wise reported:

“Two years ago I interviewed all the EA Global attendees over the age of 40. There were not many of them. I think age is an area where we’re definitely missing opportunities to get more experience and knowledge. One theme I heard from these folks was that they weren’t too keen on what they saw as a know-it-all attitude, especially from people who were actually a lot less experienced and knowledgeable than them in many ways.”[50]

I have not seen any evidence that older attendees’ concerns were prioritized in subsequent conferences. And CEA’s other community building work has prioritized younger EAs in implicit ways (e.g. the original metric for evaluating the CBG program was the number of career changes a group produced) and explicit ways (e.g. CEA is not focusing on “Reaching new mid- or late-career professionals”).

The community health team’s “strategy is to identify ‘thin spots’ in the EA community, and to coordinate with others to direct additional resources to those areas.” But after CEA announced that reaching older EAs was not part of its strategy, I don’t believe any effort was made to have other community builders fill this gap.

Confidentiality Mistakes

The EA community’s contact person, whose responsibilities include fielding confidential requests, has accidentally broken confidentiality in two instances she is aware of.

Missed opportunity to distance (C)EA from Leverage Research

CEA has missed opportunities to distance itself and the EA community from Leverage Research and its sister organization Paradigm Academy, creating reputational risks.

Leverage Research ran the original EA Summits, so some connection to EA was inevitable. However, CEA had plenty of signs that minimizing that connection would be wise. CEA’s 2016 Pareto Fellowship, run by employees closely tied to Leverage, exhibited numerous problems including a very disturbing interview process. And Leverage has had minimal output (and even less transparency) despite investing significant financial and human capital.

Yet in 2018, CEA supported and participated in an EA Summit incubated by employees of Paradigm Academy (Leverage’s sister organization), including a leader of the Pareto Fellowship.[51] A former CEA CEO (who had stepped down from that role less than a year earlier) personally provided some funding.

After the Summit, CEA noted community concerns: “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community. We will address this in a separate post in the near future.” The post was later edited to note “We decided not to work on this post at this time.” CEA’s CEO at the time of the summit (Larissa Hesketh-Rowe) now works for Leverage (as does Kerry Vaughan, the former Executive Director of CEA USA).

Since then, there have been more negative revelations about Leverage:

  • An article in Splinter News released in September 2019 showed leaked emails in which Jonah Bennett, a former Leverage employee who is now editor-in-chief for Palladium Magazine (LinkedIn), was involved with a white nationalist email list, where he among other things made anti-Semitic jokes about a Holocaust survivor, said he “always has illuminating conversations with Richard Spencer”, and complained about someone being “pro-West before being pro-white/super far-right”. Geoff Anders, Leverage’s founder, defended Bennett, writing “I’m happy to count him as a friend.”

  • In October 2021, a former member of the Leverage ecosystem wrote a disturbing account of her experiences. In addition to revealing troubling personal experiences (“I experienced every single symptom on this list of Post-Cult After-Effects except for a handful (I did not experience paradoxical idealization of the leader, self-injury or sexual changes)”), she described worrisome aspects of Leverage as an organization/community: “People (not everyone, but definitely a lot of us) genuinely thought we were going to take over the US government… One of my supervisors would regularly talk about this as a daunting but inevitable strategic reality (“obviously we’ll do it, and succeed, but seems hard”)… The main mechanism through which we’d save the world was that Geoff would come up with a fool-proof theory of every domain of reality.”

While CEA’s Mistakes page now includes content minimizing the relationship between CEA and Leverage, this content was not added until early 2022, well after the revelations above had surfaced. If CEA had released that content in 2018 (when it originally planned to describe the CEA/​Leverage relationship), it would be more credible that CEA recognized the problems with Leverage and took proactive steps to address them.

Poor public communication and missed commitments around EA Librarian project

The EA Librarian Project failed to meet public commitments.

The EA Librarian Project, launched in January 2022, was meant to answer questions about EA, especially “‘Dumb’ questions or questions that you would usually be embarrassed to ask.” The launch post noted “We will aim to publish a thread every 2 weeks with questions and answers that we thought were particularly interesting or useful (to either the community or the question asker). We hope that this will encourage more people to make use of the service.”

These regular updates were not provided. CEA published just a single update with several questions on March 10. On April 21 an EA posted a question asking whether the program was still operating, as they had “submitted a question on March 31 and have not heard anything for 3 weeks now.” The person in charge of the EA Librarian project responded, noting that they had been ill and were unable to “indicate turnaround time right now due to having some of the librarians leave recently. We will certainly aim to answer all submitted questions but I expect that I will close the form this/next week, at least until I work out a more sustainable model.”

The EA Librarian Project never made any subsequent public updates, though people actively using the program were notified it was inactive.[52] The broadest notification that the project had shut down came in a footnote of a general update from the Community Health Team, reading “Since launching the EA Librarian Project, Caleb has become the Interim Project Lead for EA Funds. As a result, the EA Librarian service is no longer accepting new questions.”

This is a disappointing lack of communication regarding a project that was billed as an experiment (which others could presumably learn from if they had relevant information), generated interest at launch, received at least some positive feedback, and was well suited to address worrisome reports about group leaders not answering basic questions they are asked.

Lack of guidance on romantic relationships after problems with Pareto Fellowship

CEA did not offer guidance on power dynamics in romantic relationships for many years, despite evidence of problematic behavior.

In June 2022, Julia Wise published a post on “Power Dynamics Between People in EA.” While I think this post was excellent, I find it problematic that it was published only recently.[53] EA is a community where social, professional, domestic, and romantic lives are often enmeshed, creating significant potential for inappropriate behavior. This concern is more than theoretical: a staff member leading CEA’s 2016 Pareto Fellowship “appeared to plan a romantic relationship with a fellow during the program” in a situation with troubling power dynamics. That experience could have been a learning opportunity (and might well have been, had CEA fulfilled its commitment to publish an evaluation of the Pareto Fellowship), but instead the opportunity was missed.

Conclusion

As Santayana famously wrote, “Those who cannot remember the past are condemned to repeat it.” Throughout this report I’ve demonstrated ways in which this idea applies to EA community building efforts. Past problems have persisted when ignored and eased when they’ve been used to inform new strategies.

I hope this report helps the EA community understand its past better, and in doing so, also helps it build its future better.

  1. ^

    CEA’s concerns about public evaluations include the staff time required to execute them and the fact that many of the most important findings involve assessments of individuals or sensitive situations that would be inappropriate to share. Dedicated MEL staff would certainly help with the first concern. While I recognize that some information couldn’t be shared publicly, I still believe it would be valuable to share the information that could be.

  2. ^

    As one data point, in a recent hiring round, “literally zero of [CEA’s] product manager finalist candidates had ever had the title “product manager” before” and people with experience did not seem to find the job attractive. I’ve been told that CEA made adjustments and was able to find significantly more experienced candidates in a subsequent round; other organizations would presumably benefit from learning how CEA achieved this.

  3. ^

    To be more precise, I think it is valuable in communicating how CEA publicly describes its mistakes, but very bad in terms of giving readers an accurate description of those mistakes.

  4. ^

    “We found it hard to make decisions on first-round applications that looked potentially promising but were outside of our in-house expertise. Many applicants had proposals for studies and charities we felt under-qualified to assess. Most of those applicants we turned down.”

  5. ^

    “The referral system has the significant downside of making it less likely that we encounter projects from people outside of our networks. It also meant that some potentially promising applicants may have failed to develop projects that would have been good candidates for EA Grants funding, because they didn’t know that EA Grants funding was still available.”

  6. ^

    See, for example, attempts to measure GWWC attrition via data from the EA Survey but without the benefit of donations reported to GWWC by its members.

  7. ^

    CEA has written about this dynamic here.

  8. ^

    For example, better grantmaking data could have prevented, or sped the discovery of, EA Grants’ operational problems.

  9. ^

    The Forum’s community page has some of this data, but not in a way that lends itself to analysis. For instance, there is a map of groups, but that data can’t be exported, requiring manual counting and data entry to determine how many groups are in each country.

  10. ^

    CEA has also expressed more general concerns about crowding out other group support projects: “A final concern about CEA trying to cover the entire groups space is that we think this makes it seem like we “own” the space – a perception that might discourage others from taking experimental approaches. We think there’s some evidence that we crowded out others from experimenting in the focus uni space this year [2022].”

  11. ^
  12. ^

    One CEA supporter update described Q2 plans such as “sharing more of our thinking on [the] GWWC blog” and “improving the online signup process for GWWC members”; another described “promoting GWWC” among “the different activities we pursue” and discussed GWWC outreach as an active priority.

  13. ^

    The exact date of the report is unclear. It is cited in a fundraising report describing GWWC’s plans for 2015, suggesting it was written early that year or late in 2014. GWWC has also told me about “a 2014 analysis… done in 2016 because it is an analysis of 2014 pledges who donated in 2015”. This might have been an update of the original analysis; otherwise I’m not sure how to reconcile it with the impact report in the 2015 fundraising document. Regardless of the exact publication date, the report was conducted a long time ago, using donation data from even longer ago.

  14. ^

    As Will MacAskill has observed, simply including Sam Bankman-Fried’s impact would radically increase estimates of the value of a pledge.

  15. ^

    Comment: “I don’t recall seeing the ~70-80% number mentioned before in previous posts but I may have missed it. I’m curious to know what the numbers are for the other cause areas and to see the reasoning for each laid out transparently in a separate post. I think that CEA’s cause prioritisation is the closest thing the community has to an EA ‘parliament’ and for that process to have legitimacy it should be presented openly and be subject to critique.”

  16. ^

    At the time, the opening sentence read “CEA’s overall aim is to do the most we can to solve pressing global problems — like global poverty, factory farming, and existential risk — and prepare to face the challenges of tomorrow.” I doubt many would read this sentence and assume CEA leadership thought existential risk should receive “70-80%” of resources when competing with other cause areas.

  17. ^

    I believe only two changes were made: In early 2021, Global Health and Development was added to the homepage’s reading list (after I observed that it was problematic to omit this highly popular and accessible cause). And the introductory definition of EA was slightly tweaked.

  18. ^

    “These are things like “you must have an application”, “we will give the intro talk, or at least have input into it”, and so on. It was apparent to us well before the conference date that Roxanne/EAO was overburdened, and yet these constraints were created that made the burden even larger.”

  19. ^

    “Roxanne asked other EAO staff to help with the grant application, but they were not able to finish it either… After our trial assignment for EAGx, it sounded to us that Roxanne was on board but needed to make a final determination with the rest of the team. That took a week to come, which was hard for us since we already had a very compressed timeline.”

  20. ^

    “Regardless of everything else, there should have been someone at EAO who was checking in on Roxanne, especially since she is only working part-time.”

  21. ^

    There is some record of a lack of responsiveness to EAGx organizers in 2017.

  22. ^

    “I think most fellows felt that it was really useful in various ways but also weird and sketchy and maybe harmful in various other ways. Several fellows ended up working for Leverage afterwards; the whole thing felt like a bit of a recruiting drive.”

  23. ^

    E.g. here, here, and here

  24. ^

    “We merge expert judgment with statistical models of project success. We used our expertise and the expertise of our advisers to determine a set of variables that is likely to be positively correlated with project success. We then utilize a multi-criteria decision analysis framework which provides context-sensitive weightings to several predictive variables. Our framework adjusts the weighting of variables to fit the context of the projects and adjusts the importance of feedback from different evaluators to fit their expertise…. We evaluate three criteria and 21 sub criteria to determine an overall impact score.”

  25. ^

    “We received over 70 applications before our first official deadline which exceeded our expectations. The quality of the projects was also higher than I expected.”

  26. ^

    “This project has shown promise… We plan to devote additional person-hours to the project to improve our evaluation abilities and to ensure that we evaluate projects more swiftly than we do currently.”

  27. ^

    For example, this post from Charity Entrepreneurship lists possible funding from EAV as a reason why they believed “adequate support exists for this project in its earliest stages.”

  28. ^

    “In July 2020, we shared in a public post that we expected to open applications for our Community Building Grants program “around January 2021”. We eventually decided to deprioritize this and push back the date. However, we didn’t communicate any information about our timeline until March 2021. Several group leaders expressed their disappointment in our communication around this. While we believe we made the right decision in not reopening the program, we should have shared that decision with group leaders much earlier than we did.”

  29. ^

    The forum.effectivealtruism.org/​​community page appears to have been soft-launched (prior to being fully populated) in late 2021 and fully launched in roughly April 2022.

  30. ^

    “We’ve conducted initial parts of the programme evaluation, though haven’t yet done this comprehensively, and we’re not at the moment planning on publishing a public impact evaluation for EA Community Building Grants before the end of 2020. This is mainly because we’ve decided to prioritise other projects (fundraising, grant evaluation, developing programme strategy) above a public impact review. Also, we’ve found both doing the impact evaluation and communicating this externally to be larger projects than we previously thought. In retrospect, I think it was a mistake for me to expect that we’d be able to get this done by August.”

  31. ^

    While CEA has provided some relevant data (e.g. survey data), it usually comes without much historical context, without guidance as to whether the data represents everything that was collected or a cherry-picked subset, and without any control group to assess counterfactual impact.

  32. ^

    For example, in late 2016, a former CEA employee observed (emphasis added):

    “I find it difficult to evaluate CEA especially after the reorganization, but I did as well beforehand. The most significant reason is that I feel CEA has been exceedingly slow to embrace metrics regarding many of its activities, as an example, I’ll speak to outreach.

    Big picture metrics: I would have expected one of CEA’s very first activities, years ago when EA Outreach was established, to begin trying to measure subscription to the EA community. Gathering statistics on number of people donating, sizes of donations, number that self-identify as EAs, percentage that become EAs after exposure to different organizations/​media, number of chapters, size of chapters, number that leave EA, etc. … So a few years in, I find it a bit mindblowing that I’m unaware of an attempt to do this by the only organization that has had teams dedicated specifically to the improvement and growth of the movement. Were these statistics gathered, we’d be much better able to evaluate outreach activities of CEA, which are now central to its purpose as an organization.”

  33. ^

    CEA notes “we selected 22 candidates to fund” but the spreadsheet only lists 21 grantees.

  34. ^

    “Correction: We originally stated that grant recipients had experienced payment delays of “up to six months.” After posting this, we learned of one case where payment was delayed for around a year. It’s plausible that this occurred in other cases as well. We deeply apologize for this payment delay and the harm it caused.”

  35. ^

    For context, EA Grants’ 2018 shortfall of $1.75 million was the same amount granted by the LTFF and EAIF combined in that year.

  36. ^
  37. ^

    The $480,000 granted was ~¾ of the original grant budget. CEA described “withholding the remainder… to further fund some of the current recipients, contingent on performance.” It is unclear whether any such regrants were made.

  38. ^

    “From June 2017 to December 2018 (when I joined CEA), grant management was a part-time responsibility of various staff members who also had other roles. As a result, the program did not get as much strategic and evaluative attention as it needed. Additionally, CEA did not appropriately anticipate the operational systems and capacity needed to run a grantmaking operation, and we did not have the full infrastructure and capacity in place to run the program.”

  39. ^

    As a CEA trustee and lead investigator on Open Philanthropy’s grant to CEA, Beckstead presumably had an unusually good window into CEA’s activity.

  40. ^

    For example: “we may sometimes choose to fund projects where we are unsure of the object-level value of the project, if we think the project will produce useful knowledge for the community” and “We believe that untested strategies could yield significant information value for the effective altruism community, and will fund projects accordingly.”

  41. ^

    CEA classified grants into four categories, but a post meant “to give people a better sense of what kinds of projects we look for, should we run EA Grants rounds in the future” did not provide subtotals for those categories and did not mention that the EA Community Building and Long Term Future categories received 65% and 33% of funds respectively, while Animal Welfare and Global Health and Development each got only 1%.

  42. ^

    Unfortunately this transparency was only for those who sought out or stumbled upon the reports (as opposed to an email going to all donors).

  43. ^

    In April 2022, a question was posted on the EA Forum asking when/​whether the EAIF and LTFF would publish grant reports. A representative of the EAIF responded that the fund “is going to publish its next batch of payout reports before the end of June” and a representative of the LTFF said they thought the fund “will publish a payout report for grants through ~December in the next few weeks” i.e. by mid-May. Both those deadlines passed without new reports being published. The EAIF published a payout report in mid-July (covering grants made between September and December 2021). The LTFF published a payout report in mid-August (covering grants paid out between August and December 2021).

  44. ^

    This page also lumped in slow “communication with grantees” along with “slow grant disbursement”, though it provided no additional information on the former.

  45. ^

    For example, the fund balances for the Animal Welfare Fund seem quite high relative to donations to that fund as reported by another dashboard (which could also be wrong). Fund balances also look quite high for the EAIF and LTFF, but I’ve been told current balances are reasonable given large recent grantmaking. The donations dashboard does not show sufficient donations to accommodate such large grantmaking, possibly because that dashboard omits gifts from large institutional donors (alternatively, the donations dashboard could itself have bad data). If the fund balances and donation dashboards both report accurate data for the Animal Welfare Fund, that would suggest that this fund has probably not been distributing money as quickly as donors would like.

  46. ^

    Since being spun out of CEA, EA Funds appears to have more staff capacity.

  47. ^

    Given historical problems related to Fund Manager capacity, it seems worrisome that the LTFF page currently lists only two managers.

  48. ^

    Per Kerry Vaughan: “My overall guess is that the vast majority of money donated so far has been from people who were already familiar with EA.” This strongly suggests that much of the money donated to EA Funds simply displaced other effective donations.

  49. ^

    In its 2016 Annual Report, CEA noted that the Trust had received donations of £1.3 million in Q1–Q3 of 2016 and that “The amounts donated to the Trust have grown substantially”. In three quarters of 2016, not including Giving Season, the Trust raised more than in all of 2015 (£1.2 million) and approximately triple what it raised in all of 2014. The 2016 figure is particularly notable relative to the ~£2 million moved by EA Funds in its first ~9.5 months, because the Trust was primarily for UK donors and only funded a select group of poverty charities.

  50. ^
  51. ^

    This decision was made by CEA leadership at the time, rather than the community health team specifically. I include it in this section because the decision had implications for community health.

  52. ^

    Users were emailed to notify them that the project was behind schedule, and at some point the “EA Librarian” tag on the EA Forum was changed to “EA Librarian (project inactive)”.

  53. ^

    CEA has had internal guidance on this topic for much longer, possibly introducing it after Pareto.