Thanks Sarah, good to know!
Thanks for the reply Toby! These seem like great steps to be taking, and I’m glad they’re in the works.
Since you ask about suggestions, here are some other things I’d be looking at if I were in your shoes.
Working with campus groups to solicit subscriptions. Organizers at Middlebury, a very small school, just reported creating 80 GWWC trial pledges through tabling. Presumably they could garner much higher numbers if they were asking for subscriptions rather than donations.
The total subscriber count has been falling since FTX. I suggest digging into the data on unsubscribers to learn more about this cohort. When did they subscribe? Were they previously engaging with the newsletter, or are people just unsubscribing from something they never looked at in the first place? I think this could provide a valuable data point regarding community retention/attrition, and I hope other projects (e.g. the forum team) would undertake a similar exercise.
There are currently ~60k subscribers, and approximately half of them joined in the short window between June 2016 and February 2017. This was obviously a period of aggressive outreach for the newsletter. The obvious question is: was it worthwhile? Presumably a lot of these folks never engaged with the newsletter or unsubscribed. But if a decent percentage of people who subscribed as a result of the more aggressive marketing went on to behave similarly to “normal” subscribers, that has big implications for the newsletter and other EA outreach activities.
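To make this concrete, here is a minimal sketch of the kind of cohort comparison I have in mind (Python/pandas; the file name, column names, and engagement field are all hypothetical, since I obviously don’t have access to the subscriber data):

```python
import pandas as pd

# Hypothetical export of newsletter subscriber records; file and column names are assumptions.
subs = pd.read_csv("subscribers.csv", parse_dates=["signup_date", "unsubscribe_date"])

# Flag the aggressive-outreach cohort (roughly June 2016 through February 2017).
subs["outreach_cohort"] = subs["signup_date"].between("2016-06-01", "2017-02-28")

# Compare size, attrition, and engagement between the outreach cohort and everyone else.
summary = subs.groupby("outreach_cohort").agg(
    subscribers=("signup_date", "size"),
    share_unsubscribed=("unsubscribe_date", lambda s: s.notna().mean()),
    median_opens=("opens_last_12_months", "median"),  # assumed engagement column
)
print(summary)
```

If the outreach-era cohort’s attrition and engagement look broadly similar to everyone else’s, that’s evidence the aggressive marketing was worthwhile; if they look much worse, that’s important to know before repeating the approach.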
Thanks, edited to fix
It’s great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I’ll be looking for which would give me more confidence that this emphasis on growth will go well:
Prioritizing high-value community assets. Effectivealtruism.org is the de facto landing page for anyone who googles “effective altruism”. Similarly, the EA newsletter is essentially the mailing list that newbies can join. Historically, I think both of these assets have been dramatically underutilized. CEA has acknowledged under-prioritizing effectivealtruism.org (“for several years promoting the website, including through search engine optimization, was not a priority for us”), and the staff member responsible for the newsletter has also acknowledged that it hasn’t been a priority (“the monthly EA Newsletter seems quite valuable, and I had many ideas for how to improve it that I wanted to investigate or test… [But due to competing priorities] I never prioritized doing a serious Newsletter-improvement project. (And by the time I was actually putting it together every month, I’d have very little time or brain space to experiment.)”). Both assets have the potential to be enormously valuable for many different parts of the EA community.
Creation of good, public growth dashboards. I sincerely hope that CEA will prioritize creating and sharing new and improved dashboards measuring community growth, something the community has been asking about for nearly a decade. CEA’s existing dashboard provides some useful information, but it has not always been kept up to date (a recent update helped, but important information like traffic to effectivealtruism.org and Virtual Program attendance is still quite stale). And even if all the information were fresh, the dashboard in its current state does not really answer the key question (“how fast is the community growing?”), nor does it provide context on growth (“how fast is the community growing relative to how fast we want it to grow?”). Measuring growth is a standard activity for businesses, non-profits, and communities; EA has traditionally underinvested in such measurement, and I hope that will change under Zach’s leadership. If growth is “at the core of [CEA’s] mission”, CEA is the logical home for producing a community-wide dashboard and enabling the entire community to benefit from it.
Thoughtful reflection on growth measurement. CEA’s last public effort at measuring growth was an October 2023 memo for the Meta Coordination Forum. This project estimated that 2023 vs. 2022 growth was 30% for early funnel projects, 68% for mid funnel projects, and 8% for late funnel projects. With the benefit of an additional 18 months of metric data and anecdata, these numbers seem highly overoptimistic. Forum usage metrics have been in steady decline since FTX’s collapse in late 2022; EAG and EAGx attendance and connections decreased in 2023 vs. 2022 and again in 2024 vs. 2023; the number of EA Funds donors continues to decline year over year, as it has since FTX’s collapse; Virtual Program attendance is on a multi-year downward trend; etc. There are a lot of tricky methodological issues to sort out in the process of coming up with a meaningful dashboard, and I think the MCF memo generally took reasonable first stabs at addressing them; however, future efforts should be informed by the shortcomings we can now observe in the MCF memo’s approach.
Transparency about growth strategy and targets. I think CEA should publicly communicate its growth strategy and targets to promote transparency and accountability. This post is a good start, though as Zach writes it is “not a detailed action plan. The devil will of course be in those details.” To be clear, I think it’s important that Zach (who is relatively new in his role) be given a long runway to implement his chosen growth strategy. The “accountability” I’d like to see isn’t about e.g. community complaints if CEA fails to hit monthly or quarterly growth targets on certain metrics. It’s about honest communication from CEA about their long-term growth plan and regular public check-ins on whether the empirical data suggests the plan is going well or not. (FWIW, I think CEA has a lot of room for improvement in this area… For instance, I’ve probably read CEA’s public communications much more thoroughly than almost anyone, and I was extremely surprised to see the claim in the OP that “Growth has long been at the core of our mission.”)
Concretely, we’re planning to identify the kinds of signs that would cause us to notice this strategic plan was going in the wrong direction in order to react quickly if that happens. For example, we might get new information about the likely trajectory of AI or about our ability to have an impact with our new strategy that could cause us to re-evaluate our plans.
Glad to hear this is being planned. Do you have an estimate, even if rough, of when this might happen? Will you post the factors you identify publicly to invite feedback?
Relatedly, what do you think the probability is that this change is the wrong decision?
Our crux is likely around how much research a lottery winner would need to conduct to outperform an EA Funds manager.
I’m very skeptical that a randomly selected EA can find higher-impact grant opportunities than an EA Funds manager in an efficient way. I’d find it quite surprising (and a significant indictment of the EA Funds model) if a random EA could outperform a Fund manager (specifically selected for their competence in this area) after putting in a dedicated week of research (say 40 hours). I’d find that a lot more plausible if a lottery winner put in much more time, say a few dedicated months. But then you’re looking at something like 500 hours of dedicated EA time, and you need a huge increase in expected impact over EA Funds to justify that investment for a grant that’s probably in the $100–200k range.
I do agree that a lottery winner can always choose to give through EA Funds, which creates some option value, but I worry about a) winners overestimating their own grantmaking capabilities; b) the time investment of comparing EA Funds to other options; and c) the lack of evidence that any lottery winners are actually deferring to EA Funds (though maybe that’s just an artefact of not knowing where lottery winners have given since 2019).
I think this is likely due to the huge amount of publicity that surrounded the launch of What We Owe the Future feeding into a peak associated with the height of the FTX drama (MAU peaked in November 2022), which has then been followed by over two years of ~steady decline (presumably due to fallout from FTX). Note that the “steady and sizeable decline since FTX bankruptcy” pattern is also evident in EA Funds metrics.
There are currently key aspects of EA infrastructure that aren’t being run well, and I’d love to see EAIF fund improvements. For example, it could fund things like the operation of effectivealtruism.org or the EA Newsletter. There are several important problems with the way these projects are currently being managed by CEA.
Content does not reflect the community’s cause prioritization (a longstanding issue), and there’s no transparency about this. An FAQ on effectivealtruism.org mentions that “CEA created this website to help explain and spread the ideas of effective altruism.” But there’s no mention of the fact that the site’s cause prioritization is influenced by factors including the cause prioritization of CEA’s (explicitly GCR-focused) main funder, which provides ~80% of CEA’s funding.
These projects get lost among CEA’s numerous priorities. For instance, “for several years promoting [effectivealtruism.org], including through search engine optimization, was not a priority for us. Prior to 2022, the website was updated infrequently, giving an inaccurate impression of the community and its ideas as they changed over time.” This lack of attention also led to serious oversights, like Global Poverty (the community’s top priority at the time) not being represented on the homepage for an extended period. Similarly, Lizka recently wrote that “the monthly EA Newsletter seems quite valuable, and I had many ideas for how to improve it that I wanted to investigate or test.” But due to competing priorities, “I never prioritized doing a serious Newsletter-improvement project. (And by the time I was actually putting it together every month, I’d have very little time or brain space to experiment.)”
There doesn’t seem to be much, if any, accountability for ensuring these projects are operated well. These projects are a relatively small part of CEA’s portfolio, CEA is just one part of EV, and EV is undergoing huge changes. So it wouldn’t be shocking if nobody was paying close attention. And perhaps because of that, the limited public data we have available on both effectivealtruism.org and the EA newsletter doesn’t look great. Per CEA’s dashboard (which last updated these figures in June), after years of steady growth the newsletter’s subscriber count has been falling modestly since FTX collapsed. And traffic to ea.org’s “introduction page”, which is where the first two links on the homepage are designed to direct people, is the lowest it has been in at least 7 years and continues to drift downward.
I think all these problems could be improved if EAIF funded these projects, either by providing earmarked funding (and accountability) to CEA or by finding applicants to take these projects over.
To be clear, these aren’t the only “infrastructure” projects that I’d like to see EAIF fund. Other examples include the EA Survey (which IMO is already being done well but would likely appreciate EAIF funding) and conducting an ongoing analysis of community growth at various stages of the growth funnel (e.g. by updating and/or expanding this work).
I’d love to see Oliver Habryka get a forum to discuss some of his criticisms of EA, as has been suggested on Facebook.
From the side of EA, the CEA, and the side of the rationality community, largely CFAR, Leverage faced efforts to be shoved out of both within a short order of a couple of years. Both EA and CFAR thus couldn’t have then, and couldn’t now, say or do more to disown and disavow Leverage’s practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever…
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with—to put it bluntly—the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside.
While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage.
Circa 2018 CEA was literally supporting Leverage/Paradigm on an EA community building strategy event. In August 2018 (right in the middle of the 2017-2019 period at Leverage that Zoe Curzi described in her post), CEA supported and participated in an “EA Summit” that was incubated by Paradigm Academy (intimately associated with Leverage). “Three CEA staff members attended the conference” and the keynote was delivered by a senior CEA staff member (Kerry Vaughan). Tara MacAulay, who was CEO of CEA until stepping down less than a year before the summit to co-found Alameda Research, personally helped fund the summit.
At the time, “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community.” To address those concerns, Kerry committed to “address this in a separate post in the near future.” This commitment was subsequently dropped with no explanation other than “We decided not to work on this post at this time.”
This whole affair was reminiscent of CEA’s actions around the 2016 Pareto Fellowship, a CEA program where ~20 fellows lived in the Leverage house (which they weren’t told about beforehand), “training was mostly based on Leverage ideas”, and “some of the content was taught by Leverage staff and some by CEA staff who were very ‘in Leverage’s orbit’.” When CEA was fundraising at the end of that year, a community member mentioned that they’d heard rumors about a lack of professionalism at Pareto. CEA staff replied, on multiple occasions, that “a detailed review of the Pareto Fellowship is forthcoming.” This review was never produced.
Several years later, details emerged about Pareto’s interview process (which nearly 500 applicants went through) that confirmed the rumors about unprofessional behavior. One participant described it as “one of the strangest, most uncomfortable experiences I’ve had over several years of being involved in EA… It seemed like unscientific, crackpot psychology… it felt extremely cultish… The experience left me feeling humiliated and manipulated.”
I’ll also note that CEA eventually added a section to its mistakes page about Leverage, but not until 2022, and only after Zoe had published her posts and a commenter on Less Wrong explicitly asked why the mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. The mistakes page now acknowledges other aspects of the Leverage/CEA relationship, including that Leverage had “a table at the careers fair at EA Global several times.” Notably, CEA has never publicly stated that working with Leverage was a mistake or that Leverage is problematic in any way.
The problems at Leverage were Leverage’s fault, not CEA’s. But CEA could have, and should have, done more to distance EA from Leverage.
Very interesting, thanks so much for doing this!
I dunno, I think a funder that had a goal and mindset of funding EA community building could just do stuff like fund cause-agnostic EAGs and maintenance of a cause-agnostic effectivealtruism.org, and not really worry about things like the relative cost-effectiveness of GCR community building vs. GHW community building.
Some Prisoner’s Dilemma dynamics are at play here, but there are some important differences (at least from the standard PD setup).
The PD setup presupposes guilt, which really isn’t appropriate in this case. An investigation should be trying to follow the facts wherever they lead. It’s perfectly plausible that, for example, an investigation could find that reasonable actions were taken after the Slack warning, that there were good reasons for not publicly discussing the existence or specifics of those actions, and that there really isn’t much to learn from the Slack incident. I personally think other findings are more likely, but the whole rationale for an independent investigation is that people shouldn’t have to speculate about questions we can answer empirically.
People who aren’t “guilty” could “defect” and do so in a way where they wouldn’t be able to be identified. For example, take someone from the EA leaders Slack group who nobody would expect to be responsible for following up about the SBF warnings posted in that group. That person could provide investigators a) a list of leaders in the group who could reasonably be expected to follow-up and b) which of those people acknowledged seeing the Slack warnings. They could do so without compromising their identity. The person who discussed the Slack warnings with the New Yorker reporter basically followed this template.
Re: your comment that “if other prisoners strongly oppose cooperation, they may find a way to collectively punish those who do defect”, this presumably doesn’t apply to people who have already “defected”. For instance, if Tara has a paper trail of the allegations she raised during the Alameda dispute and shared that with investigators, I doubt that would burn any more bridges with EA leadership than she’s already burned.
I agree this would be a big challenge. A few thoughts…
An independent investigation would probably make some people more likely to share what they know. It could credibly offer them anonymity while still granting proper weight to their specific situation and access to information (unlike posting something via a burner account, which would be anonymous but less credible). I imagine contributing to a formal investigation would feel more comfortable to a lot of people than weighing in on forum discussions like this one.
People might be incentivized to participate out of a desire not to have the investigation publicly report “person X declined to participate”. I don’t think publicly reporting that would be appropriate in all cases where someone declined to participate, but I would support that in cases where the investigation had strong reasons to believe the lack of participation stemmed from someone wanting to obscure their own problematic behavior. (I don’t claim to know exactly where to draw the line for this sort of thing).
To encourage participation, I think it would be good to have CEA play a role in facilitating and/or endorsing (though maybe not conducting) the investigation. While this would compromise its independence to some degree, that would probably be worth it to provide a sort of “official stamp of approval”. That said, I would still hope other steps would be taken to help mitigate that compromise of independence.
As others have noted, some people would likely view participation as the right thing to do.
Have you directly asked these people if they’re interested (in the headhunting task)? It’s sort of a lot to just put something like this on someone’s plate (and it doesn’t feel to me like a-thing-they’ve-implicitly-signed-up-for-by-taking-their-role).
I have not. While nobody in EA leadership has weighed in on this explicitly, the general vibe I get is “we don’t need an investigation, and in any case it’d be hard to conduct and we’d need to fund it somehow.” So I’m focusing on arguing the need for an investigation, because without that the other points are moot. And my assumption is that if we build sufficient consensus on the need for an investigation, we could sort out the other issues. If leaders think an investigation is warranted but the logistical problems are insurmountable, they should make that case and then we can get to work on seeing if we can actually solve those logistical problems.
surely the investigation should have remit to add questions as it goes if they’re warranted by information it’s turned up?
Yeah, absolutely. What I had in mind when I wrote this was this excerpt from an outstanding comment from Jason on the Mintz investigation; I’d hope these ideas could help inform the structure of a future investigation:
How The Investigation Could Have Actually Rebuilt Lost Trust and Confidence
There was a more transparent / credible way to do this. EVF could have released, in advance, an appropriate range of specific questions upon which the external investigator was being asked to make findings of fact—as well as a set of possible responses (on a scale of “investigation rules this out with very high confidence” to “investigation shows this is almost certain”). For example—and these would probably have several subquestions each—one could announce in advance that the following questions were in scope and that the investigator had committed to providing specific answers:
Did anyone associated with EVF ever raise concerns about SBF being engaged in fraudulent activity? Did they ever receive any such concerns?
Did anyone associated with EVF discourage, threaten, or seek to silence any person who had concerns about illegal, unethical, or fraudulent conduct by SBF? (cf. the “Will basically threatened Tara” report).
When viewed against the generally-accepted norms for donor vetting in nonprofits, was anyone associated with EVF negligent, grossly negligent, or reckless in evaluating SBF’s suitability as a donor, failing to raise concerns about his suitability, or choosing not to conduct further investigation?
That kind of pre-commitment would have updated my faith in the process, and my confidence that the investigation reached all important topics. If EVF chose not to release the answers to those questions, it would have known that we could easily draw the appropriate inferences. Under those circumstances—but not the actual circumstances—I would view willingness to investigate as a valuable signal.
Here’s an update from CEA’s operations team, which has been working on updating our practices for handling donations. This also applies to other organizations that are legally within CEA (80,000 Hours, Giving What We Can, Forethought Foundation, and EA Funds).
“We are working with our lawyers to devise and implement an overarching policy for due diligence on all of our donors and donations going forward.
We’ve engaged a third party who now conducts KYC (know your client) due diligence research on all major donors (>$20K a year).
We have established a working relationship with TRM who conduct compliance and back-tracing for all crypto donations.”
I honestly doubt that this process would have, or should have, flagged anything about SBF. But I can imagine it helping in other cases, and I think it’s important for CEA to actually be following its stated procedures.
I hope that the “overarching policy for due diligence on all of our donors” that was put together post-Delo in 2021 was well designed. But it’s also worth noting that Zach discussed “increasing the rigor of donor due diligence” in 2023. Maybe the 2023 improvements took the process from good to great. Maybe they suggest that the 2021 policies weren’t very good. It’d be great for the new and improved policy, and how it differs from the previous policy, to be shared (as Zach has suggested it will be) so other orgs can leverage it and so the entire community can understand what specific improvements have been made post-FTX.
That may well have been OP’s thinking and they may have been correct about the relative cost effectiveness of community building in GCR vs. GHW. But that doesn’t change the fact that this funding strategy had massive (and IMO problematic) implications for the incentive structure of the entire EA community.
I think it should be fairly uncontroversial that the best way to align the incentives of organizations like CEA with the views and values of the broader community would be if they were funded by organizations/program areas that made decisions using the lens of EA, not subsets of EA like GCR or GHW. OP is free to prioritize whatever it wants, including prioritizing things ahead of aligning CEA’s incentives with those of the EA community. But as things stand significant misalignment of incentives exists, and I think it’s important to acknowledge and spread awareness of that situation.
Just to clarify, I agree that EA should not have been expected to detect or predict FTX’s fraud, and explicitly stated that[1]. The point of my post is that other mistakes were likely made, we should be trying to learn from those mistakes, and there are worrisome indications that EA leadership is not interested in that learning process and may actually be inhibiting it.
^ “I believe it is incredibly unlikely that anyone in EA leadership was aware of, or should have anticipated, FTX’s massive fraud.”
Thanks Angelina for your engagement and your thoughtful response, and sorry for my slow reply!
Re: dashboards, I’m very sympathetic to the difficulties of collecting metrics from across numerous organizations. It would be great to see what we can learn from that broader data set, but if that is too difficult to realistically keep up to date then the broader dashboard shouldn’t be the goal. The existing CEA dashboard has enough information to build a “good enough” growth dashboard that could easily be updated and would be a vast upgrade to EA’s understanding of its growth.
But for that to happen, the dashboard would need to transition from a collection of charts showing metrics for different program areas into something that actually measures growth rates in those metrics and program areas over different time frames, shows how those growth rates have evolved, aggregates and compares them across metrics and time frames, and summarizes the results. (IMO you could even drop some of the less important metrics from the current dashboard. Ideally you would also add important and easily/consistently available metrics like Google search activity for EA and Wikipedia page views.)
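For what it’s worth, the core computation involved is simple. Here’s a minimal sketch (Python/pandas, against a hypothetical table of monthly values per metric; the file and column names are assumptions, not anything CEA publishes) of how a dashboard could report growth rates per metric across several time frames:

```python
import pandas as pd

# Hypothetical monthly metrics table: one row per month, one column per metric
# (e.g. forum MAU, newsletter subscribers, EAG connections). Names are assumptions.
metrics = pd.read_csv("monthly_metrics.csv", index_col="month", parse_dates=["month"])

def growth_rate(series: pd.Series, months: int) -> float:
    """Percentage change over the trailing `months`-month window."""
    return series.iloc[-1] / series.iloc[-1 - months] - 1

# One growth rate per metric per time frame, ready to aggregate, compare, or chart over time.
windows = {"3m": 3, "12m": 12, "36m": 36}
summary = pd.DataFrame(
    {label: metrics.apply(growth_rate, months=n) for label, n in windows.items()}
)
print(summary.round(3))
```

The point isn’t the code; it’s that once the underlying metrics live in one table, reporting growth rates (and how they’ve trended) is a small amount of work relative to the value of the community actually knowing them.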
Re: transparency around growth targets, let me explain why “I was extremely surprised to see the claim in the OP that ‘Growth has long been at the core of our mission.’” In my experience, organizations that have growth at the core of their mission won’t shut up about growth. It’s the headline of their communications, not in a vague sense, but in a well-defined and quantified sense (i.e. “last quarter our primary metric, defined in such and such a way, grew at x%”). There’s an emphasis on understanding the specific drivers of, and bottlenecks to, growth.
In contrast, the community has been expressing confusion at CEA’s unwillingness to measure growth for nearly a decade. We’ve seen remarkably little communication from CEA about how fast it believes the community is growing or how it even thinks about measuring it. Your post estimating growth rates is an exception, but even that was framed as a “first stab”, it left important methodological questions unresolved, and has since been abandoned. If growth is so important to CEA, why don’t we know what CEA thinks EA’s growth rate has been the last several years? And if, as Zach says in the OP, growth has been “deprioritized” post-FTX and “during 2024, we explicitly deprioritized trying to grow the EA community”, why weren’t these decisions clearly communicated at the time?
CEA will at times mention that a specific program area or two has experienced rapid growth, but those mentions typically occur in a vacuum, without any context about how fast other programs are growing (which can make it seem like cherry-picking). When CEA has talked about its high-level strategy, I haven’t drawn the conclusion that growth was “at the core of the mission”; the focus has been more on things like “creating and sustaining high-quality discussion spaces.” And the strategy has often seemed to place more emphasis on targeting particularly high-leverage groups (e.g. elite universities) than on more scalable approaches (e.g. targeting universities that are both good and big, prioritizing virtual programs, etc.). In my view, CEA has focused much more on throttling community growth back to levels it views as healthy than on growing the community or building the capacity to grow faster in a healthy way. Maybe that was a good decision, but I see it as very different from placing growth at the core of the mission.
Re: the intersection of community assets and transparency around growth strategy: since I have your ear, I want to point out a problem that I really hope you’ll address.
On its “mistakes” page, CEA acknowledges that “At times, we’ve carried out projects that we presented as broadly EA that in fact overrepresented some views or cause areas that CEA favored. We should have either worked harder to make these projects genuinely representative, or have communicated that they were not representative”. The page goes on to list examples of this mistake that span a decade.
Right now, under “who runs this website”, the effectivealtruism.org site simply mentions CEA and links to CEA’s website. If someone looks at the “mission” (previously “strategy”) page on CEA’s site, in the “how we think about moderation” section one learns that “When representing this diverse community we think that we have a duty to be thoughtful about how we approach moderation and content curation… We think that we can do this without taking an organizational stance on which cause or strategy is most effective.”
It is only if one then clicks through to a more detailed post about moderation and curation that one learns that “Of the cause-area-specific materials, roughly 50% focuses on existential risk reduction (especially AI risk and pandemic risk), 15% on animal welfare, and 20% on global development, and 15% on other causes (including broader longtermism).”
Yet even that more detailed page does not explain that the top “Factors that shape CEA’s cause prioritization… (and, for example, why AI safety currently receives more attention than other specific causes)” are: “the opinions of CEA staff”, “our funders” (“The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies”), and “The views of people who have thought a lot about cause prioritization”, but that the views of the EA community are not among these factors. This information can only be found in a forum post Zach wrote, which isn’t linked anywhere on CEA’s website. So someone coming from effectivealtruism.org would have no way to find it.
I hope that part of prioritizing community assets like effectivealtruism.org will include transparency around how/why the content those assets use is created. The status quo looks to me like it’s just continuing the mistakes of the past.