Responses and Testimonies on EA Growth
Follow-up to Monday’s post Why Hasn’t Effective Altruism Grown Since 2015? See discussions on r/ssc, LessWrong, and the EA Forum.
I’m honored to have received responses from Scott Alexander, Katja Grace (AI Impacts), Peter Hurford (Rethink Priorities), and Rob Bensinger (MIRI), among many other insightful replies.
This is a long post, so I’m going to put the summary first:
EA has shifted from “earning to give” to “talent-focus”, leading to less of a mass movement.
By some metrics, EA is continuing to grow. Most notably, non-Open Phil donations to GiveWell causes are way up over the last few years.
Good Ventures is intentionally holding off on increased giving while GiveWell and Open Philanthropy build capacity.
In short: EA is stagnating in some ways, and not in others. Overall, this seems to be according to plan, and is not a failure of growth funding to achieve its stated goals.
EA has shifted from “money-focus” to “talent-focus”, and is bottlenecked by research debt
Scott Alexander provides a useful history: Around 2014, Good Ventures stepped in and started providing a huge amount of money. For comparison, Giving What We Can has recorded $222 million in donations ever, while Good Ventures gives around that much every year. This made “earning to give” much less compelling. [1]
Second, as EA has grown, it’s become harder and harder to rise quickly and get a “good” job at a top institution:
there’s a general sense that most things have been explored, there are rules and institutions, and it’s more of a problem of learning an existing field and breaking into an existing social network rather than being part of a project of building something new.
These are both compelling to me, but we shouldn’t be fatalistic about the latter and just accept that intellectual movements have limited capacity. Effective altruism is aware of the problem, or at least of something similar. From MIRI:
Imagine that you have been tasked with moving a cube of solid iron that is one meter on a side. Given that such a cube weighs ~16000 pounds, and that an average human can lift ~100 pounds, a naïve estimation tells you that you can solve this problem with ~150 willing friends.
But of course, a meter cube can fit at most something like 10 people around it. It doesn’t matter if you have the theoretical power to move the cube if you can’t bring that power to bear in an effective manner. The problem is constrained by its surface area.
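(As a quick sanity check on those figures, assuming a textbook iron density of roughly 7,870 kg/m³, the arithmetic lands in the same ballpark as MIRI’s rounded numbers:)

$$
m \approx 7{,}870\ \mathrm{kg/m^3} \times 1\ \mathrm{m^3} \approx 7{,}870\ \mathrm{kg} \approx 17{,}000\ \mathrm{lb},
\qquad
\frac{17{,}000\ \mathrm{lb}}{100\ \mathrm{lb/person}} \approx 170\ \text{people}.
$$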
What’s the solution? From Distill’s opening essay on Research Debt:
Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next.
...The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It’s entirely possible to build paths and staircases into these mountains. The climb isn’t something to be proud of.
The climb isn’t progress: the climb is a mountain of debt.
So if it’s gotten harder to build something new, we shouldn’t take it as the natural result of building a mountain of knowledge. We should see it as a failure of the community to distill that knowledge, perform the interpretive labor, and build a path for future scholars. Research distillation is one possibility. Another is tackling the problem from new disciplines.
Consider Leopold Aschenbrenner’s Global Priorities Institute paper on Existential risk and growth. He wrote this as an undergraduate, and it’s now cited on the 80,000 Hours problem profile for Economic Growth. Why was this possible? I would argue it’s because academic macroeconomics has been relatively neglected as a tool for EA-relevant problems. There are plenty of RCTs and development economists, but few people seem to be taking this perspective.
That’s not a critique of EA’s diversity. If you take a quick look at GiveWell’s team page, you’ll see a pretty wide variety of academic backgrounds. It’s not like it’s all Oxford philosophers.
But I think the problem persists because of how we conceive of EA-relevant work, and the relative lack of mature institutions. The other shortcut to the top of the mountain is research advising. But generally this requires being part of an existing EA institution, which requires prior research experience, and as Scott noted, it’s just become very difficult to break in.
And in EA’s defense, it’s not as if academia has this solved either. Major universities are basically not growing, and it is becoming harder to get to the top of many fields. See also Benjamin Jones on The Burden of Knowledge.
EA hasn’t stopped growing
Peter Hurford shared a much more comprehensive list of EA growth metrics. Some of them are going up, some are stagnating or going down. Here’s a non-comprehensive list of evidence in favor of growth from 2015-2018:
80k pageviews
EA subreddit
80k newsletter signups
Donations recorded in EA survey
And in favor of stagnation:
Google search interest
Pageviews of Wikipedia page for Effective Altruism
The Life You Can Save web traffic
EA survey respondents who identify as EA
Non-Open Phil GiveWell donations (though these are up recently)
GiveWell unique visitors
Robert Wiblin notes that “It’s worth keeping in mind that some of these rows are 10 or 100 times more important than others”. If you’re really curious, the whole post is great.
Still, it’s complicated. Some of these are measures of active interest. So you might argue that if 70k people read the Wikipedia page for EA every year, that’s a huge win. People who are already part of the community aren’t going to be referencing the page every year, so this implies some kind of growth.
In other cases, I’m less convinced. 80k newsletter signups are increasing but EA survey respondents are stagnant, which I interpret to mean the newsletter is just growing within the existing EA community, rather than reaching new people.
u/xarkn also provides a short analysis showing that posts on the EA Forum are accelerating.
Rob Bensinger says that growth has been deliberately limited because there are downsides to being a larger movement. Peter’s post confirms that the slowdown is partly:
the result of the intentional effort across several groups and individuals in EA over the past few years to focus on high-fidelity messaging and growing the impact of pre-existing EAs and deliberate decisions to stop mass marketing, Facebook advertising, etc. The hope is that while this may bring in fewer total people, the people it does bring in will be much higher quality on average.
On the same theme of “stagnation as a choice”, mingyuan points out that Good Ventures is intentionally holding off on increased giving while GiveWell and OpenPhil build capacity. She links to these helpful posts (1, 2). Rob adds “They also think they can find more valuable things to spend the money on than bednets and GiveDirectly.”
Katja Grace points out that I’m essentially double-counting OpenPhil stagnation, and should really be focusing on GiveWell’s non-OpenPhil numbers, which you’ll note are way up.
Katja also links to this post from late 2020 where Open Phil says “We have allocated an additional $100 million for GiveWell top charities, GiveWell standout charities, and GiveWell Incubation Grants in the year-end period (beyond what we’ve already granted to GiveWell-recommended charities earlier this year).” GiveWell hasn’t released 2020 data so this hasn’t shown up yet, but will presumably look like a large jump over 2019.
She also disputes my interpretation of GWWC growth, and argues that cumulative growth is a win for EA even if the rate is decelerating.
I pretty much agree with all of her points, except the conclusion:
I’m inclined to interpret this evidence as mildly supporting ‘EA has grown since 2015’, but it doesn’t seem like much evidence either way. I think we should at least hold off on taking for granted that EA hasn’t grown since 2015 and trying to explain why.
Even if we’re not sure about the rate of growth, there’s no need to “hold off” trying to explain it. Perhaps I should have titled my post something like:
Why isn’t EA growing even faster?
If EA is growing, why doesn’t it show up in some statistics?
Assuming EA hasn’t grown, why not?
These framings all survive Katja’s criticism, and the bulk of my post makes sense under any of them. If we’re only about 50% sure EA is stagnating, it’s still worth trying to understand why.
If I had to rewrite it today, it would be with the framing “Given that we’re pouring a substantial amount of money into EA community growth, why doesn’t it show up in some of these metrics?”
The other big question, whether having an EA mindset is innate, remains relevant whether we are attempting to grow or merely grow faster.
Other explanations
u/skybrian2 writes that the global health stuff is pretty much solved: we know where donations will have the most impact. At the same time, US politics has gotten a lot worse, and there are now more compelling non-EA causes.
I find this pretty compelling. In 2015 non-EA work didn’t feel quite as critical. In 2017, it felt like fixing US politics was a prerequisite to making progress on many other problems.
u/fubo argues that social graphs are saturated, and that there has been burnout and a demographic shift.
I wrote earlier that the median SSC survey respondent was 30 in 2019. So it’s reasonable to think that over the last few years people started settling down, wanting to own homes, start families and so on. That all assumes it’s a single cohort, but this seems reasonable.
Edit: An earlier version of this post missed some of the comments from the cross-post on EA forum. Here are some highlights:
Brian Tan mentions that the number of local groups is growing quickly. Though again I would note that the rate of change peaked in 2015.
David Moss shares this chart saying “I fear that most of these metrics aren’t measures of EA growth, so much as of reaping the rewards of earlier years’ growth… looking at years in EA and self-reported level of engagement, we can see that it appears to take some years for people to become highly engaged”.
I have a different interpretation, which is that less engaged people are much more likely to churn out of the movement entirely and won’t show up in this data.
Testimonies
There were lots of great responses, some quoted in full, others excerpted below. This is not an exhaustive list, and obviously you shouldn’t assume that it’s a random sample. I won’t provide too much commentary, except to say what surprised me:
Lots of people come across Yudkowsky/LW as teenagers, which aligns with my earlier hypothesis about the SSC survey data.
A few people mentioned being Christians in EA, despite the “godlessness” of the rationalist movement
There seems to be a lot of recent growth among elite university chapters, confirming the notion that EA has pivoted from attempting to be a mass movement to trying to recruit talent
A lot of people confirm that they already felt a strong predisposition towards rationalist/EA ideas and that they have been largely “unable” to convince people. On the other hand, Peter Hurford writes that several people in the EA survey say they were “convinced”. I’m not sure this is actually incompatible, it depends on which people you’re talking about.
Without further ado, here are some of the notes I received.
Sabrina Chwalek
I think Google search results and dollars donated aren’t capturing a huge section of the movement for a simple reason: we’re still students. If you look at CEA’s strategy, they’re focusing “recruitment efforts on students and young professionals” and prioritizing the long-term growth of the movement. Most of the community building grants are going to university groups who are successfully recruiting undergrads, but those undergrads aren’t donating yet. And since most university groups introduce people to EA through their internal fellowships and programming, there’s less of a need to go google what EA is. (For example, Brown EA only started two years ago and >100 people have gone through our fellowship programs.) Plus, CEA’s 2020 report says they doubled the number of attendees at their events last year.
I can’t speak on behalf of CEA and other EA orgs, but it seems plausible that the stagnation in movement growth would coincide with the decision to focus on student groups. I believe CEA is also trying to prioritize attracting highly engaged EAs, rather than just semi-involved community members, which means it’s less important to have a larger number of less-engaged people.
Based off my experience facilitating fellowship groups, people seem to fall into the following buckets for how innate EA is. First, EA all makes sense. These are the rare people who’ve already heard about EA or Less Wrong and are EA-aligned. Second, they’re passionate about one cause area in the movement (global health and development, animal welfare, etc.) and end up being exposed to other cause areas and ideas through our fellowship programs. (The second group of people is significantly more common than the first.) Third, they’re initially put off by EA/don’t understand the movement, but for whatever reason things fall into place during the fellowship. Then the remaining group of people don’t engage for various reasons.
In response to your final paragraph, my own experience joining EA was very natural. I came across 80,000 Hours in high school and definitely felt like “wow, there’s an entire movement of people who see the world like I do and want to do the most good they can.” However, I don’t think EA has to be intuitive for people who can become engaged members. In the beginning, I mostly cared a lot about global health and development and was consequently really hooked by the idea of effective giving and expanding our moral circles. It took me a couple years before I fully came around to longtermism and the significance of x-risks, and I’m still grappling with how longtermism fits into my tentative career options. Spending more time in the movement opened my eyes to a range of other ideas and cause areas.
Anonymous
My origin story is: I read Yudkowsky’s old essay on the FAQ to the meaning of life, and I was instantly converted. Since then I followed LessWrong, and later joined Effective Altruism when that popped up, as well as starting to read SSC.
What is it about it: I think it’s just someone smart reasoning seriously on the most important topics and trying to get it right. To this day I have probably found most of the new interesting ideas I have ever encountered from here.
Marshall Quander Polaris
The changes to my philosophical outlook due to the rationalist community and other subsequent education have just been cleaning up around the edges, i.e. figuring out what exactly I think is a better state of the world, what I think a moral person is, etc.
Anonymous
I first encountered the rationalsphere through the original lesswrong sequences back when I was a child. The feeling that I got when reading them was definitely that they “fit into place”. I don’t recall much in the way of “revolutionary new concepts overturning my conception of the world”, but rather mostly a combination of “that’s roughly what I thought, explained eloquently and in more detail” and “I haven’t really thought seriously about that specific topic before, but yeah, that seems right”.
...I had a similar reaction to EA upon encountering it, thinking something along the line of “yeah, that’s pretty obviously the right thing to do.”
...Think along the lines of someone making a remark about you being “willing to bite that bullet” on some issue, but where you just feel like “bite what bullet?”
Anonymous
Perhaps the big growth in effective altruism is getting to the point where we are spreading the beliefs and spirit of reform without necessarily needing others to join the community. I have no evidence for this, not even anecdotal, but if we can change the charitable sector’s mindset to pay more attention to outcomes assessment and cost effectiveness, that’s a victory even if no one considers themselves an EA.
James
My experience of reading Yudkowsky’s sequences back in 2012 was revelatory. My experience of reading Peter Singer and existential risk stuff matches yours, but that all happened after the sequences.
Anonymous
I was very resistant to EA for a long time. EA acquaintances tried to convert me and failed; the whole view of the world they were promulgating seemed very flat. Maybe a year later, I came across a blog post from a Christian Effective Altruist. In his telling, it was obvious that as a Christian, it is right to give money to the poor—and, furthermore, God doesn’t show favoritism, and neither should you. This blog post basically converted me. It opened up a way to being something-like-an-effective-altruist without having to be a utilitarian. It turned EA from being about math to being about being-turned-outward, receptive to the humanity of everyone else, not just the people closest to you. (This scope is expanding for me also to e.g. animals.)
...All of this is to say, I was probably innately open to being turned toward a pretty radical moral system—the 10% was not the sticking point for me. I am also not innately an Effective Altruist—I am super turned off by utilitarianism and am indeed probably not an Effective Altruist at all. But I did have some sort of conversion experience that resulted in me taking the GWWC pledge—it made caring about e.g. global health over other causes ‘click’ for me in a way that utilitarian arguments failed to.
Trenton Bricken
Personal anecdote: I was raising money for charities, got fed up as they seemed silly and googled “what is the most effective charity”, leading to GiveWell. Separately came across Nick Bostrom and read Superintelligence and it all just made sense. This was all before I knew what EA was or that it was an umbrella for all of this.
Mathematics Grad Student
My experience is that these things were not inside me all along. I never thought “wow, everyone else is an idiot” or “wow, these people get it.” I just thought “oh, cool, that makes sense.”
Before encountering SSC and LessWrong about five years ago, I had opinions like “death is the natural order of things” (I am now anti-nonconsensual-death) and “polyamory is bad” (I have since discovered that I somewhat prefer polyamory to monogamy). These are somewhat extreme examples; on most topics rationalist writing didn’t change my mind so much as just introduce me to ideas I had never thought about before, but which I promptly recognized as valuable/useful. A lot of it feels obvious-in-retrospect but not something I would have thought of on my own. EA (with the exception of AI stuff, which took me a while to come around on) is the thing which felt most immediately deeply obvious as soon as I encountered the idea, but even then it is not something I would have generated independently.
I will add a couple of caveats. One is that although rationalist and EA ideas were novel to me when I encountered them, it is plausible that some people are more “innately receptive” to these ideas than others, and I am toward the more receptive end of the distribution. Another is that I am not as serious about rationalism or EA as some. I don’t regularly read LessWrong and have read only some random subset of the Sequences (though I do regularly or semiregularly read other rationalist blogs, e.g. SSC/ACX and a few of the things on its blogroll). Likewise, I donate 10% but I have not switched careers for EA reasons. (Yet, growth mindset...?)
Long Cao
Yes, it was certainly an eye-opening/enlightenment event. Helped me notice bias in daily reasoning, improved (to some minor extent) my decision making which resulted in positive gain.
I was from a 3rd-world country where education is more like an ornament, and people always do differently from what they say.
Floris Wolswijk
Somewhere in 2013 I spent some time in the summer learning about ethics and Peter Singer gave one lecture during that course. This led me to think something along the lines of “I’m a student living on say 1000 euro per month, if I start work and earn 2000+, I should be able to donate 10% of that”. Reading one or two EA-related books later, I was (and still am) organizing a local group and donating 10% of my income.
All that to say that I think I was very open to the ideas and was/am still using a similar style of reasoning that appealed to me. It does give me some warm fuzzies to donate, but mostly I think it’s the right thing to do and not that difficult to do.
At the same time, with our local group I think that we’ve spoken to 800+ people in person over the years and that has resulted in only marginal changes. Hopefully some will donate more effectively (e.g. family and friends come to me to ask about a recommendation related to climate change), and at least one person has since taken the GWWC Pledge.
Anonymous
Firstly, I should say that most of my exposure to EA has been through Christians in EA, a rare convergence of beliefs that definitely makes me an outlier in this generally godless movement. However, I think I’m pretty engaged in the broader EA movement as both a consumer (blogs, books, podcasts) and as a participant (I’ve been to a student conference, and am part of an EA fellowship right now, and I’m planning to use the PhD I’m currently working on to either donate a lot of money or to directly work on global health or biosecurity).
I saw your reddit post and I agree that EA seems more to express something people already agree with than to change their mind. While some people feel like EA is too demanding, my Christian beliefs and my own optimising mindset had already converged on Singer-style utilitarianism even if I’d never read Singer, and finding EA was actually a relief—firstly, that I’m not alone in this, and secondly, I can stop feeling guilty about everything and instead focus my guilt on a small number of important things.
There are a finite number of people with this kind of mindset and I expect we’ve reached most of them, at least with the current EA pitch. For all that we focus on the rational appeal, the emotional appeal of the ideas is probably more important—you have to care about being effective in addition to caring about helping others, and those don’t seem particularly well correlated.
Most of the potential for future growth is probably going to be in different cultural contexts, but that’s inherently harder and slower since we basically need “translators” to stumble on EA and then get involved. We may be at the point of diminishing returns for more recruitment, but on the other hand maintaining the movement at its current size will require new recruitment. I personally think there’s potential to get more Christians involved by emphasising how well EA complements Christian doctrine, I imagine there are ways to do this in other cultural contexts as well. I actually know a Muslim EA that I’m planning to discuss this with on Sunday in the context of movement building, I think your post will give me lots to think about so I might link to that.
However, I’m not sure if telling people about EA achieves nothing if they don’t join us, as you say people mostly agree with our ideas but just don’t identify with the movement. That suggests to me that we have plenty to work with. This is going to be harder to quantify, but optimistically I’d hope we could make AI research think more about safety, make politics (voters and politicians) more concerned about the long term future, speed up the development of vegan substitutes for animal products, and make the average person think more critically about charity and where the money actually goes. These are ambitious goals, but I don’t think they’re beyond the capabilities of a small but well resourced and committed group. Some progress towards these goals seems likely anyway but hopefully it’s not too arrogant to think we can have an amplifying role.
80,000 Hours probably has a lot of impact: even if people just go through an “EA phase” and then lose interest, they’ll hopefully end up on a more EA trajectory and then stick with it as the path of least resistance. Hopefully. I guess it could instead be world ending if we tell people how powerful AI and molecular biology are, and we convince them to change careers but also talk so much about ethics that it bores them.
James Brooks
I strongly identify with your quote from John Nerst’s Origin story. Since I was a child I have wanted to deeply understand everything, then fix it. I went on a round-the-world trip and while on it wrote about 70k words on what I then called practical philosophy. I got home, searched to see if anything similar had been written, and found the sequences, which contained a superset of what I had already written. Started attending and then quickly organising the LessWrong meetups, went on a CFAR course, and while staying in the Bay Area after that ended up going to an EA meetup because I heard there was good conversation and free pizza (nothing to do with the EA part). My transition from rationality to EA was very slow. I don’t even know why it was slow. I thought many of the ideas were true, maybe it was all too much to take in. I still feel overwhelmed by it all ~8 years later.
I have made my two closest friends EA ‘adjacent’, one literally stopped me mid conversation to set up a monthly donation to GiveWell the first time I mentioned it. They read Slate Star and Zvi … but would not attend a meetup or call themselves EA or anything like that.
I just had a chat with a student today who got super into the idea of 80k within seconds of me starting to explain it. (my explanation was something like “they are a charity that gives career advice to people who want to make a positive difference in the world, they literally have a page of the most important areas to improve the world and advice on finding one that should work well for you” that’s all I needed to say)
Most conversations about EA with people who have not heard of it are a debate about some particular concern, or a general “that seems like a good idea”, and then the conversation moves on, never to return to it.
Footnotes [1] An earlier draft of this post mischaracterized Scott Alexander’s views. See this comment for details, or read the original in full.
Thanks for raising this question about EA’s growth, though I fully agree it would have been better to frame that question more like: “Given that we’re pouring a substantial amount of money into EA community growth, why doesn’t it show up in some of these metrics?” To that end, while I may refer to “growing” or “not growing” below for brevity I mean those terms relative to expectations rather than in an absolute sense. With that caveat out of the way…
There’s a very telling commonality about almost all the possible explanations that have been offered so far. Aside from a fraction of one comment, none of the explanations in the OP or this follow-up post even entertain the possibility that any mistakes by people/organizations in the EA community inhibited growth. That seems worthy of a closer look. We expect an influx of new funding (ca. 2016-17) to translate into growth (with some degree of lag), but only if it is deployed in effective strategies that are executed well. If we see funding but not growth, why not look at which strategies were funded and how well they were executed?
CEA is probably the most straightforward example to look at, as an organization that has run a lot of community building projects and that received much of Open Phil’s initial “EA focus area” funding. Let’s look only at projects from 2015-2019, as more recent projects might not be reflected in growth statistics yet. In rough chronological order, here are some projects where mistakes could have plausibly impacted EA’s growth trajectory.
GWWC grew quickly in its early years (2009-2014), but starting in 2016 CEA made the mistake (their wording) of “underinvesting in GWWC” (the main person responsible for the project had other full-time responsibilities), leading growth to suffer until GWWC was spun off as an independent organization in late 2020.
EA Ventures was publicly launched in early 2015 and used significant amounts of staff and applicant time, but seems to have made only 2 small grants in its entire history. This project was shut down in 2016.
EA Global 2015 elicited prominent negative publicity and complaints from the community about cause representation.
Pareto Fellowship attracted hundreds of applicants, many of whom could have been alienated because “the interview process was unprofessional and made them deeply uncomfortable.” This project was shut down in 2016.
EA Funds received little attention after it launched in 2017, leading to confusion about whether the platform was even operational during its first giving season and repeated complaints about a lack of transparency and infrequent grantmaking.
EA Grants was piloted at small scale in 2017 before announcing plans to grant ~$2.6 million in 2018. Less than half that amount was actually granted, and the majority of money given was via a referral program rather than the open applications that were originally intended. The program was plagued by operational issues, many of them quite basic (“We did not maintain well-organized records of individuals applying for grants, grant applications under evaluation, and records of approved or rejected applications”). This project was shut down in April 2020.
Community Building Grants launched in early 2018, taking resources away from other important community building efforts in a way that probably inhibited growth. Since its inception the CBG program has consistently failed to meet its goals for accepting open applications (including at time of writing). In 2019, CEA had a combined budget for EA Grants and CBGs of $3.7 million, but together they ended up only granting ~$1.1 million due to operational limitations.
Shouldn’t we consider the possibility that one or more of these issues contributed to EA’s underwhelming growth trajectory? The GWWC case seems pretty clear cut. CEA realized they had made a “mistake” (CEA’s word) by under-prioritizing GWWC, and spun the project off. Now that GWWC is getting proper attention, growth has very rapidly picked back up. If GWWC had been prioritized all along, wouldn’t we be seeing better growth metrics than we see now? To his credit, Aaron Gertler (who works at CEA) flagged GWWC’s growth rate as “solid evidence of weaknesses in community building strategy”, though nobody else engaged with this observation.
If we use Occam’s Razor to try and understand a lack of growth, isn’t “we deprioritized a major growth vehicle” a simpler explanation than “there was a shift in which rationalist blogs were popular”, “Google Trends data is wrong”, or “EA is innate, you can’t convert people”? And couldn’t you say the same thing about EA Grants/CBGs granting ~$4 million less than planned (~$2m granted vs. ~$6m planned) in 2018-19? Or the Pareto Fellowship putting “nearly 500” applicants (all people so eager to deepen their engagement with EA that they applied for an intensive fellowship) and “several hundred” semi-finalists through what one interviewee described as: “one of the strangest, most uncomfortable experiences I’ve had over several years of being involved in EA. I’m posting this from notes I took right after the call, so I am confident that I remember this accurately… It seemed like unscientific, crackpot psychology. It was the sort of thing you’d expect from a New Age group or Scientology… The experience left me feeling humiliated and manipulated.”
Let me pause here to say: There’s plenty of merit in some of the other explanations for lower than expected growth that people have raised, like a pivot in emphasis from donations to careers. My list above is also clearly cherry-picked to illustrate a point. CEA has obviously also done a lot of things to help the EA community. And CEA has recognized and taken steps to address many of the problems I mentioned, like spinning off GWWC and EA Funds, changing management teams and strategies, shutting down problematic projects like EA Ventures, the Pareto Fellowship, and EA Grants, etc. And it should be obvious that any organization will make mistakes and that plenty of people beyond CEA have made mistakes since 2015 (I know I have!) Indeed, it could be useful to invite people and organizations to submit mistakes they’ve made over the years so that we can collectively learn from them. Even if these mistakes were reasonable decisions at the time, hindsight is a wonderful teacher if you use it.
But here’s the critical point: even if you don’t think “mistakes were made” is the main explanation for why growth has been slow, it should scare the hell out of you that only one person thought to mention it in passing because clearly some mistakes were made. Recognizing these problems is how you begin to fix them. CEA recognized that GWWC wasn’t getting enough attention and spun it off to remedy that. Lo and behold, in the new Executive Director’s “first three months, the rate of new pledges tripled.”
This refusal to look inward is a blind spot, and a particularly troubling one for a community that prides itself on unbiased thinking, openness to uncomfortable ideas, and strong epistemics. Here’s a (crucial?) consideration: if GWWC had been reasonably staffed and compounding at a higher growth rate for the last five years, if the Pareto Fellowship put ~500 extremely enthusiastic EAs through a normal rather than alienating interview process in 2016, if EA Grants/CBGs had granted an additional ~$4 million into the community as planned (tripling the amount actually granted), if the time and money invested in projects that didn’t get off the ground had gone toward sustainable projects paying ongoing dividends, and if other people and organizations hadn’t made countless other mistakes over the years, what would our community growth metrics look like today? Are we confident all the necessary lessons have been learned from these mistakes, and that we’re not at risk of repeating those mistakes?
I agree that this is a (significant) part of the explanation. For instance, I think there are a variety of things I could have done last year that would have helped our groups support improve more quickly.
Plug: if you have feedback about mistakes CEA is making or has made, I’d love to hear it. You can share thoughts (including anonymously) here.
Thanks Max! It’s incredibly valuable for leaders like yourself to acknowledge the importance of identifying and learning from mistakes that have been made over the years.
I agree that it’s worth asking for an explanation why growth has—if anything—slowed, while funds have vastly increased. One interesting exercise is to categorise the controversies. Some major categories:
Leverage-people violating social norms (which was a mistake)
CEA under-delivering operationally (mistake)
Re-prioritising toward longtermism (not a mistake imo)
Re-prioritising away from community growth (unclear whether a mistake)
The mistakes:
GWWC deprioritised (3,4)
EA Ventures (1,2)
EA Global 2015 PR (1,3)
Pareto Fellowship cultishness (1)
EA Funds deprioritised (2)
EA Grants under-delivered (2)
Community Building Grants under-delivered (2)
But the more interesting question is: “fundamentally, why has growth not sped up?”. In my view, (1-3) did not directly slow growth. But (1-3) led EA community leaders to deprioritise community growth (i.e. led to (4)). And the lack of acceleration is basically because we didn’t allocate that many resources to community growth. At any given time, most community leaders (except perhaps when writing and promoting books) have spent their time on research, business, academia, and grantmaking therein, rather than community growth.
I think that in order to reinvigorate community growth, you really need to work on (4). We need to develop a new vision for what growth of the EA community (or some related community, such as a longtermist one) should look like, and establish that it would be worthwhile, before investing in it. How could it gather elite talent, of the sort that can substantially help with effectively disbursing funds to reduce long-term risks? And so on.
This surprised me—wouldn’t you expect 1 and 2 to directly slow growth somewhat, e.g. by putting people off or causing growth projects to fail to meet their goals? (Maybe you just don’t think these were very significant?)
I think it’s good to ask “what was the relative importance of these factors?”, but the framing of “fundamentally, why has growth not sped up?” seems to be implicitly pushing towards there being a single explanation. I think there were probably multiple significant factors.
Re your last para, I hope that CEA’s plans are part of the answer, although I think it’s good for us to pursue a variety of approaches—e.g. I also think it’s good for GWWC to spread its message somewhat more quickly/widely.
I’d agree that on the current margin, “EAs getting harder to find” could be a factor, as well as some combination of things like (#2-4).
Having said that, what seems like an underrated fact is that although EA outreach (CEA/80k/CFAR) deploys less funding than EA research (FHI/MIRI/CSER/...), a priori, I’d expect outreach to scale better—since research has to be more varied, and requires more specific skills. This leads to the question: why don’t we yet have a proof of concept for turning ~$100M into high quality movement growth? Maybe this is the biggest issue. (#2) can explain why CEA hasn’t offered this. (#4) is more comprehensive, because it explains why 80k and others haven’t.
I largely agree with your categorizations, and how you classify the mistakes. But I agree with Max that I’d expect 1 and especially 2 to impact growth directly.
FWIW, I don’t think it was a mistake to make longtermism a greater priority than it had been (#3), but I do think mistakes were made in pushing this way too far (e.g. having AI/longtermist content dominate the EA Handbook 2.0 at the expense of other cause areas) and I’m concerned this is still going on (see for example the recent announcement that the EA Infrastructure Fund’s new managers are all longtermists.)
To be fair, people pivoted hard toward longtermism because they’re convinced that it’s a much higher priority, which seems correct to me.
Just wanted to say that I really liked this comment, thanks for writing it.
Thanks Jonas!
I agree with a lot of this, and I appreciated both the message and the effort put into this comment. Well-substantiated criticism is very valuable.
I do want to note that GWWC being scaled back was flagged elsewhere, most explicitly in Ben Todd’s comment (currently 2nd highest upvoted on that thread). But for example, Scott’s linked reddit comment also alludes to this, via talking about the decreased interest in seeking financial contributions.
But it’s true that in neither case would I expect the typical reader to come away with the impression that a mistake was made, which I think is your main point and a good one. This is tricky because I think there’s significant disagreement about whether this was a mistake or a correct strategic call, and in some cases I think what is going on is that the writer thinks the call was correct (in spite of CEA now thinking otherwise), rather than simply refusing to acknowledge past errors.
Thanks AGB!
I do think it was a mistake to deprioritize GWWC, though I agree this is open to interpretation. But I want to clarify that my main point is that the EA community seems to have strong and worrisome cultural biases toward self-congratulation and away from critical introspection.
Another factor that has slowed EA’s growth over the years: people are leaving EA because of bad experiences with other EAs.
That’s not some pet theory of mine, that’s simply what EAs reported in the 2019 EA Survey. There were 178 respondents who reported knowing a highly engaged EA who left the community, and by far the most cited factor (37% of respondents) was “bad experiences with other EAs.” I think it’s safe to say these bad experiences are also likely driving away less engaged EAs who could have become more engaged.
One could argue that this factor is minor in the scheme of things, and maybe they’d be right. But this is another clear example where a) there’s something negatively impacting EA’s growth, b) the problem is obviously caused by EAs, and c) the problem didn’t even make the list of possible explanations. I think that supports my argument about EA’s blind spots.
For this to be the explanation presumably intra-EA conflict would not merely need to be driving people away, but driving people away at higher rates than it used to. It’s not clear to me why this would be the case.
It’s also worth noting that highly engaged EAs are quite close socially. It’s possible that many of those 178 people might be thinking of the same people!
My mental model is that in the early years, a disproportionately large portion of the EA community consisted of the community’s founders and their friends (and friends of friends, etc.) This cohort is likely to be very tolerant of the early members’ idiosyncrasies; it’s even possible some of those friendships were built around those idiosyncrasies. As time went on, more people found EA through other means (reading DGB in a university class, hearing about EA on a podcast, etc.) This new cohort is much less likely to tolerate those early idiosyncrasies. (The Pareto Fellowship application process could be a good example).
Good point about double counting as a possible issue. That means we shouldn’t try to infer the number of people driven away. However, we should still be able to say that bad experiences with other EAs are causing more engaged EAs to leave than other factors, since those other factors are also subject to double counting.
That’s true, and those friendships also probably reduced conflict as well—much harder to take a very negative view of someone you know well socially.
I think it’s great to collect responses to posts in this kind of highlights-from-the-comments-on style.
I especially enjoyed the testimonies from various Christians in EA; I fit the “generally godless” descriptor one person used, but I am glad to hear of Christians finding convergence between EA ideas and their own charitable commitments.
This relates to a thing I’ve long wondered, which is whether there ought to be a version of the GWWC pledge that fits better with zakat, i.e. committing 2.5% of one’s surplus wealth to the most effective causes, instead of 10% of income (which fits better with Jewish ma’aser ksafim / Christian tithing traditions).
TLYCS’s experience definitely suggests growth, not stagnation. Between 2014 and 2019, money moved increased at a compound annual growth rate of 75%. Web traffic did flatten out, but mostly after we started focusing more on offline fundraising.
Similarly, Effective Altruism Australia has been moving about 30% more money each year.
https://effectivealtruism.org.au/reports/
I hear similar numbers from most regranting organisations.
Thanks for quoting me, though you cut out the bit where I say:
That said, while differential attrition is a serious problem (particularly in the earlier cohorts), I think it remains clear that people typically take some years to become highly engaged. Clearly very, very few people are highly engaged in their first year or so of EA involvement (only about 5%, or 10 people, were highly engaged from the 2019 cohort in 2019). If EA were gaining highly engaged EAs only at that rate (with the percentage of engaged EAs increasing only due to less engaged EAs dropping out), we’d be in a very poor state, gaining only a handful of engaged EAs per year. It also doesn’t accord with the raw numbers of highly engaged EAs in each of the cohorts: there were 3x as many highly engaged EAs in the 2018 cohort as the 2019, twice as many highly engaged EAs in the 2017 cohort as the 2018 cohort and about 30% more in 2016 as in 2017. And total cohort size hadn’t been decreasing dramatically over that time frame either. So it seems more natural to conclude that EAs are slowly increasing in engagement. As I say, we’ll go into this in more detail in this year’s series though.