EA Leaders Forum: Survey on EA priorities (data and analysis)

Thanks to Alexander Gordon-Brown, Amy Labenz, Ben Todd, Jenna Peters, Joan Gass, Julia Wise, Rob Wiblin, Sky Mayhew, and Will MacAskill for assisting in various parts of this project, from finalizing survey questions to providing feedback on the final post.


Clarification on pronouns: “We” refers to the group of people who worked on the survey and helped with the writeup. “I” refers to me; I use it to note some specific decisions I made about presenting the data and my observations from attending the event.


This post is the second in a series of posts in which we aim to share summaries of the feedback we have received about our own work and about the effective altruism community more generally. The first can be found here.


Each year, the EA Leaders Forum, organized by CEA, brings together executives, researchers, and other experienced staffers from a variety of EA-aligned organizations. At the event, they share ideas and discuss the present state (and possible futures) of effective altruism.

This year (responses were collected over a period centered on roughly 1 July), invitees were asked to complete a “Priorities for Effective Altruism” survey, compiled by CEA and 80,000 Hours, which covered the following broad topics:

  • The resources and talents the community most needs

  • How EA’s resources should be allocated between different cause areas

  • Bottlenecks on the community’s progress and impact

  • Problems the community is facing, and mistakes we could be making now

This post is a summary of the survey’s findings (N = 33; 56 people received the survey).

Here’s a list of organizations respondents worked for, with the number of respondents from each organization in parentheses. Respondents included both leadership and other staff (an organization appearing on this list doesn’t mean that the org’s leader responded).

  • 80,000 Hours (3)

  • Animal Charity Evaluators (1)

  • Center for Applied Rationality (1)

  • Centre for Effective Altruism (3)

  • Centre for the Study of Existential Risk (1)

  • DeepMind (1)

  • Effective Altruism Foundation (2)

  • Effective Giving (1)

  • Future of Humanity Institute (4)

  • Global Priorities Institute (2)

  • Good Food Institute (1)

  • Machine Intelligence Research Institute (1)

  • Open Philanthropy Project (6)

Three respondents work at organizations small enough that naming the organizations would be likely to de-anonymize the respondents. Three respondents don’t work at an EA-aligned organization, but are large donors and/or advisors to one or more such organizations.

What this data does and does not represent

This is a snapshot of some views held by a small group of people (albeit people with broad networks and a lot of experience with EA) as of July 2019. We’re sharing it as a conversation-starter, and because we felt that some people might be interested in seeing the data.

These results shouldn’t be taken as an authoritative or consensus view of effective altruism as a whole. They don’t represent everyone in EA, or even every leader of an EA organization. If you’re interested in seeing data that comes closer to this kind of representativeness, consider the 2018 EA Survey Series, which compiles responses from thousands of people.

Talent Needs

What types of talent do you currently think [your organization / EA as a whole] will need more of over the next 5 years? (Pick up to 6)

This question was the same as one asked of Leaders Forum participants in 2018 (see 80,000 Hours’ summary of the 2018 Talent Gaps survey for more).

Here’s a graph showing how the most common responses from 2019 compare to the same categories in the 2018 talent needs survey from 80,000 Hours, for EA as a whole:

And for the respondent’s organization:

The following table contains data on every category (you can see sortable raw data here):


  • Two categories in the 2019 survey were not present in the 2018 survey; these cells were left blank in the 2018 column. (These are “Personal background...” and “High level of knowledge and enthusiasm...”)

  • Because of differences between the groups sampled, I made two corrections to the 2018 data:

    • The 2018 survey had 38 respondents, compared to 33 respondents in 2019. I multiplied all 2018 figures by 33/38 and rounded them to provide better comparisons.

    • After this, the sum of 2018 responses was 308; for all 2019 responses, 351. It’s possible that this indicates a difference in how many things participants thought were important in each year, but it also led to some confusing numbers (e.g. a 2019 category having more responses than its 2018 counterpart, but a smaller fraction of the total responses). To compensate, I multiplied all 2018 figures by 351/308 and rounded them.

    • These corrections roughly cancelled out, with the 2018 sums reduced by roughly 1%, but I opted to include and mention them anyway. Such is the life of a data cleaner.

  • While the groups of respondents in 2018 and 2019 overlapped substantially, there were some new survey-takers this year; shifts in perceived talent needs could partly reflect differences in the views of new respondents, rather than only a shift in the views of people who responded in both years.

  • Some skills were named as important more often in 2019 than in 2018. Those that saw the greatest increase (EA as a whole + respondent’s organization):

    • Economists and other quantitative social scientists (+8)

    • One-on-one social skills and emotional intelligence (+8)

    • The ability to figure out what matters most / set the right priorities (+6)

    • Movement building (e.g. public speakers, “faces” of EA) (+6)

  • The skills that saw the greatest total decrease:

    • Operations (-16)

    • Other math, quant, or stats experts (-6)

    • Administrators / assistants / office managers (-5)

    • Web development (-5)
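As a sanity check, the two scaling corrections described in the notes above can be sketched in a few lines of Python. Only the factors 33/38 and 351/308 come from this post; the example inputs and the exact rounding order are illustrative:

```python
# Sketch of the two corrections applied to the 2018 category counts.
# Only the scaling factors come from the post; inputs are illustrative.
N_2018, N_2019 = 38, 33          # respondents per year
SUM_2018, SUM_2019 = 308, 351    # total responses per year, pre-correction

def correct_2018(count: int) -> int:
    """Scale a 2018 category count for comparison with the 2019 data."""
    count = round(count * N_2019 / N_2018)      # adjust for respondent count
    count = round(count * SUM_2019 / SUM_2018)  # adjust for responses per person
    return count

# Combined factor: (33/38) * (351/308) ≈ 0.990, so corrected 2018 totals
# come out roughly 1% lower than the raw sums.
```

Because the two factors nearly cancel, most corrected counts differ from the raw 2018 counts by at most one or two responses.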

Other comments on talent needs

  • “Some combination of humility (willing to do trivial-seeming things) plus taking oneself seriously.”

  • “More executors; more people with different skills/abilities to what we already have a lot of; more people willing to take weird, high-variance paths, and more people who can communicate effectively with non-EAs.”

  • “I think management capacity is particularly neglected, and relates strongly to our ability to bring in talent in all areas.”


The 2019 results were very similar to those of 2018, with few exceptions. Demand remains high for people with skills in management, prioritization, and research, as well as for experts on government and policy.

Differences between responses for 2018 and 2019:

  • Operations, the area of greatest need in the 2018 survey, is seen as a less pressing need this year (though it still ranked 6th). This could indicate that we’ve begun to succeed at closing the operations skill bottleneck.

    • However, more respondents perceived a need for operations talent for their own organizations than for EA as a whole. It might be that respondents perceive the gap as having closed more for other organizations than it actually has.

  • This year saw an increase in perceived need for movement-building skills and for “one-on-one skills and emotional intelligence”. Taken together, these categories seem to indicate a greater focus on interpersonal skills.

Cause Priorities

Known causes

This year, we asked a question about how to ideally allocate resources across cause areas. (We asked a similar question last year, but with categories that were different enough that comparing the two years doesn’t seem productive.)

The question was as follows:

What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil’s funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.

This table shows the same data as above, with median and quartile data in addition to means. (If you ordered responses from least to greatest, the “lower quartile” number would be one-fourth of the way through the list [the 25th percentile], and the “upper quartile” number would be three-fourths of the way through the list [the 75th percentile].)
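For readers who want to reproduce these summary statistics, here is a minimal sketch using Python’s standard library (the response list is a made-up placeholder, not actual survey data):

```python
# Compute the summary statistics shown in the table for one cause area.
# The responses below are placeholders, not actual survey answers.
import statistics

responses = [5, 10, 10, 15, 20, 25, 40]  # hypothetical % allocations

mean = statistics.mean(responses)
median = statistics.median(responses)
# quantiles(..., n=4) returns the 25th, 50th, and 75th percentiles
lower_quartile, _, upper_quartile = statistics.quantiles(responses, n=4)
```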

Other comments (known causes)

  • “7% split between narrow longtermist work on non-GCR issues (e.g. S-risks), 7% to other short-termist work like scientific research”

  • “3% to reducing suffering risk in carrying out our other work”

  • “15% to explore various other cause areas; 7% on global development and economic growth (as opposed to global *health*); 3% on mental health.”

Our commentary (known causes)

Though many cause areas are not strictly focused on either the short-term or long-term future, one could group each of the specified priorities into one of three categories:

Near-term future: Global health, farm animal welfare, wild animal welfare

Long-term future: Positively shaping AI (shorter or longer timelines), biosecurity and pandemic preparedness, broad longtermist work, other extinction risk mitigation

Meta work: Building the EA community, research on cause prioritisation

With these categories, we can sum each cause to get a sense of the average fraction of EA resources respondents think should go to different areas:

  • Near-term: 23.5% of resources

  • Long-term: 54.3%

  • Meta: 20.3%

(Because respondents had the option to suggest additional priorities, these answers don’t add up to 100%.)
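The grouping arithmetic can be sketched as follows. The per-cause means below are hypothetical placeholders chosen only so that the group totals match the figures above; they are not the actual per-cause survey results:

```python
# Group per-cause mean allocations into the three categories above.
# Per-cause values are placeholders, not actual survey results.
mean_pct = {
    "Global health": 12.0, "Farm animal welfare": 8.0,
    "Wild animal welfare": 3.5,
    "Positively shaping AI": 30.0, "Biosecurity": 10.0,
    "Broad longtermist work": 9.0, "Other extinction risk": 5.3,
    "EA community building": 12.0, "Cause prioritisation research": 8.3,
}
groups = {
    "Near-term": ["Global health", "Farm animal welfare",
                  "Wild animal welfare"],
    "Long-term": ["Positively shaping AI", "Biosecurity",
                  "Broad longtermist work", "Other extinction risk"],
    "Meta": ["EA community building", "Cause prioritisation research"],
}
totals = {name: sum(mean_pct[c] for c in causes)
          for name, causes in groups.items()}
# Totals need not sum to 100%: respondents could name additional priorities.
```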

While long-term work was generally ranked as a higher priority than near-term or meta work, almost every attendee supported allocating resources to all three areas.

Cause X

What do you estimate is the probability (in %) that there exists a cause which ought to receive over 20% of EA resources (time, money, etc.), but currently receives little attention?

Of 25 total responses:

  • Mean: 42.6% probability

  • Median: 36.5%

  • Lower quartile: 20%

  • Upper quartile: 70%

Other comments (Cause X):

  • “I’ll interpret the question as follows: ‘What is the probability that, in 20 years, we will think that we should have focused 20% of resources on cause X over the years 2020-2024?’” (Respondent’s answer was 33%)

  • “The probability that we find the cause within the next five years: 2%” (Respondent’s answer to the original question was 5% that the cause existed at all)

  • “~100% if we allow narrow bets like ‘technology X will turn out to pay off soon.’ With more restriction for foreseeability from our current epistemic standpoint, 70% (examples could be political activity, creating long-term EA investment funds at scale, certain techs, etc.). Some issues with what counts as ‘little’ attention.” (We logged this as 70% in the aggregated data)

  • “10%, but that’s mostly because I think it’s unlikely we could be sure enough about something being best to devote over 20% of resources to it, not because I don’t think we’ll find new effective causes.”

  • “Depends how granularly you define cause area. I think within any big overarching cause such as ‘making AI go well’ we are likely (>70%) to discover new angles that could be their own fields. I think it’s fairly unlikely (<25%) that we discover another cause as large / expansive as our top few.” (Because this answer could have been interpreted as any of several numbers, we didn’t include it in the average)

  • “I object to calling this ‘cause X’, so I’m not answering.”

Finally, since no single cause area’s allocation reached a mean of 20% in the earlier question, 20% was probably too high a bar for “Cause X”: meeting it would make Cause X a higher overall priority for respondents than any existing option. If we ask this question again next year, we’ll consider lowering that bar.

Organizational constraints

Funding constraints

Overall, how funding-constrained is your organization?

(1 = how much things cost is never a practical limiting factor for you; 5 = you are considering shrinking to avoid running out of money)

Talent constraints

Overall, how talent-constrained is your organization?

(1 = you could hire many outstanding candidates who want to work at your org if you chose that approach, or had the capacity to absorb them, or had the money; 5 = you can’t get any of the people you need to grow, or you are losing the good people you have)

Note: Responses from 2018 were taken on a 0-4 scale, so I normalized the data by adding 1 to all scores from 2018.

Other constraints noted by respondents

Including the 1-5 score if the respondent shared one:

  • “Constraints are mainly internal governance and university bureaucracy.” (4)

  • “Bureaucracy from our university, and wider academia; management and leadership constraints.” (3)

  • “Research management constrained. We would be able to hire more researchers if we were able to offer better supervision and guidance on research priorities.” (4)

  • “Constrained on some kinds of organizational capacity.” (4)

  • “Constraints on time, management, and onboarding capacity make it hard to find and effectively use new people.” (4)

  • “Need more mentoring capacity.” (3)

  • “Management capacity.” (5)

  • “Limited ability to absorb new people (3), difficulty getting public attention to our work (3), and limited ability for our cause area in general to absorb new resources (2); the last of these is related to constraints on managerial talent.”

  • “We’re doing open-ended work for which it is hard to find the right path forward, regardless of the talent or money available.”

  • “We’re currently extremely limited by the number of people who can figure out what to do on a high level and contribute to our overall strategic direction.”

  • “Not wanting to overwhelm new managers. Wanting to preserve our culture.”

  • “Limited management capacity and scoped work.”

  • “Management-constrained, and it’s difficult to onboard people to do our less well-scoped work.”

  • “Lack of a permanent CEO, meaning a hiring and strategy freeze.”

  • “We are bottlenecked by learning how to do new types of work and training up people to do that work much more than the availability of good candidates.”

  • “Onboarding capacity is low (especially for research mentorship)”

  • “Institutional, bureaucratic and growth/maturation constraints (2.5)”


Respondents’ views of funding and talent constraints have changed very little within the last year. This may indicate that established organizations have been able to roughly keep up with their own growth (finding new funding/people at the pace that expansion would require). We would expect these constraints to be different for newer and smaller organizations, so the scores here could fail to reflect how EA organizations as a whole are constrained on funding and talent.

Management and onboarding capacity are by far the most frequently noted constraints in the “other” category. They seemed to overlap somewhat, given the number of respondents who mentioned them together.

Bottlenecks to EA impact

What are the most pressing bottlenecks that are reducing the impact of the EA community right now?

These options are meant to refer to different stages in a “funnel” model of engagement. Each represents movement from one stage to the next. For example, “grabbing the interest of people who we reach” implies a bottleneck in getting people who have heard of effective altruism to continue following the movement in some way. (It’s not clear that the options were always interpreted in this way.)

These are the options respondents could choose from:

  • Reaching more people of the right kind (note: this term was left undefined on the survey; in the future, we’d want to phrase this as something like “reaching more people aligned with EA’s values”)

  • Grabbing the interest of people who we reach, so that they come back (i.e. not bouncing the right people)

  • More people taking moderate action (e.g. making a moderate career change, taking the GWWC pledge, convincing a friend, learning a lot about a cause), converted from interested people due to better intro engagement (e.g. better-written content, ease in making initial connections)

  • More dedicated people (e.g. people working at EA orgs, researching AI safety/biosecurity/economics, giving over $1m/year), converted from moderate engagement due to better advanced engagement (e.g. more in-depth discussions about the pros and cons of AI) (note: in the future, we’ll probably avoid giving specific cause areas in our examples)

  • Increasing the impact of existing dedicated people (e.g. better research, coordination, decision-making)

Other notes on bottlenecks:

  • “It feels like a lot of the thinking around EA is very centralized.”

  • “I think ‘reaching more people’ and ‘not bouncing people of the right kind’ would look somewhat qualitatively different from the status quo.”

  • “I’m very tempted to say ‘reaching the right people’, but I generally think we should try to make sure the bottom of the funnel is fixed up before we do more of that.”

  • “Hypothesis: As EA subfields are becoming increasingly deep and specialized, it’s becoming difficult to find people who aren’t intimidated by all the understanding required to develop the ambition to become experts themselves.”

  • “I think poor communications and lack of management capacity turn off a lot of people who probably are value-aligned and could contribute a lot. I think those two factors contribute to EAs looking weirder than we really are, and pose a high barrier to entry for a lot of outsiders.”

  • “A more natural breakdown of these bottlenecks for me would be about the engagement/endorsement of certain types of people: e.g. experts/prestigious, rank-and-file contributors, fans/laypeople. In this breakdown, I think the most pressing bottleneck is the first category (experts/prestigious), and I think it’s less important whether those people are slightly involved or heavily involved.”

Problems with the EA community/movement

Before getting into these results, I’ll note that we collected almost all survey responses before the event began; many sessions and conversations during the event, inspired by this survey, covered ways to strengthen effective altruism. It also seemed to me, subjectively, as though many attendees were cheered by the community’s recent progress, and generally optimistic about the future of EA. (I was onsite for the event and participated in many conversations, but I didn’t attend most sessions and I didn’t take the survey.)

CEA’s Ben West interviewed some of this survey’s respondents, as well as other employees of EA organizations, in more detail. His writeup includes thoughts from his interviewees on the most exciting and promising aspects of EA, and we’d recommend reading it alongside this data (since questions about problems will naturally lead to answers that skew negative).

Here are some specific problems people often mention. Which of them do you think are most significant? (Choose up to 3)

What do you think is the most pressing problem facing the EA community right now?

  • “I think the cluster around vetting and training is significant. Ditto demographic diversity.”

  • “I think a lot of social factors (many of which are listed in your next question: we are a very young, white, male, elitist, socially awkward, and in my opinion often overconfident community) turn off people who would be value-aligned and able to contribute in significant ways to our important cause areas.”

  • “People interested in EA being risk-averse in what they work on, and therefore wanting to work on things that are pretty mapped out and already thought well of in the community (e.g. working at an EA org, EtG), rather than trying to map out new effective roles (e.g. learning about some specific area of government which seems like it might be high-leverage, but about which the EA community doesn’t yet know much).”

  • “Things for longtermists to do other than AI and bio.”

  • “Giving productive and win-generating work to the EAs who want jobs and opportunities for impact.”

  • “Failure to reach people who, if we find them, would be very highly aligned and engaged. Especially overseas (China, India, Arab world, Spanish-speaking world, etc.).”

  • “Hard to say. I think it’s plausibly something related to the (lack of) accessibility of existing networks, vetting constraints, and mentorship constraints. Or perhaps something related to the inflexibility of organizations to change course and throw all their weight into certain problem areas or specific strategies that could have an outsized impact.”

  • “Relationship between EA and longtermism, and how it influences movement strategy.”

  • “Perception of insularity within EA by relevant and useful experts outside of EA.”

  • “Groupthink.”

  • “Not reaching the best people well.”

  • Answers from one respondent:

    • (1) The EA community is too centralized (leading to groupthink)

    • (2) The community has some unhealthy and non-inclusive norms around ruthless utility maximization (leading to burnout and exclusion of people, especially women, who want to have kids)

    • (3) Disproportionate focus on AI (leading to overfunding in that space, and a lot of people getting frustrated because they have trouble contributing there)

    • (4) Too tightly coupled with the Bay Area rationalist community, which has a bad reputation in some circles

What personally most bothers you about engaging with the EA community?

  • “I dislike a lot of online amateurism.”

  • “Abrasive people, especially online.”

  • “Using rationalist vocabulary.”

  • “The social skills of some folks could be improved.”

  • “Insularity, lack of diversity.”

  • “Too buzzword-y (not literally that, but the thing behind it).”

  • “Perceived hostility towards suffering-focused views.”

  • “People aren’t maximizing enough; they’re too quick to settle for ‘pretty good’.”

  • “Being associated with ‘ends justify the means’ type thinking.”

  • “Hubris; arrogance without sufficient understanding of others’ wisdom.”

  • “Online interaction, e.g. on the EA Forum, is time-consuming and offputting.”

  • “Awkward blurring of personal and professional. In-person events mainly feel like work.”

  • “People saying crazy stuff online in the name of EA makes it harder to appeal to the people we want.”

  • “Obnoxious, intellectually arrogant and/or unwelcoming people—I can’t take interested but normie friends to participate, [because EA meetups and social events] cause alienation with them.”

  • “That the part of the community I’m a part of feels so focused on talking about EA topics, and less on getting to know people, having fun, etc.”

  • “Tension between the gatekeeping functions involved in community building work and not wanting to disappoint people; people criticizing my org for not providing all the things they want.”

  • “To me, the community feels a bit young and overconfident: it seems like sometimes being ‘weird’ is overvalued and common sense is undervalued. I think this is related to us being a younger community who haven’t learned some of life’s lessons yet.”

  • “People being judgmental on lots of different axes: some expectation that everyone do all the good things all the time, so I feel judged about what I eat, how close I am with my coworkers (e.g. people thinking I shouldn’t live with colleagues), etc.”

  • “Some aspects of LessWrong culture (especially the norm that saying potentially true things tactlessly tends to reliably get more upvotes than complaints about tact). By this, I *don’t* mean complaints about any group of people’s actual opinions. I just don’t like cultures where it’s socially commendable to signal harshness when it’s possible to make the same points more empathetically.”

Most responses (both those above, and those which respondents asked us not to share) included one or more of the following four “themes”:

  • People in EA, or the movement as a whole, seeming arrogant/overconfident

  • People in EA engaging in rude/socially awkward behavior

  • The EA community and its organizations not being sufficiently professional, or failing to set good standards for work/life balance

  • Weird ideas taking up too much of EA’s energy, being too visible, etc.

Below, we’ve charted the number of times we identified each theme:

Community Mistakes

What are some mistakes you’re worried the EA community might be making? If in five years we really regret something we’re doing today, what is it most likely to be?

  • “The risk of catastrophic negative PR/scandal based on non-work aspects of individual/community behavior.”

  • “Restricting movement growth to focus too closely on the inner circle.”

  • “Overfunding or focusing too closely on AI work.”

  • “Making too big a bet on AI and having it turn out to be a damp squib (which I think is likely). Being short-termists in movement growth — pushing people into direct work rather than building skills or doing movement growth. Not paying enough attention to PR or other X-risks to the EA movement.”

  • “Not figuring out how to translate our worldview into dominant cultural regimes.”

  • “Focusing too much on a narrow set of career paths.”

  • “Still being a community (for which EA is ill-suited) versus a professional association or similar.”

  • “Not being ambitious enough, and not being critical enough about some of the assumptions we’re making about what maximizes long-term value.”

  • “Not focusing way more on student groups; not making it easier for leaders to communicate (e.g. via a group Slack that’s actually used); focusing so much on the UK community.”

  • “Not having an answer for what people without elitist credentials can do.”

  • “Not working hard enough on diversity, or engagement with outside perspectives and expertise.”

  • “Not finding more / better public faces for the movement. It would be great to find one or two people who would make great public intellectual types, and who would like to do it, and get them consistently speaking / writing.”

  • “Careless outreach, especially in politically risky countries or areas such as policy; ill-thought-out publications, including online.”

  • “Not thinking arguments through carefully enough, and therefore being wrong.”

  • “I’m very uncertain about the current meme that EA should only be spread through high-fidelity one-on-one conversations. I think this is likely to lead to a demographic problem and ultimately to groupthink. I think we might be too quick to dismiss other forms of outreach.”

  • “I think a lot of the problems I see could be natural growing pains, but some possibilities:

    • (a) we are overconfident in a particular Bayesian-utilitarian intellectual framework

    • (b) we are too insular and not making enough of an effort to hear and weigh the views of others

    • (c) we are not working hard enough to find ways of holding each other and ourselves accountable for doing great work.”


The most common theme in these answers seems to be the desire for EA to be more inclusive and welcoming. Respondents saw a lot of room for improvement on intellectual diversity, humility, and outreach, whether to distinct groups with different views or to the general population.

The second-most common theme concerned standards for EA research and strategy. Respondents wanted to see more work on important problems and a focus on thinking carefully without drawing conclusions too quickly. If I had to sum up these responses, I’d say something like: “Let’s hold ourselves to high standards for the work we produce.”

Overall, respondents generally agreed that EA should:

  • Improve the quality of its intellectual work, largely by engaging in more self-criticism and challenging some of its prior assumptions (and by promoting norms around these practices).

  • Be more diverse in many ways: in the people who make up the community, the intellectual views they hold, and the causes and careers they care about.

Having read these answers, my impression is that participants hoped that the community would continue to foster the kindness, humility, and openness to new ideas that people associate with the best parts of EA, and that we would make changes when that isn’t happening. (This spirit of inquiry and humility was quite prevalent at the event; I heard many variations on “I wish I’d been thinking about this more, and I plan to do so once the Forum is over.”)

Overall Commentary

Once again, we’d like to emphasize that these results are not meant to be representative of the entire EA movement, or even of the views of, say, the thousand people who are most involved. They reflect a small group of participants at a single event.

Some weaknesses of the survey:

  • Many respondents likely answered these questions quickly, without doing serious analysis. Some responses will thus represent gut reactions, though others likely represent deeply considered views (for example, if a respondent had been thinking for years about issues related to a particular question).

  • The survey included 33 people from a range of organizations, but not all respondents answered each question. The average number of answers across multiple-choice or quantitative questions was 30. (All qualitative responses have been listed, save for responses from two participants who asked that their answers not be shared.)

  • Some questions were open to multiple interpretations or misunderstandings. We think this is especially likely for the “bottleneck” questions, as we did not explicitly state that each option was meant to refer to a stage in the “funnel” model.