EA Leaders Forum: Survey on EA priorities (data and analysis)
Thanks to Alexander Gordon-Brown, Amy Labenz, Ben Todd, Jenna Peters, Joan Gass, Julia Wise, Rob Wiblin, Sky Mayhew, and Will MacAskill for assisting in various parts of this project, from finalizing survey questions to providing feedback on the final post.
***
Clarification on pronouns: "We" refers to the group of people who worked on the survey and helped with the writeup. "I" refers to me; I use it to note some specific decisions I made about presenting the data and my observations from attending the event.
***
This post is the second in a series in which we aim to share summaries of the feedback we have received about our own work and about the effective altruism community more generally. The first can be found here.
Overview
Each year, the EA Leaders Forum, organized by CEA, brings together executives, researchers, and other experienced staffers from a variety of EA-aligned organizations. At the event, they share ideas and discuss the present state (and possible futures) of effective altruism.
This year (over a period centered on roughly 1 July), invitees were asked to complete a "Priorities for Effective Altruism" survey, compiled by CEA and 80,000 Hours, which covered the following broad topics:
The resources and talents most needed by the community
How EA's resources should be allocated between different cause areas
Bottlenecks on the community's progress and impact
Problems the community is facing, and mistakes we could be making now
This post is a summary of the survey's findings (N = 33; 56 people received the survey).
Here's a list of organizations respondents worked for, with the number of respondents from each organization in parentheses. Respondents included both leadership and other staff (an organization appearing on this list doesn't mean that the org's leader responded).
80,000 Hours (3)
Animal Charity Evaluators (1)
Center for Applied Rationality (1)
Centre for Effective Altruism (3)
Centre for the Study of Existential Risk (1)
DeepMind (1)
Effective Altruism Foundation (2)
Effective Giving (1)
Future of Humanity Institute (4)
Global Priorities Institute (2)
Good Food Institute (1)
Machine Intelligence Research Institute (1)
Open Philanthropy Project (6)
Three respondents work at organizations small enough that naming the organizations would be likely to de-anonymize the respondents. Three respondents don't work at an EA-aligned organization, but are large donors and/or advisors to one or more such organizations.
What this data does and does not represent
This is a snapshot of some views held by a small group of people (albeit people with broad networks and a lot of experience with EA) as of July 2019. We're sharing it as a conversation-starter, and because we felt that some people might be interested in seeing the data.
These results shouldn't be taken as an authoritative or consensus view of effective altruism as a whole. They don't represent everyone in EA, or even every leader of an EA organization. If you're interested in seeing data that comes closer to this kind of representativeness, consider the 2018 EA Survey Series, which compiles responses from thousands of people.
Talent Needs
What types of talent do you currently think [your organization / EA as a whole] will need more of over the next 5 years? (Pick up to 6)
This question was the same as one asked of Leaders Forum participants in 2018 (see 80,000 Hours' summary of the 2018 Talent Gaps survey for more).
Here's a graph showing how the most common responses from 2019 compare to the same categories in the 2018 talent needs survey from 80,000 Hours, for EA as a whole:
And for the respondent's organization:
The following table contains data on every category (you can see sortable raw data here):
Notes:
Two categories in the 2019 survey were not present in the 2018 survey; these cells were left blank in the 2018 column. (These are "Personal background..." and "High level of knowledge and enthusiasm...")
Because of differences between the groups sampled, I made two corrections to the 2018 data:
The 2018 survey had 38 respondents, compared to 33 respondents in 2019. I multiplied all 2018 figures by 33/38 and rounded them to provide better comparisons.
After this, the sum of 2018 responses was 308; for all 2019 responses, 351. It's possible that this indicates a difference in how many things participants thought were important in each year, but it also led to some confusing numbers (e.g. a 2019 category having more responses than its 2018 counterpart, but a smaller fraction of the total responses). To compensate, I multiplied all 2018 figures by 351/308 and rounded them.
These corrections roughly cancelled out, with the 2018 sums reduced by roughly 1%, but I opted to include and mention them anyway. Such is the life of a data cleaner.
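For concreteness, here is a minimal sketch of the two corrections described above. The respondent counts (38 and 33) and response totals (308 and 351) come from this post; the example category count is hypothetical.

```python
# Minimal sketch of the 2018-to-2019 normalization described above.
# Respondent counts and response totals come from the post;
# the example category count is hypothetical.

RESPONDENTS_2018, RESPONDENTS_2019 = 38, 33
TOTAL_RESPONSES_2018, TOTAL_RESPONSES_2019 = 308, 351


def normalize_2018(count_2018: int) -> int:
    """Scale a 2018 category count so it can be compared to the 2019 counts."""
    # Correction 1: adjust for the smaller number of respondents in 2019.
    scaled = round(count_2018 * RESPONDENTS_2019 / RESPONDENTS_2018)
    # Correction 2: adjust for the larger total number of responses in 2019.
    scaled = round(scaled * TOTAL_RESPONSES_2019 / TOTAL_RESPONSES_2018)
    return scaled


# Hypothetical example: a category chosen 20 times in 2018.
print(normalize_2018(20))  # the two corrections nearly cancel out
```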
While the groups of respondents in 2018 and 2019 overlapped substantially, there were some new survey-takers this year; shifts in perceived talent needs could partly reflect differences in the views of new respondents, rather than only a shift in the views of people who responded in both years.
Some skills were named as important more often in 2019 than 2018. Those that saw the greatest increase (EA as a whole + respondent's organization):
Economists and other quantitative social scientists (+8)
One-on-one social skills and emotional intelligence (+8)
The ability to figure out what matters most / set the right priorities (+6)
Movement building (e.g. public speakers, "faces" of EA) (+6)
The skills that saw the greatest total decrease:
Operations (-16)
Other math, quant, or stats experts (-6)
Administrators / assistants / office managers (-5)
Web development (-5)
Other comments on talent needs
"Some combination of humility (willing to do trivial-seeming things) plus taking oneself seriously."
"More executors; more people with different skills/abilities to what we already have a lot of; more people willing to take weird, high-variance paths, and more people who can communicate effectively with non-EAs."
"I think management capacity is particularly neglected, and relates strongly to our ability to bring in talent in all areas."
Commentary
The 2019 results were very similar to those of 2018, with few exceptions. Demand remains high for people with skills in management, prioritization, and research, as well as for experts on government and policy.
Differences between responses for 2018 and 2019:
Operations, the area of greatest need in the 2018 survey, is seen as a less pressing need this year (though it still ranked 6th). This could indicate that we've begun to close the operations skill bottleneck.
However, more respondents perceived a need for operations talent for their own organizations than for EA as a whole; respondents may perceive that the gap has closed more for other organizations than it actually has.
This year saw an increase in perceived need for movement-building skills and for "one-on-one skills and emotional intelligence". Taken together, these categories seem to indicate a greater focus on interpersonal skills.
Cause Priorities
Known causes
This year, we asked a question about how to ideally allocate resources across cause areas. (We asked a similar question last year, but with categories that were different enough that comparing the two years doesn't seem productive.)
The question was as follows:
What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil's funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.
This table shows the same data as above, with median and quartile data in addition to means. (If you ordered responses from least to greatest, the "lower quartile" number would be one-fourth of the way through the list [the 25th percentile], and the "upper quartile" number would be three-fourths of the way through the list [the 75th percentile].)
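As an illustration of how these summary statistics are computed, here is a minimal sketch using hypothetical allocation percentages for a single cause area (the actual responses are in the table above):

```python
import numpy as np

# Hypothetical allocation percentages for one cause area (not the actual survey data).
responses = [5, 10, 10, 15, 20, 25, 30, 40]

mean = np.mean(responses)
lower_quartile, median, upper_quartile = np.percentile(responses, [25, 50, 75])

print(f"mean = {mean:.1f}")
print(f"lower quartile (25th percentile) = {lower_quartile}")
print(f"median = {median}")
print(f"upper quartile (75th percentile) = {upper_quartile}")
```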
Other comments (known causes)
"7% split between narrow long-termist work on non-GCR issues (e.g. S-risks), 7% to other short-termist work like scientific research"
"3% to reducing suffering risk in carrying out our other work"
"15% to explore various other cause areas; 7% on global development and economic growth (as opposed to global *health*); 3% on mental health."
Our commentary (known causes)
Though many cause areas are not strictly focused on either the short-term or long-term future, one could group each of the specified priorities into one of three categories:
Short-term future: Global health, farm animal welfare, wild animal welfare
Long-term future: Positively shaping AI (shorter or longer timelines), biosecurity and pandemic preparedness, broad longtermist work, other extinction risk mitigation
Meta work: Building the EA community, research on cause prioritization
With these categories, we can sum the mean allocations for the causes in each group to get a sense of the average fraction of EA resources respondents think should go to different areas:
Short-term: 23.5% of resources
Long-term: 54.3%
Meta: 20.3%
(Because respondents had the option to suggest additional priorities, these answers donât add up to 100%.)
While long-term work was generally ranked as a higher priority than short-term or meta work, almost every attendee supported allocating resources to all three areas.
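Here is a minimal sketch of that aggregation. The per-cause means below are hypothetical, chosen only so that the category totals match the figures quoted above; the actual means appear in the table earlier in this section.

```python
# Hypothetical per-cause mean allocations (percent), chosen so the category
# totals match the figures quoted above. The real means are in the table.
mean_allocation = {
    "Global health": 10.0,
    "Farm animal welfare": 8.0,
    "Wild animal welfare": 5.5,
    "Positively shaping AI": 25.0,
    "Biosecurity and pandemic preparedness": 12.0,
    "Broad longtermist work": 10.0,
    "Other extinction risk mitigation": 7.3,
    "Building the EA community": 12.0,
    "Cause prioritization research": 8.3,
}

categories = {
    "Short-term": ["Global health", "Farm animal welfare", "Wild animal welfare"],
    "Long-term": ["Positively shaping AI", "Biosecurity and pandemic preparedness",
                  "Broad longtermist work", "Other extinction risk mitigation"],
    "Meta": ["Building the EA community", "Cause prioritization research"],
}

for category, causes in categories.items():
    total = sum(mean_allocation[cause] for cause in causes)
    print(f"{category}: {total:.1f}% of resources")
```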
Cause X
What do you estimate is the probability (in %) that there exists a cause which ought to receive over 20% of EA resources (time, money, etc.), but currently receives little attention?
Of 25 total responses:
Mean: 42.6% probability
Median: 36.5%
Lower quartile: 20%
Upper quartile: 70%
Other comments (Cause X):
"I'll interpret the question as follows: 'What is the probability that, in 20 years, we will think that we should have focused 20% of resources on cause X over the years 2020-2024?'" (Respondent's answer was 33%)
"The probability that we find the cause within the next five years: 2%" (Respondent's answer to the original question was 5% that the cause existed at all)
"~100% if we allow narrow bets like 'technology X will turn out to pay off soon.' With more restriction for foreseeability from our current epistemic standpoint 70% (examples could be political activity, creating long-term EA investment funds at scale, certain techs, etc). Some issues with what counts as 'little' attention." (We logged this as 70% in the aggregated data)
"10%, but that's mostly because I think it's unlikely we could be sure enough about something being best to devote over 20% of resources to it, not because I don't think we'll find new effective causes."
"Depends how granularly you define cause area. I think within any big overarching cause such as 'making AI go well' we are likely (>70%) to discover new angles that could be their own fields. I think it's fairly unlikely (<25%) that we discover another cause as large / expansive as our top few." (Because this answer could have been interpreted as any of several numbers, we didn't include it in the average)
"I object to calling this 'cause X', so I'm not answering."
Finally, no individual cause in the resource-allocation question received a mean of 20%, which suggests that 20% was too high a bar for "Cause X": it would have made Cause X a higher overall priority for respondents than any existing option. If we ask this question again next year, we'll consider lowering that bar.
Organizational constraints
Funding constraints
Overall, how funding-constrained is your organization?
(1 = how much things cost is never a practical limiting factor for you; 5 = you are considering shrinking to avoid running out of money)
Talent constraints
Overall, how talent-constrained is your organization?
(1 = you could hire many outstanding candidates who want to work at your org if you chose that approach, or had the capacity to absorb them, or had the money; 5 = you can't get any of the people you need to grow, or you are losing the good people you have)
Note: Responses from 2018 were taken on a 0-4 scale, so I normalized the data by adding 1 to all scores from 2018.
Other constraints noted by respondents
Including the 1-5 score if the respondent shared one:
"Constraints are mainly internal governance and university bureaucracy." (4)
"Bureaucracy from our university, and wider academia; management and leadership constraints." (3)
"Research management constrained. We would be able to hire more researchers if we were able to offer better supervision and guidance on research priorities." (4)
"Constrained on some kinds of organizational capacity." (4)
"Constraints on time, management, and onboarding capacity make it hard to find and effectively use new people." (4)
"Need more mentoring capacity." (3)
"Management capacity." (5)
"Limited ability to absorb new people (3), difficulty getting public attention to our work (3), and limited ability for our cause area in general to absorb new resources (2); the last of these is related to constraints on managerial talent."
"We're doing open-ended work for which it is hard to find the right path forward, regardless of the talent or money available."
"We're currently extremely limited by the number of people who can figure out what to do on a high level and contribute to our overall strategic direction."
"Not wanting to overwhelm new managers. Wanting to preserve our culture."
"Limited management capacity and scoped work."
"Management-constrained, and it's difficult to onboard people to do our less well-scoped work."
"Lack of a permanent CEO, meaning a hiring and strategy freeze."
"We are bottlenecked by learning how to do new types of work and training up people to do that work much more than the availability of good candidates."
"Onboarding capacity is low (especially for research mentorship)"
"Institutional, bureaucratic and growth/maturation constraints (2.5)"
Commentary
Respondents' views of funding and talent constraints have changed very little over the last year. This may indicate that established organizations have been able to roughly keep up with their own growth (finding new funding and people at the pace that expansion requires). We would expect these constraints to be different for newer and smaller organizations, so the scores here could fail to reflect how EA organizations as a whole are constrained on funding and talent.
Management and onboarding capacity were by far the most frequently noted constraints in the "other" category, and they seem to overlap somewhat, given the number of respondents who mentioned them together.
Bottlenecks to EA impact
What are the most pressing bottlenecks that are reducing the impact of the EA community right now?
These options are meant to refer to different stages in a "funnel" model of engagement. Each represents movement from one stage to the next. For example, "grabbing the interest of people who we reach" implies a bottleneck in getting people who have heard of effective altruism to continue following the movement in some way. (It's not clear that the options were always interpreted in this way.)
These are the options respondents could choose from:
Reaching more people of the right kind (note: this term was left undefined on the survey; in the future, we'd want to phrase this as something like "reaching more people aligned with EA's values")
Grabbing the interest of people who we reach, so that they come back (i.e. not bouncing the right people)
More people taking moderate action (e.g. making a moderate career change, taking the GWWC pledge, convincing a friend, learning a lot about a cause) converted from interested people due to better intro engagement (e.g. better-written content, ease in making initial connections)
More dedicated people (e.g. people working at EA orgs, researching AI safety/biosecurity/economics, giving over $1m/year) converted from moderate engagement due to better advanced engagement (e.g. more in-depth discussions about the pros and cons of AI) (note: in the future, we'll probably avoid giving specific cause areas in our examples)
Increase the impact of existing dedicated people (e.g. better research, coordination, decision-making)
Other notes on bottlenecks:
"It feels like a lot of the thinking around EA is very centralized."
"I think 'reaching more people' and 'not bouncing people of the right kind' would look somewhat qualitatively different from the status quo."
"I'm very tempted to say 'reaching the right people', but I generally think we should try to make sure the bottom of the funnel is fixed up before we do more of that."
"Hypothesis: As EA subfields are becoming increasingly deep and specialized, it's becoming difficult to find people who aren't intimidated by all the understanding required to develop the ambition to become experts themselves."
"I think poor communications and lack of management capacity turn off a lot of people who probably are value-aligned and could contribute a lot. I think those two factors contribute to EAs looking weirder than we really are, and pose a high barrier to entry for a lot of outsiders."
"A more natural breakdown of these bottlenecks for me would be about the engagement/endorsement of certain types of people: e.g. experts/prestigious, rank and file contributors, fans/laypeople. In this breakdown, I think the most pressing bottleneck is the first category (experts/prestigious) and I think it's less important whether those people are slightly involved or heavily involved."
Problems with the EA community/movement
Before getting into these results, I'll note that we collected almost all survey responses before the event began; many sessions and conversations during the event, inspired by this survey, covered ways to strengthen effective altruism. It also seemed to me, subjectively, as though many attendees were cheered by the community's recent progress, and generally optimistic about the future of EA. (I was onsite for the event and participated in many conversations, but I didn't attend most sessions and I didn't take the survey.)
CEA's Ben West interviewed some of this survey's respondents, as well as other employees of EA organizations, in more detail. His writeup includes thoughts from his interviewees on the most exciting and promising aspects of EA, and we'd recommend reading that alongside this data (since questions about problems will naturally lead to answers that skew negative).
Here are some specific problems people often mention. Which of them do you think are most significant? (Choose up to 3)
What do you think is the most pressing problem facing the EA community right now?
"I think the cluster around vetting and training is significant. Ditto demographic diversity."
"I think a lot of social factors (many of which are listed in your next question: we are a very young, white, male, elitist, socially awkward, and in my opinion often overconfident community) turn people off who would be value aligned and able to contribute in significant ways to our important cause areas."
"People interested in EA being risk averse in what they work on, and therefore wanting to work on things that are pretty mapped out and already thought well of in the community (e.g. working at an EA org, EtG), rather than trying to map out new effective roles (e.g. learning about some specific area of government which seems like it might be high leverage but about which the EA community doesn't yet know much)."
"Things for longtermists to do other than AI and bio."
"Giving productive and win-generating work to the EAs who want jobs and opportunities for impact."
"Failure to reach people who, if we find them, would be very highly aligned and engaged. Especially overseas (China, India, Arab world, Spanish-speaking world, etc)."
"Hard to say. I think it's plausibly something related to the (lack of) accessibility of existing networks, vetting constraints, and mentorship constraints. Or perhaps something related to inflexibility of organizations to change course and throw all their weight into certain problem areas or specific strategies that could have an outsized impact."
"Relationship between EA and longtermism, and how it influences movement strategy."
"Perception of insularity within EA by relevant and useful experts outside of EA."
"Groupthink."
"Not reaching the best people well."
Answers from one respondent:
(1) EA community is too centralized (leading to groupthink)
(2) the community has some unhealthy and non-inclusive norms around ruthless utility maximization (leading to burnout and exclusion of people, especially women, who want to have kids)
(3) disproportionate focus on AI (leading to overfunding in that space and a lot of people getting frustrated because they have trouble contributing in that space)
(4) too tightly coupled with the Bay Area rationalist community, which has a bad reputation in some circles
What personally most bothers you about engaging with the EA community?
"I dislike a lot of online amateurism."
"Abrasive people, especially online."
"Using rationalist vocabulary."
"The social skills of some folks could be improved."
"Insularity, lack of diversity."
"Too buzzword-y (not literally that, but the thing behind it)."
"Perceived hostility towards suffering-focused views."
"People aren't maximizing enough; they're too quick to settle for 'pretty good'."
"Being associated with 'ends justify the means' type thinking."
"Hubris; arrogance without sufficient understanding of others' wisdom."
"Time-consuming and offputting for online interaction, e.g. the EA Forum."
"Awkward blurring of personal and professional. In-person events mainly feel like work."
"People saying crazy stuff online in the name of EA makes it harder to appeal to the people we want."
"Obnoxious, intellectually arrogant and/or unwelcoming people - I can't take interested but normie friends to participate, [because EA meetups and social events] cause alienation with them."
"That the part of the community I'm a part of feels so focused on talking about EA topics, and less on getting to know people, having fun, etc."
"Tension between the gatekeeping functions involved in community building work and not wanting to disappoint people; people criticizing my org for not providing all the things they want."
"To me, the community feels a bit young and overconfident: it seems like sometimes being 'weird' is overvalued and common sense is undervalued. I think this is related to us being a younger community who haven't learned some of life's lessons yet."
"People being judgmental on lots of different axes: some expectation that everyone do all the good things all the time, so I feel judged about what I eat, how close I am with my coworkers (e.g. people thinking I shouldn't live with colleagues), etc."
"Some aspects of LessWrong culture (especially the norm that saying potentially true things tactlessly tends to reliably get more upvotes than complaints about tact). By this, I *don't* mean complaints about any group of people's actual opinions. I just don't like cultures where it's socially commendable to signal harshness when it's possible to make the same points more empathetically."
Most responses (both those above, and those that respondents asked us not to share) included one or more of the following four "themes":
People in EA, or the movement as a whole, seeming arrogant/overconfident
People in EA engaging in rude/socially awkward behavior
The EA community and its organizations not being sufficiently professional, or failing to set good standards for work/life balance
Weird ideas taking up too much of EA's energy, being too visible, etc.
Below, we've charted the number of times we identified each theme:
Community Mistakes
What are some mistakes you're worried the EA community might be making? If in five years we really regret something we're doing today, what is it most likely to be?
"The risk of catastrophic negative PR/scandal based on non-work aspects of individual/community behavior."
"Restricting movement growth to focus too closely on the inner circle."
"Overfunding or focusing too closely on AI work."
"Making too big a bet on AI and having it turn out to be a damp squib (which I think is likely). Being shorttermists in movement growth - pushing people into direct work rather than building skills or doing movement growth. Not paying enough attention to PR or other X-risks to the EA movement."
"Not figuring out how to translate our worldview into dominant cultural regimes."
"Focusing too much on a narrow set of career paths."
"Still being a community (for which EA is ill-suited) versus a professional association or similar."
"Not being ambitious enough, and not being critical enough about some of the assumptions we're making about what maximizes long-term value."
"Not focusing way more on student groups; not making it easier for leaders to communicate (e.g. via a group Slack that's actually used); focusing so much on the UK community."
"Not having an answer for what people without elitist credentials can do."
"Not working hard enough on diversity, or engagement with outside perspectives and expertise."
"Not finding more / better public faces for the movement. It would be great to find one or two people who would make great public intellectual types, and who would like to do it, and get them consistently speaking / writing."
"Careless outreach, especially in politically risky countries or areas such as policy; ill-thought-out publications, including online."
"Not thinking arguments through carefully enough, and therefore being wrong."
"I'm very uncertain about the current meme that EA should only be spread through high-fidelity 1-on-1 conversations. I think this is likely to lead to a demographic problem and ultimately to groupthink. I think we might be too quick to dismiss other forms of outreach."
"I think a lot of the problems I see could be natural growing pains, but some possibilities:
(a) we are overconfident in a particular Bayesian-utilitarian intellectual framework
(b) we are too insular and not making enough of an effort to hear and weigh the views of others
(c) we are not working hard enough to find ways of holding each other and ourselves accountable for doing great work."
Commentary
The most common theme in these answers seems to be the desire for EA to be more inclusive and welcoming. Respondents saw a lot of room for improvement on intellectual diversity, humility, and outreach, whether to distinct groups with different views or to the general population.
The second-most common theme concerned standards for EA research and strategy. Respondents wanted to see more work on important problems and a focus on thinking carefully without drawing conclusions too quickly. If I had to sum up these responses, I'd say something like: "Let's hold ourselves to high standards for the work we produce."
Overall, respondents generally agreed that EA should:
Improve the quality of its intellectual work, largely by engaging in more self-criticism and challenging some of its prior assumptions (and by promoting norms around these practices).
Be more diverse in many ways: in the people who make up the community, the intellectual views they hold, and the causes and careers they care about.
Having read these answers, my impression is that participants hoped the community would continue to foster the kindness, humility, and openness to new ideas that people associate with the best parts of EA, and would make changes when that isn't happening. (This spirit of inquiry and humility was quite prevalent at the event; I heard many variations on "I wish I'd been thinking about this more, and I plan to do so once the Forum is over.")
Overall Commentary
Once again, we'd like to emphasize that these results are not meant to be representative of the entire EA movement, or even the views of, say, the thousand people who are most involved. They reflect a small group of participants at a single event.
Some weaknesses of the survey:
Many respondents likely answered these questions quickly, without doing serious analysis. Some responses will thus represent gut reactions, though others likely represent deeply considered views (for example, if a respondent had been thinking for years about issues related to a particular question).
The survey included 33 people from a range of organizations, but not all respondents answered each question. The average number of answers across multiple-choice or quantitative questions was 30. (All qualitative responses have been listed, save for responses from two participants who asked that their answers not be shared.)
Some questions were open to multiple interpretations or misunderstandings. We think this is especially likely for the "bottleneck" questions, as we did not explicitly state that each option was meant to refer to a stage in the "funnel" model.