Getting People Excited About More EA Careers: A New Community Building Challenge

Summary

The discussion around the current EA org hiring landscape has highlighted that some people may disproportionately favour working at explicit EA organisations over other EA careers. Current community building may contribute to this effect in two ways: by disproportionately rewarding people for wanting to work at EA orgs, and by providing less assistance in finding work at non-EA organisations. This may lead to reduced motivation among people who fail to secure jobs at explicit EA organisations, and to talent misallocation. Strengthening ‘core’ community building within professional groups and finding new signals to quickly demonstrate EA commitment and understanding may help make a larger variety of careers exciting to committed EAs and thus reduce the effect.

Introduction

It has become evident in the past few months that many highly qualified applicants are unable to land a job at explicitly EA-motivated organisations. The recent EA Forum post by an unsuccessful applicant provided an illustrative example of the situation and resonated strongly with readers.

In the discussion that followed, several commenters stated a significant preference for working at EA organisations over pursuing further skills building or relevant work at other institutions. For example, Max Daniel writes:

“I echo the impression that several people I’ve talked to—including myself—were or are overly focussed on finding a job at a major EA org. This applies both in terms of time spent and number of applications submitted, and in terms of more fuzzy notions such as how much status or success is associated with roles. I’m less sure if I disagreed with these people about the actual impact of ‘EA jobs’ vs. the next best option, but it’s at least plausible to me that (relative to my own impression) some of them overvalue the relative impact of ‘EA jobs’.”

The OP of the post also shared this impression:

“My other top options [...] felt like “now I have to work really hard for some time, and maybe later I will be able to contribute to X-risk reduction”. So still like I haven’t quite made it. In contrast, working for a major EA org (in my imagination) felt like “Yes, I have made it now. I am doing super valuable, long-termism-relevant work”.”

It is great news that many people are very excited to work at explicit EA organisations. However, there is a sense that jobs at EA-motivated employers are disproportionately desirable to some people relative to the expected impact of these options. In the discussion there seemed to be little disagreement with the claim that many options at non-EA employers could well be competitive with jobs at explicit EA organisations in terms of expected value. That is, there are many career paths outside EA organisations that would be competitive even if you had secured a job offer from a major EA org, and even after adjusting for the risk that you end up less successful in that path than you’d hoped. However, there still appears to be a substantial preference on a more ‘fuzzy’ level to work at these explicit EA organisations. To be clear: it is awesome that working at EA orgs is desirable to many, but concerning that they may find other EA careers less appealing for the wrong reasons.

If this is the case, it may lead to a misallocation of talent within the community, and to reduced motivation for many other EA career paths as they are perceived (or felt) to be less appealing. In this post I want to explore whether some aspects of the EA community contribute to this effect and how community building efforts could help reduce it. In other words, I want to start a conversation on how we can make EA roles outside explicit EA organisations similarly exciting and emotionally appealing to committed EAs.

There are many more issues arising from the current hiring situation that I am not looking into here (What to do with all the people? Does the high number of applicants affect the counterfactuals for working at an EA org? What emotional effects does a very meritocratic and competitive community have on some members?). These may well be more important, and I am sure others will write more on them (see also: 1 2 3).

The cause

Some reasons for this “EA org bias” have been discussed in the comments (e.g. here and here). Surely an individual’s preferences for a role at an EA org will be influenced by many factors. However, I want to highlight two aspects where the current community may disproportionately favour careers at EA orgs.

Social incentives favour working at explicit EA organisations

In recent years, EA community building efforts have focused on building a core community of very motivated and well-coordinated individuals. As a side effect, there is an increased need to quickly determine how committed a community member is to EA, and how sophisticated their understanding of the core concepts is.

For example, if I were looking for collaborators on a project within, say, biosecurity or politics, I would need to know whether a potential collaborator shared my motivation and commitment before sharing confidential details about the project. Likewise, group leaders may spend more effort on members whom they perceive to be very committed to EA and to have a sound understanding of the concepts, for example by inviting them to more advanced discussion groups or retreats, or by referring them to others in the community.

Working at explicit EA organisations, or at least telling others that you want to, is a fantastic way to signal both commitment to and understanding of EA. For example, if you go to EA Global and introduce yourself as an FHI researcher, you’ll have an easier time finding potential collaborators than if you work at a non-EA think tank, even if you are equally committed and nuanced in your understanding of EA concepts, and possibly even if your think tank is more influential than FHI. If you tell others in your local group that you’re applying to CEA and turning down your offer from Bridgewater, they’ll probably rank you highly in their impact report and safely consider you a community building ‘success’. If you had ‘just’ started an economics PhD, they might be less sure.

It is noteworthy that the effect goes both ways: community builders use the desire to work at an EA organisation as a quick proxy for commitment and understanding, and members use it as a fast way to signal both.

Relatedly, if you’re very excited about EA, you’ll probably want to spend time with people at the cutting edge and discuss the ideas with them. After reading about the ‘core’ and ‘periphery’ of the EA community, and hearing that EA is very much trust- and network-based, it’s understandable that many want to become part of this elusive ‘core’. However, the only clear access point to it is working at an EA organisation.

Finally, I suspect community members sometimes feel they have to ‘justify’ their career choices to others if they are going down more speculative paths. The easiest way to avoid this is to follow a standard EA career path, such as working at an explicitly EA-motivated employer.

Aiming for EA organisations requires less independent work

Another quote from the discussion:

“Somehow, EA positions were the positions I heard about. These were the positions that 80000 hours emailed me about, that I got invitations to apply for, that I found on the websites I was visiting anyway, and so on. Of course, the bar for applying for such a position is lower than if you first have to find positions yourself.”

I actually think 80,000 Hours does a decent job of including many different roles on their job board and linking to relevant resources in their career reviews. However, some career paths still require significant additional effort. If you want to work at EA orgs, you have to learn all about EA. If you want to work at non-EA orgs, you have to both learn all about EA to know what to look for and then learn all about the other field you’re getting into to figure out which opportunities are actually good. Plus, if you’re unsure which EA org to work for, lots of people will have good advice. This looks different if you’re looking for the best option within, say, biosecurity.

I don’t think this is an issue of people being lazy. After learning all the EA concepts, thinking hard about cause prioritisation, testing what types of work would fit you, and so on, it is no small ask to also do a big survey of a non-EA field to figure out the most promising opportunities. Having a mindset of looking for the very best option does not make this task less daunting.

At the same time, community members who build expertise in these fields early may have a particularly large impact by helping others follow them into these paths.

Current community building probably does not help to reduce the effect. For example, speaker events or discussion groups are more likely to focus on questions around cause prioritisation and community building (the domain of EA orgs), and rarely on how to land a marketing job at a big corporation or a place in an economics PhD programme. Moreover, some students will stop attending career-based societies, which in turn means they know less about non-EA careers. I suspect most EA career advice is now given by group leaders (whether full-time or volunteer), who also know a lot more about EA organisations than about promising professors in academia or excellent corporations to work at for skills building.

The problem

The described situation may, among other things, lead to two related negative effects:

  1. Reduced motivation. If someone incorrectly thinks or feels that working at an explicit EA organisation is much higher impact or more desirable than the alternatives, being rejected by these organisations may be particularly painful and dissatisfying. Being less excited about the other options may then also reduce their motivation to work hard in those careers, reducing their overall impact. Or, in the worst case, they might not work hard to figure out the next best option at all, suspecting that ‘real’ EA careers are out of their reach anyway and that their impact does not matter much.

  2. Talent misallocation. Someone who would be very excited to work at an EA organisation might be less excited about taking other training opportunities. For example, imagine a budding cause prioritisation researcher who is rejected in 2019 by the top EA research organisations and thus chooses to work at a different explicit EA organisation, one that focuses on a different cause area or does not provide good mentorship. The budding researcher might well be better off doing a great economics or philosophy PhD with a leading professor if she didn’t have this preference for working at an EA organisation.

A major caveat here is that it’s very unclear to me how significant this “EA org bias” actually is, i.e. how pervasive it is among applicants and community members, and how strongly individuals experience it. In reality the situation will rarely be as clear-cut, and it will be hard to judge in each individual case whether an “EA org bias” is in fact at work. I believe my final conclusions are probably robustly good regardless, but it’s hard to tell how urgent or important resolving the issue is.

Below I list four examples of areas that I perceive to be presently undervalued among recent, highly dedicated EA university graduates. Each probably deserves more justification than I give, but I hope they can illustrate the discussion. Also, I actually have no idea what the majority of highly dedicated EA university graduates are thinking (apart from being one myself), so I may get this totally wrong.

  • Graduate studies with great professors or at top universities in subjects like economics and philosophy (for cause prioritisation research), public policy, international relations and psychology (for policy) or synthetic biology and epidemiology (for biorisk). 80,000 Hours has been talking a lot about this being a good idea, though my impression is it has resonated less than the advice about working for EA organisations.

  • Skills building in non-EA organisations such as start-ups, management consultancies or big corporations to develop operations, marketing and/or management skills. For example, a significant fraction of current CEA staff had substantial professional experience before joining.

  • Earning to give seems to be unpopular among highly dedicated EA university graduates. I share the concern, put forward repeatedly in this forum (e.g. in this thread), that EtG has somehow become ‘uncool’. However, quantitative trading is still one of 80,000 Hours’ priority paths. The career path also has remarkably high upside potential, making it in principle competitive with most other options.

  • A final category includes speculative, non-standard or niche career paths. If people disproportionately aim for standard roles at EA organisations, we may be missing out on unconventional opportunities that might have high expected impact or exploration value, or that contradict mainstream EA community beliefs. The more commonly discussed examples include expertise in nanotechnology, geoengineering or attempts at institutional reform. I suspect there are more interesting examples that, by their very nature as niche paths, I do not know about.

As a side note, we do not appear to have this problem in some other areas, like technical AI safety. Machine learning PhDs or jobs at leading AI research organisations (e.g. DeepMind) seem to be clear paths, can absorb a lot of people and have a lot of community status associated with them. I’m unsure how people interested in AI policy or China feel, as these areas have received significant attention but their concrete career paths seem a bit less clear than those of EA orgs or technical AI safety.

Some ideas to improve

Strong professional ‘core’ communities

If working at explicit EA organisations is currently perceived to be the only way to join the ‘core’ community, the natural response is to build and foster more ‘core’ EA communities around people working in other fields.

An example where this may already be partly in place is EA London’s finance community. A graduate can look forward to taking an earning-to-give role in London, meeting key donors in the community and having interesting discussions there.

However, this is not yet the case if you work for the WHO to become a biosecurity specialist, work at a tech start-up in Berlin for skills building, or do a geoengineering PhD at a university outside an EA hub.

Surely local EA groups are trying to provide part of this, and they will be part of the answer. However, it seems unlikely we’ll have a sufficient density of community members in more than a small number of places.

A possible way of looking at this is to see the goal of getting people to take EA jobs as composed of ‘push’ and ‘pull’ factors. The ‘push’ is the original community building that makes people want to contribute in the first place, and it is what a lot of community building efforts have focused on so far. The comparatively neglected ‘pull’ side is where we make EA careers as attractive as possible.

EA organisations appear to be good at the ‘pull’ side of things, and that is great. Building strong communities around people working at organisations that are not explicitly EA could be one way to improve the ‘pull’ strength of these career paths.

Surely this is not the only benefit of building good professional networks, but I suspect it is an underappreciated one.

An obstacle is that, unlike in local groups, it seems to be nobody’s ‘job’ to worry about the EA community within their professional sphere. I hope that some people will take the initiative anyway, and luckily I can think of a few who are already doing this to some extent.

Finding new signals for commitment & understanding

There are a number of ways we can create new signals in the community. Some ideas include:

  • Retreats. Retreats have become a popular activity of local groups. CEA also runs retreats for certain groups occasionally, but apart from the AI safety camp I am not aware of other initiatives that target professional groups. I would imagine these would be useful, e.g. among people working in government or policy, PhD students in economics/philosophy looking to do cause prioritisation work, people working at corporations or start-ups to build their skills, and many more fields. Being invited to attend such a retreat, or knowing about one and deciding to come, could serve as a decent signal. The main issue, again, seems to be that right now few people would feel responsible for organising such an event (unlike in local groups, which usually have some sort of hierarchy in place).

  • Workshops. These are similar to retreats, but maybe more appealing. For example, the graduate students looking for cause prioritisation work mentioned above could spend a few weeks each year working together on their cause prio ideas.

  • Fellowships. I’m thinking along the lines of Open Phil’s AI fellows program. The GPI fellowships or Open Phil’s grants for people working on GCBRs seem like great ideas along these lines in other areas. However, most GPI fellowships are only available to Oxford students (though the Global Priorities Fellows are another excellent example), and Open Phil’s program didn’t aim to solicit applications from people who had secured funding independently. There seems to be a lot of opportunity to do interesting things here, though I am aware that if the existing organisations were to provide these fellowships, this would come with substantial opportunity costs.

  • Relevant side projects. EA-relevant projects may also be a way to build close connections with collaborators within and outside of EA organisations, create some direct value, and demonstrate your commitment and understanding. For example, graduate students with relevant subject expertise may be able to write analyses within their area of expertise, and people training in operations roles may be able to help with community events or infrastructure.

These are meant more as a starting point for discussion than as a complete analysis. I’m sure others will have more good ideas.

Many thanks to Alex Barry, James Aung, Jan Brauner and Eve McCormick for providing feedback on earlier drafts of this post.