After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
(I am writing this post under a pseudonym because I don’t want potential future non-EA employers to find this with a quick Google search. Initially my name could be found on the CV linked in the text, but after this post was shared much more widely than I had expected, I got cold feet and removed it.)
In the past 12 months, I applied for 20 positions in the EA community. I did not get a single offer. At the end of this post, I list all those positions and how much time I spent on each application process. Before that, I write about why I think more posts like this could be useful.
Please note: The positions were all related to long-termism, EA movement building, or meta-activities (e.g. grant-making). To stress this again, I did not apply for any positions in e.g. global health or animal welfare, so what I’m going to say might not apply to these fields.
Costs of applications
Applying has considerable time costs. Below, I estimate that I spent 7-8 weeks of full-time work on application processes alone. I guess it would be roughly twice as much if I factored in things like searching for positions, deciding which positions to apply for, or researching visa issues. (Edit: Some organisations reimburse applicants for time spent on work tests/trials. I got paid in 4 of the 20 application processes. I might have gotten paid in more if I had advanced further.)
At least for me, handling multiple rejections was mentally challenging. Additionally, the process may foster resentment towards the EA community. I am aware the following statement is super inaccurate and no one is literally saying that, but sometimes this is the message I felt I was getting from the EA community:
“Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constrained… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.”
Why I think more posts like this would have been useful for me
Overall, I think it would have helped me to know just how competitive jobs in the EA community (long-termism, movement building, meta-stuff) are. I think I would have been more careful in selecting the positions I applied for, and I would probably have started exploring other ways to have an impactful career earlier. Or maybe I would have applied to the same positions, but with lower expectations and less of a feeling of being a total loser who will never contribute anything towards making the world a better place after being rejected once again 😊
Of course, I am just one example, and others will have different experiences. For example, I could imagine that it is easier to get hired by an EA organisation if you have work experience outside of research and hospitals (although many of the positions I applied for were in research or research-related).
However, I don’t think I am a very special case. I know several people who fulfil all of the following criteria:
- They studied/are studying at postgraduate level at a highly competitive university (like Oxford) or in a highly competitive subject (like medical school)
- They are within the top 5% of their course
- They have impressive extracurricular activities (like leading a local EA chapter, having organised successful big events, peer-reviewed publications while studying, …)
- They are very motivated and EA aligned
- They applied for at least 5 positions in the EA community and got rejected in 100% of the cases.
I think I also fulfil all these criteria. Here is my CV roughly at the time when I was doing the applications. It sports such features as ranking 16th out of around 6000 German medical students, and 8 peer-reviewed publications while studying.
Without further ado, here are all the …
Positions I got rejected from in the last 12 months
I also include the stage at which I was rejected and how much time I invested in each application process (mostly work tests, but also researching organisations, adapting personal statements, and preparing for interviews; I am counting “lost productivity” here, with travel time weighted at around 50%).
Position – how far I got – how much time I invested in the application
Chief of Staff at Will MacAskill’s office – stage 2/2: didn’t get an offer after a 2-day work trial – 32 h
Open Phil Research Analyst – stage 2/4 (?): rejected after conversation notes work test – 22 h
Open Phil biosecurity early-career researcher grant – stage 1/1: no grant – 40 h
EA Grants evaluator (CEA) – stage 2/?: rejected after first interview – 7 h
FHI Research Scholars Programme – stage 2/3: rejected after second work test – 50 h
Effective Giving UK researcher – stage 3/3 (?): no offer after what I think was the final interview – 15 h
LEAN manager – stage 2/?: rejected after work test – 6 h
CEA operations specialist – stage 3/?: rejected after interview – 9 h
CEA local group specialist – stage 2/?: rejected after work test – 12 h
2x FHI academic project manager (GovAI and Research Scholars Programme) – stage 2/2: no offer after final interview – 10 h each
Toby Ord research assistant – stage 2/?: rejected after work test – 12 h
Center for Health Security research analyst – unsolicited application and interview, but they decided not to hire at all – 10 h
Nuclear Threat Initiative researcher – unsolicited application, never heard back – 1 h
CSER biosecurity postdoc – stage 1/? – 3 h
CSER academic project manager – stage 1/? – 2 h
GPI Head of Research Operations – stage 4/4: no offer after in-person work trial – 32 h
And here are additional positions I applied for but where I did not complete the application process:
Ought COO – stopped following up after the first stage because of visa issues – 4 h (but much more if you count me researching said visa issues)
Veddis researcher – I decided not to go on the final-stage work trial – 6 h
BERI Program Manager/Investigator – I decided not to do the final-stage work test – 15 h
Adding some more data from my own experience last year.
Personally, I’m glad about some aspects of it and struggled with others, and there are some things I wish I had done differently, at least in hindsight. But here I just mean to quickly provide data I have collected anyway in a ‘neutral’ way, without implying anything about any particular application.
Total time I spent on ‘career change’ in 2018: at least 220h, of which at least about 101h were for specific applications. (The rest went to things like researching job and PhD opportunities; interviewing people about their jobs and PhD programs; asking people I’ve worked with for input and feedback; and reflection before I decided in January to quit my previous job at the EA Foundation by April.) This includes neither the week I spent in San Francisco to attend EAG SF, during which I was able to do little other work, nor 250h of self-study that seems robustly useful but which I might not have done otherwise. (Nor the 6 full weeks plus about 20h afterwards I spent doing an internship at an EA org, which overall I’m glad I did but might not have done otherwise.)
Open Phil Research Analyst—rejected after conversation notes test − 16h [edit: worth noting that they offered compensation for the time spent on the trial task]
OpenAI Fellows program—after more than 6 months I got a rejection email encouraging me to apply again within the next 12 months − 5h [plus 175h studying machine learning including 46h on a project I tried to do specifically for that application—I count none of this as application cost because I think it was quite robustly useful]
BERI project manager application—rejected immediately (the email was ambiguous between a regular desk reject and them actually not hiring at all for that role for now) − 1h
Travelling to EAG SF from Germany to get advice on my career and find out about jobs - ~1 full week plus something between USD 1,000 and 5,000, which was between 10% and 50% of my liquid runway
CEA Summer Research Fellowship [NB this was a 6-week internship, not a full-time role] - got an offer and accepted − 4.5h
2nd AI safety camp (October) [NB the core of this was a 1-2 week event organized by ‘grassroots’ efforts, and nothing that comes with funding above covering expenses] - got an offer and accepted − 1.2h
FHI Research Scholars Programme—got an offer and accepted [this is what I’m doing currently] − 30h
AI Impacts researcher—withdrew my application after the 1st interview because I accepted the FHI RSP offer − 44h [NB this was because I ‘unilaterally’ spent way more time to create a work sample than anyone had asked me to do, and in a quite inefficient way; I think one could have done an application in 1-5h if one had had a shovel-ready work sample. Again I’m excluding an additional 64h of teaching myself basic data processing and visualization skills with R because I think they are robustly useful.]
[I did manual time tracking so there might be some underestimation, with the error varying a lot between applications. A systematic error is that I never logged time spent in job interviews, but this is overall negligible.]
(I feel slightly nervous about sharing this. But I think the chance that it contributes to identifying if there are valuable changes to make in the overall talent/job landscape and messaging is well worth the expected cost; and also that as someone with a fixed-term but full-time job at an EA org I’m well-positioned to take some risks.)
One thing that might be worth noting: I was only able to invest that many resources because of things like (i) having had an initial runway of more than $10,000 (a significant fraction of which I basically ‘inherited’ / was given for things like academic excellence that weren’t very effortful for me), (ii) having a good relationship with my sufficiently well-off parents, so that moving back in with them was always a safe backup option, (iii) having access to various other forms of social support (which came with real costs for several underemployed or otherwise struggling people in my network).
I do think current conditions mean that we ‘lose’ more people in less comfortable positions than we otherwise would.
+1 to noting that the current recruitment configuration strongly favors elite (& highly privileged) applicants.
Yeah, this is one reason Open Phil pays people for doing our remote work tests, so that people who don’t happen to have runway/similar can still go through our process. Possibly more EA orgs should do this if they aren’t already.
I’d like to make this into a norm, but it does also pose a barrier for funding-constrained EA organizations by increasing the costs of hiring.
I think it’s fine to be a “norm, if you can afford it.”
If you can’t afford it, doesn’t that suggest that earning to give might not be such a bad choice after all?
Yes. Earning to give is a good choice and I’ve not suggested otherwise.
(Peter has been one of several people continuing to argue “earning to give is undervalued, most orgs could still do useful things with more funding”.)
Just a thank you for sharing, it can be scary to share your personal background like this but it’s extremely helpful for people looking into EA careers.
What do you mean by “lose”? If they stop applying to EA orgs, but take another reasonably impactful job, I’d see it as potentially positive—I don’t want people to spend so much time applying for EA org jobs!
I think there are at least two effects where the world loses impact: (i) People in less privileged positions not applying for EA jobs; sometimes one of these would actually have been the best candidate. (ii) More speculatively (in the sense that I can’t point to a specific example, though my prior is this effect is very likely to be non-zero), people in less privileged positions might realize that it’s not possible for them to apply for many of the roles they perceived to be described as highest-impact and this might reduce their EA motivation/dedication in general, and make them feel unwelcome in the community.
I emphatically agree that them taking another potentially impactful job is positive. In fact, as I said in another comment, I wish there was more attention on and support for identifying and promoting such jobs.
I absolutely agree that losing out on less-privileged colleagues would be a detriment to EA! I just think it would be better for those individuals and the world if they start working sooner, rather than spending months applying for jobs at EA organisations.
Something that seems to be missing from this (very valuable) conversation is that many people also spend months looking for non-EA jobs that they have a personal fit for. I’m mainly aware of people with science PhDs, either applying for industry jobs or for professorships. It is not uncommon for this to be a months-long process with multiple tens of applications, just as is being reported here for EA job searching. The cases where this goes faster in industry tend to be because the applicant is well established as having a key set of skills a company needs and/or has a personal network connection to people involved in hiring at the company. Some academics get lucky applying for just a few professorships, but others apply to 50+ jobs, which easily takes 100+ hours, perhaps many more. And in both cases you spend lots of time over the preceding years learning about the job-search process: how to write cover letters, teaching statements, etc.
I definitely feel some of this myself, even from being “less privileged” only in the sense that my degree is from a state university. (On most dimensions I am very privileged.)
Also I’m from the Midwest, and I feel like there’s a subtle coastal > Midwest dynamic that’s at play. (Really a subset of a larger coastal > anywhere-that-isn’t-coastal dynamic)
I really like the specific numbers people are posting. I’ll add my own (rough estimates) from the ~5 months I spent applying to roles in 2018.
Context: In spring 2018, I attended an event CEA ran for people with an interest in operations, because Open Phil referred me to them; this is how I wound up deciding to apply to most of the roles below. Before attending the operations event, I’d started two EA groups, one of which still existed, and spent ~1 year working 5-10 hours/week as a private consultant for a small family foundation, doing a combination of research and operations work. All of the below experiences were specific to me; others may have gone through different processes based on timing, available positions, prior experience with organizations, etc.
CEA (applied to many positions, interviewed for all of them at once, didn’t spend much additional time vs. what I’d have done if I just applied to one)
~4 hours of interview time before the work trial, including several semi-casual conversations with CEA staff at different events about roles they had open.
~2-hour work trial task, not very intense compared to Open Phil’s tasks
1.5-week work trial at CEA; there were approximately as many open positions as there were work trial candidates, and I’m not sure anyone went through a trial of this length and wasn’t hired (though this might have happened). I was paid at a standard hourly rate for this, so it came out to ~$1500.
Open Phil (research)
They reused my conversation notes and charity evaluation test from a previous GiveWell application (those took me ~8 hours total, so perhaps I should count it as ~4 hours per application)
The first interview took ~30 minutes (and was more of a Q&A for my benefit, not something that required too much preparation).
The next work trial was ~12 hours (I worked until almost the maximum time permitted; we were instructed not to spend more than this, and to submit an incomplete application if we ran out of time).
The second interview was ~75 minutes, and pretty intense, but not something I was asked to study for in a particular way.
When you include the resume + initial submission, this adds up to 18-22 hours (depending on how you count the reused conversation notes), for which I was paid $1100 ($300 for notes, $800 for work test). That was better than my freelance writing rate at the time, so while it was time-consuming, it wasn’t totally unsustainable.
These hours were spread out over months of waiting time, which wasn’t ideal, but given the many hundreds of people who applied, I’m not surprised the process took a while (I’d guess that staff spent something like 500 hours grading research tests and conducting follow-up interviews with the last round of candidates, which is a full month of work for three people).
Open Phil (operations)
Started with a ~45-minute informational interview (mostly for me to ask questions, didn’t require much prep)
Work test in the 2-8-hour recommended range, paid at $24/hour for up to 8 hours (which read to me as a strong signal of “don’t spend more time than this”, though I understand the pressure to keep going). It took me 2.5 hours for the four-page assignment; it was an email rather than a research report, so a bit less stressful to finish.
I joined that hiring process fairly late, and someone else was hired before I got any further. When a new position opened a few months later, Open Phil asked me to come in for a one-day visit, and they were flexible enough that I was able to combine this with another trip to the Bay for interviews (the price of flexibility, of course, is that everything takes longer for each applicant—it’s a tough tradeoff).
The visit was a full day, but didn’t involve much “work”, per se; there were ~3 hours of interviews, with the rest of the time spent on between-interview breaks, casual lunch with other operations staff (no interviews), and a visit to the daily morning meeting for ops staff.
Total: Counting travel as “half time”, 10-12 hours.
MIRI (operations)
One-hour interview to learn more about the position; I was also asked some questions, but this was more of a screening for “do you understand what MIRI does, and why”.
Initial one-hour test as part of their standard recruiting process for all staff (30 minutes of quantitative reasoning, 30 minutes of logic puzzles). I don’t think this was a stage in and of itself, but I could be wrong (I think I was always going to do the work test).
~4 hours of work tests in the MIRI office, plus a ~one-hour interview with a MIRI staffer I’d likely have worked with as part of the role (very casual, mostly me asking questions).
Total: 8 hours with travel as “half time” (this was done as part of a trip I made to the Bay to work through several interviews). I was paid $120 for my time on the work tests.
Ought (COO role)
Three interviews of ~4 hours total, which were fairly “work-like” (I was answering more questions than I asked, or discussing trial tasks)
~3 hours spent on two trial tasks; no time recommendations were given, but the work was light (“think about this brief technical article and be ready to explain it”, “think about whether we should hire someone with this resume”—I wasn’t submitting any writing, just discussing the assignments in my interviews)
I didn’t get to the “work trial” stage for this position (though I don’t know whether there was one—they may have just trialed their one favorite candidate).
Total: ~7 hours of work, all remote, and I was paid $250. Ought gets bonus points for giving me very good feedback on the ways in which my last trial task wasn’t up to par.
Vox (journalist and engagement manager positions, Future Perfect)
~7 hours on an initial work assignment, plus a 30-minute phone screening for me to ask questions. The journalist assignment took roughly the same amount of time as the engagement manager assignment.
I didn’t move beyond that in the process. Amusingly, the only non-EA organization I applied to led to my doing the greatest amount of unpaid work.
AI Impacts (operations/research role, it’s a tiny organization and I’d have been a jack-of-all-trades)
~2 hours of initial interviews before being offered a work trial
Because the role was nebulous, I wound up planning my own work trial together with AI Impacts staff. I estimated that the work would take ~20 hours total (paid at $30/hour), but wound up accepting a CEA position before starting in on the tasks.
CHAI (communications/executive assistant role)
Two interviews of ~2.5 hours total (of this, 1.5 hours was talking to Stuart Russell, which was much more exciting for me than for him).
...and that’s it. I received an offer (I think they had very few candidates) without a further work trial.
BERI (project manager)
Two interviews of ~1.5 hours total, nothing beyond that (no offer)
I also had some exploratory conversations with people at a couple of other organizations, but accepted the CEA position before getting to a formal interview.
All told, if I throw in ~5 hours for updating my resume and writing a few brief “cover letter” notes (huge props to the orgs I applied to for not requiring formal cover letters), I spent ~70 hours interviewing (with travel at half time) and was paid $1530 (outside the CEA work trial, which was another 60 hours and $1500). I’m not sure how to think about time costs from travel, but I got to meet a lot of interesting people and eat some free meals along the way.
I didn’t find any process especially aggravating, though there were small adjustments I’d suggest for some organizations (mostly the small ones that hadn’t done much interviewing). I think I was compensated fairly, and most of my interviews were genuinely useful to me, both for learning about the particular organization and for getting a better sense of my own strengths, weaknesses, and goals.
I agree with some of the criticism on this page, but I also want to point out some really good things EA orgs do with hiring:
Cross-referencing!
Open Phil passed me along to CEA as a possible operations candidate when they hired someone before I finished making it through their pipeline, and I wouldn’t have applied for most of these positions if they hadn’t done so.
Open Phil also reused my GiveWell tests so I didn’t have to write new conversation notes.
CHAI passed notes from one of my interviews to BERI when I was still applying for the latter role.
After I volunteered at a CEA event following the operations retreat, they passed my name to MIRI, who hired me to work on operations for some of their retreats, which helped me learn about their open position. These organizations talk to each other, and in my experience, that’s been a good thing.
No cover letters! (I said this once before, but it’s worth saying again.)
Compensation for time spent on work trials! It’s possible that orgs should also compensate for interviews, but the work-trial payments put EA leagues ahead of some other industries. I certainly never got paid for any of the cover letters I wrote in college, or the hours of math tests and debate prep I had to do while applying for jobs at investment firms.
Not having everything be interview-based! I don’t interview well, and spent a lot of time in college wondering whether I’d just screwed something up in an interview without noticing. My work trials, on the other hand, are concrete and visible, and if I don’t get accepted to a position, there’s at least a chance that I can learn something by reviewing my work.
For the cross-referencing, did they ask your permission first? Hopefully so. Otherwise, there can be the awkward situation where one does not actually want to work at the organization to which one has been referred.
Yes, they asked my permission first.
I strongly prefer cover letters because they give me the opportunity to frame myself in the way that I think I should be seen.
I really wish we (as an EA community) didn’t work so hard to accidentally make earning to give so uncool. It’s a path that is well within the reach of almost anyone, especially if you don’t have unrealistic expectations of how much money you need to make and donate to feel good about your contributions. It’s also a very flexible career path and can build you good career capital along the way.
Sure talent gaps are pressing, but many EA orgs also need more money. We also need more people looking to donate, as the current pool of EA funding feels very over-concentrated in the hands of too few decision-makers right now.
I also wish we didn’t accidentally make donating to AMF or GiveDirectly so uncool. Those orgs could continually absorb the money of everyone in EA and do great, life-saving work.
(Also, not to mention all the career paths that aren’t earning to give or “work in an EA org”...)
+1 for pointing out the hazard of having funding concentrated in the hands of a very few decision makers
>Also, not to mention all the career paths that aren’t earning to give or “work in an EA org”
While I share your concern about the way earning to give is portrayed, I think this issue might be even more pressing.
FWIW, without having thought systematically about this, my intuition is to agree. I’d be particularly keen to see:
More explicit models for what trainable skills and experiences are useful for improving the long-term future, or will become so in the future (as new institutions such as CSET are being established).
More actionable advice on how to train these skills.
My gut feeling is that in many places we could do a better job at utilizing skills and experiences people can get pretty reliably in the for-profit world, academia, or from other established ‘institutions’.
I’m aware this is happening to some extent already, e.g. GPI trying to interface with academia or 80K’s guide on US policy. I think both are great!
NB this is different from the idea that there are many other career paths that would be high-impact to stay in indefinitely. I think this is also true, but at least if one has a narrow focus on the long-term future I feel less sure if there are ‘easy wins’ left here.
(An underlying disagreement here might be: Is this feasible, or are we just too much bottlenecked by something like what Carrick Flynn has called ‘disentanglement’. Very crudely, I tend to agree that we’re bottlenecked by disentanglement but that there are still some improvements we can make along the above lines. A more substantive underlying question might be how important domain knowledge and domain-specific skills are for being able to do disentanglement, where my impression is that I place an unusually high value on them whereas other EAs are closer to ‘the most important thing is to hang out with other EAs and absorb the epistemic norms, results, and culture’.)
I agree—I just felt like it was well covered already by Luke’s comments.
Nice point.
‘I also wish we didn’t accidentally make donating to AMF or GiveDirectly so uncool.’
This reminds me of the pattern where we want to do something original, so we don’t take the obvious solution.
I think that earning-to-give and donating to AMF and GiveDirectly is very cool. (I did this full-time for a while, and now advise a private foundation whose funders also do this full-time.)
In fact, I can’t think of any people I’ve met within EA who don’t think doing this is very cool, and I can only think of a few who would clearly “rank” ETG below other types of work in terms of “coolness”. The most common reaction I’ve heard to people who discussed their choice to pursue ETG or direct work outside of EA (for example, studying public health with an eye toward biosecurity or neglected tropical diseases) hasn’t been “okay, good for you, too bad you don’t work at an EA org”. It’s been “that’s really wonderful, congratulations!”
(I do know that some people have heard something closer to the first reaction, which is disappointing—and part of the reason I’m so forcefully expressing my beliefs here.)
Note that “coolness” is not the same as “impact”; personally, I think it’s likely that working at GiveWell is probably higher-impact than earning-to-give and donating $10,000/year. But that doesn’t mean that working at GiveWell is cooler. In both cases, someone is devoting their life to helping others in a way that aligns with my core values in life.
The fact that the GiveWell person passed an extra work trial (assuming they both applied to GiveWell—maybe they didn’t, and the ETG person just really likes their job!) is trivial compared to the overarching fact of “holy cow, you’re both using your lives to improve other lives, it doesn’t get much cooler than that”.
I’d feel exactly the same way about someone whose life didn’t lead them down the “fancy four-year degree” plan and who donates $1000/year because that’s really all they can spare. When it comes to my internal view of “coolness”, it’s actually the thought that counts, as long as the thought involves carefully considering the best ways to use one’s resources.
I’m really glad that’s been your experience and I acknowledge that maybe my experience isn’t typical.
My experience has been more pessimistic. Honestly, I usually encounter conversations that feel more like this:
Bob: “Hi, I can donate $10,000 a year to the EA movement. GiveWell says that could save 4-5 lives a year, and it’s quite possible we could even find better giving opportunities than GiveWell top charities. This is super exciting!”
Alice: “Pff, $10K/yr isn’t really that much. We don’t need that. You should do direct work instead.”
Bob: “Ok, how about I research biosecurity?”
Alice: “Nah, you’d probably mess that up. We should just let FHI handle that. We can’t talk about this further because of infohazards.”
...Obviously this is dramatized for effect, but I’ve never seen a community so excited to turn away money.
In addition to what Peter describes: if we did a simple content analysis of forum threads or blog posts from the last 3 or so years, I suspect ETG would look nearly invisible. Long-term EAs like you and me most likely do still think it’s cool, because when we became EAs it was a huge part of the movement and probably a big part of what drew us in (in my case, certainly—I became an EA the year GWWC was launched). But that doesn’t mean this is the subtext that newer EAs are getting. I feel like the opposite is true, and I find that deeply concerning.
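(For illustration, here is a minimal sketch of what such a content analysis could look like, in Python. Everything here is an assumption for the example: the `posts.csv` file with `date` and `text` columns stands in for whatever export of forum posts you have, and the search phrases are just a starting point.)

```python
# Minimal sketch: per-year share of posts mentioning earning to give.
# Assumes a hypothetical posts.csv with columns: date (YYYY-MM-DD), text.
import csv
from collections import Counter

TERMS = ("earning to give", "earn to give", "earning-to-give")  # assumed phrases

mentions, totals = Counter(), Counter()
with open("posts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = row["date"][:4]
        totals[year] += 1
        body = row["text"].lower()
        if any(term in body for term in TERMS):
            mentions[year] += 1

for year in sorted(totals):
    print(f"{year}: {mentions[year]}/{totals[year]} posts "
          f"({mentions[year] / totals[year]:.1%}) mention ETG")
```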
Oof, 8 weeks of effort to get 0/20 positions is pretty brutal. It’s easy to see how that would feel like your “Hey you!…” paragraph. And while I suspect you’re a bit of an outlier in time spent and positions applied for, I also think you’re pointing at something true about the current situation re: job openings at EA-motivated employers, as evidenced by how many upvotes this post has gotten, some of the comments on this page, and the data I’ve got as a result of managing Open Phil’s 2018 recruitment round for Research Analysts, during which we had to say “no” to tons of applicants with quite impressive resumes.
I’ve been writing up some reflections on that recruiting round, which I hope to share soon. One of my takeaways is something like “The base of talent out there is strong, and Open Phil’s current ability to deploy it is weak.” In that way we might be an extreme opposite of Teach for America, and I suspect many other EA-motivated orgs are as well.
Anyway, I plan to say more on these topics when I share my “reflections” post, but in the meantime I just want to say I’m sorry that you spent so much time applying to EA orgs and got no offers. Setting the time investment aside, it’s also just emotionally difficult to get an “Unfortunately, we’ve decided…” email, let alone receive 20 of them in a row.
A couple other random notes for now:
- A colleague of mine has heard some EAs — perhaps motivated by considerations like those in this post — saying stuff like “maybe I shouldn’t even try to apply because I don’t want to waste orgs’ time.” In case future potential Open Phil applicants end up reading this comment, let it be known that we don’t think it’s a waste of our time to process applications. If we don’t have the staff capacity to process all the applications we receive, we can always just drop a larger fraction of applicants at each stage. But if someone never applies, we have no opportunity at all to figure out how good a fit they might be. Also, what we’re looking for is pretty unclear (especially to potential applicants), and so e.g. some of our recent hires are people who told us they probably wouldn’t have bothered applying if we hadn’t proactively encouraged them to apply. Of course, an applicant could be worried about whether applying is worth their time, and that’s a different matter.
- I think it would’ve been good to mention that some of these organizations pay applicants for some/all of the time they spend on the application process. (Hopefully Open Phil isn’t the only one?)
BTW my “reflections on the 2018 RA recruiting round” post is now up, here.
MIRI (and other EA orgs, I’d wager) would strongly second “we don’t think it’s a waste of our time to process applications. If we don’t have the staff capacity to process all the applications we receive, we can always just drop a larger fraction of applicants at each stage.”
I second the rest of Luke’s comment too. That run of applications sounds incredibly rough. The account above makes me wonder if we could be doing a better job of communicating expectations to people applying for jobs early in the process. It’s much, much easier to avoid setting misleadingly low or misleadingly high expectations when the information can be personalized and there’s an active back-and-forth, vs. in a blog post.
Hey :-)
Regarding me being a bit of an outlier: Yes, I think so as well. I personally don’t know anyone who applied for quite as many positions. I still don’t think I am a *very* special case. I also got several private messages in response to this post from people saying they had had similar experiences.
Regarding compensation: I was lucky enough to have a decent runway, so the financial aspect wasn’t crucial for me, and I simply forgot to include it. I will edit that in now.
Of course, the one who writes the post about it is likely to be the outlier rather than the median.
I doubt you’re an outlier to be honest. Though I may swing more pessimistic than average.
Just want to add my voice to the many who have already said: thank you for sharing this. It must have taken some courage.
My own experience has been similar (though I’m far less qualified than the OP).
I’ve recently graduated from one of the top ~10 universities worldwide, after investing heavily in EA throughout my studies. While a student, EA was the biggest thing in my life. I read a lot, and several of my EA peers told me I stood out as particularly well-informed about EA topics, especially long-termist ones. Eventually I contributed some of my own research too. I also invested enormous amounts of time in student EA projects. Many people, including ones I thought well-informed about the talent landscape, fully expected that I would go work for an ‘EA organisation’. Naively, I believed it too.
Over the last seven months, I’ve made over 20 unsuccessful job applications (I keep a spreadsheet). This has increased the severity of my depression and anxiety. Over time, I began to shed my identity as an EA, no doubt as a self-defence mechanism. Now I’m very disillusioned about my ability to contribute to the long-termist project.
Thanks for sharing. As someone who spends a lot of time trying to fill EA meta/longtermist talent gaps — e.g. by managing Open Phil RA recruiting, helping to match the strongest applicants we don’t hire to other openings, and by working on field-building in AI strategy/policy (e.g. CSET) — hearing stories like yours is unnerving.
What changes to the landscape, or hiring processes, or whatever, do you think would’ve made the most difference in your case?
I’m also curious to hear your reaction to my comment elsewhere about available paths:
(My answers might be very different to the ones that anonymousthrowaway might give).
Even if this is not a direct answer to your question, maybe it helps to illuminate the dynamics that led to me making that many applications before moving on to other EA things. I personally did NOT think that jobs at EA organisations had clearly higher expected value than my other top options (which were working in biosecurity basically anywhere, or upskilling in machine learning via a master’s degree). (There were very few exceptions, like Chief of Staff for Will or RA at Open Phil, which I thought were outstandingly good.)
Then why did I apply to so many positions?
1. I thought EA organisations were really talent-starved and needed me. I also thought that it would be easy to get a job. This is clearly my fault, and I think there would have been ways for me to reach a better understanding of the situation. But I went to several EAGs, talked to dozens of people, and read most of 80,000 Hours’ advice, and I think it was definitely quite easy to come away from these with the impression I had.
2. I got pretty far in a few application processes, so that encouraged me to continue applying.
3. Somehow, EA positions were the positions I heard about. These were the positions that 80,000 Hours emailed me about, that I got invitations to apply for, that I found on the websites I was visiting anyway, and so on. Of course, the bar for applying to such a position is lower than if you first have to find positions yourself. 80,000 Hours never emailed me about a position in biosecurity.
4. EA was the thing I knew most about. Since I knew a lot about the EA sphere anyway, it was easier to evaluate which organisation to apply to than in, say, biosecurity (a vast, scary field that I have very little knowledge about). If I apply to Open Phil, that is definitely at least good. If I pick a random biosecurity organisation, I have to do my homework to figure out whether it is promising or not.
5. Psychologically: My other top options (biosecurity anywhere, most likely not long-term-focused, or upskilling in ML) felt like “now I have to work really hard for some time, and maybe later I will be able to contribute to X-risk reduction”. So it still felt like I hadn’t quite made it. In contrast, working for a major EA org (in my imagination) felt like “Yes, I have made it now. I am doing super valuable, long-termism-relevant work”.
6. Working at an EA organisation was the only thing that would contribute towards the long-termist agenda that I could hope to do RIGHT NOW. For the other things, I would need upskilling. So if discount rates on work are high, that makes EA orgs more attractive. (However, I don’t believe that discount rates on “valuable work in general” are anywhere as high as the rates often cited in the EA-sphere, so that did not make me think that jobs at EA orgs are clearly better than my alternatives).
Finally, I think that the fact that I thought it would be easy to get hired by an EA organisation really is quite crucial. Points 3-6 mainly became relevant against this background. It’s the difference between
“I could apply for an EA org, or do this other thing where I have to look for options first and have less of a clue about the field, which is more inconvenient and psychologically more challenging.”
and
“I could apply for an EA org. It’s pretty unlikely that I will get hired. Might as well try it a few times, but in the meantime, let’s see what options there are in biosecurity.”
Now, without clearly defining WHAT it even is that we want to improve, here are a few ways I think things could be improved (a rather loose collection of thoughts):
To improve 1):
Communication:
Write more posts like the OP :-)
Improve communication about talent constraints (already happening)
Sadly, I don’t have many very concrete suggestions here. But I do think this is crucial, and could make the difference between the two points of view in quotation marks above.
Several things could be tweaked in application processes, mostly related to more transparency (some orgs are already doing this very well; I thought e.g. this job description was good):
Say how many applicants you had last time
If you send personalised invitations, include how many people you are sending personalised invitations to.
Clearly state upfront how involved the application processes will be. If you don’t know, then give your best guess and a longest-case scenario.
To improve 3):
Hard to say. There is probably currently no capacity for this, but if 80,000 Hours had emailed me about positions in biosecurity, I would have applied. Being presented with one position and only needing to evaluate whether it is good is much easier than having to find a position within a large, scary field. They probably really don’t have the capacity for that (which would include figuring out which positions in biosecurity are good), but maybe as a long-term vision.
To improve 4):
Good career guides would be very valuable. These don’t even need to come from 80,000 Hours; they might come from somebody who has researched something for their own career. Maybe we can have EA grants for people writing up their findings? A good career guide for biosecurity, especially one that acknowledges that countries other than the US exist ( :-) ), would have been so, so great.
Regarding biosecurity roles from 80,000 Hours: While they don’t seem to have any jobs in that field being actively promoted on their job board, they do maintain a list of “organizations we recommend” in biosecurity on this page, which might be useful for getting a head start on learning about these orgs’ work.
Thanks, that all makes sense to me. Will think more about this. Also still curious to hear replies from others here.
[deleted]
Thanks for +1ing the above comment. I’d be keen to hear your reply to this comment, too.
I feel for you :(
It would really suck if this is just a temporary supply/demand imbalance. I could even imagine us having the opposite problem in a few years’ time, if EA organizations grow exponentially and find that the EA talent pool gets used up (due to a mixture of people getting hired and people getting discouraged). After all, only ~3 years ago 80k was emphasizing we should focus more on talent gaps and less on funding gaps, and now we have stories like yours.
Just wanted to say I appreciate this post and I feel you. I had a very similar feeling when applying to receive coaching from 80K. I think they’ve improved their messaging since I applied, but at the time I didn’t fully realize that the type of person they were taking on was literally the “went to Harvard, in the middle of a machine learning PhD” type.
I also feel like generally there’s a huge under-utilization of talent in EA. I see so many cases of smart, motivated people who may not have literally the best resume ever, but could be extremely helpful *somewhere*. It’s strange considering the emphasis on talent constraints.
I don’t have an easy solution unfortunately, and it does seem to be a really hard problem, but the Teach for America-like funnel that I believe Nick Beckstead recommended, and which was brought up in a recent EA Forum post, seems really promising to me.
In a similar vein, I don’t mean to pick on 80K—of course they do great work—but I’m confused why they don’t just hire more career coaches. I realize they want to remain lean and have a really strong team, but I think they only have 2 coaches. That number is going to leave a lot of promising EAs wandering in the dark for a while.
P.S. I also know of an EA who applied to EA orgs for a long time and got rejected by all of them. They now work at one of the FAANG companies.
Hi Kevin,
Howie from 80k here. Note that I’m just one staff member and don’t speak on behalf of the whole team, but I wanted to give some thoughts on your comment.
First of all, I appreciate the feedback and I’m sorry to hear that you had a frustrating experience applying for coaching from 80k. Unfortunately, we can only work with a small fraction of the people who apply. Our current coaching page tries to make this clear and I apologize if we’ve done a bad job communicating this (either currently or in the past).
I understand that this situation might be particularly frustrating in light of the EA community’s recent emphasis on ‘talent gaps.’ We think this term has ended up causing a lot of confusion and it’s partly our fault, so we wrote a post back in November that tries to clarify our thinking on this issue.
In particular, we believe that very specific skill sets are the biggest constraint for most of our priority problems, as opposed to ‘talent’ in general. To address this, we’ve increased our focus on what we call ‘priority paths,’ which are the best ways we know of for people to use their careers to free up these specific constraints. Our current priority in coaching is to help people who have demonstrated an interest and the ability to excel in (at least) one of these paths. This is partly because we think the need for people in these paths is particularly pressing but also because we’re most able to help people in areas that are our focus. We try to make sure that we talk to the people we think we’re best placed to help with coaching in other ways too, for example some of our advice and many of the connections we can make are particularly valuable for people who don’t already have lots of current links to other effective altruists.
You don’t literally need an ML PhD from Harvard to be accepted but we agree that there are a lot of very talented people who we don’t end up advising. In our experience, these people aren’t “wandering in the dark;” many of them are already on their way to incredibly high impact careers. Our aim is also for the research and content we put out (both in writing and through the podcast) to be useful to the vast majority of our audience that doesn’t receive formal coaching.
We do hope to add to our coaching capacity sometime this year. That said, we continue to endorse our decision to hire relatively slowly. As you say, we think it’s important to stay ‘lean’ at this stage in our development. Hiring and training are time intensive and directly trade off against our capacity to develop and improve our various products. Moreover, we’ve found that it’s quite hard to hire qualified coaches. Coaches need to have a very broad and deep understanding of EA, as well as being skilled advisors, which means most good candidates have several other excellent career options. Our users often make major career decisions based on our coaching, so we think it’s essential to maintain a very high bar for these positions.
Hope this helps to at least somewhat explain where we’re coming from.
Hi Howie,
I really wasn’t expecting a reply from an 80K staff member, so I really appreciate you taking the time to give me and other readers more context. FWIW, I think the moves you describe generally make sense.
On “In our experience, these people aren’t ‘wandering in the dark’”: this is just anecdotal evidence, but I’d like to push back on that a bit. The phrase might be a slight exaggeration, but in my own case, and for a number of EAs I’ve come across, a 20-60 minute session with an expert could have helped substantially, perhaps shaving off years of semi-wasted time.
For example, in my own case, I had read many of 80K’s articles but hadn’t come across the more advanced advice of downgrading broad-based advocacy for the very top causes. I spent ~1 year working on media advocacy without fully knowing that. I’m not trying to blame 80K here; it’s just that a short conversation with an expert who could really push back on my plan and point me in a particular direction would, I think, have been really useful. The specific advice I’m talking about is here: https://80000hours.org/articles/extinction-risk/#ways-to-contribute-that-are-harder-to-get-right-advocacy-and-for-profits
I’ve also seen many other smart, motivated people kind of defaulting to software engineering (which seems to have recently been generally downgraded), or kind of doing their own thing and generally being under-employed/under-utilized imo.
Perhaps, as you note, the tradeoff in training/hiring isn’t worth it and the current model is optimal; I’m not sure. Hopefully this anecdotal evidence is at least somewhat useful. Cheers!
I think part of what might be driving the difference of opinion here is that the type of EAs who need a 45-minute chat are not the type of EAs that 80k meets. If you work at 80k, you and most of the EAs you know probably have dozens of EA friends, have casual conversations about EA, pick up informal knowledge easily, and can talk out your EA ideas with people who can engage. But the majority of people who call themselves EAs probably don’t have many, if any, friends who work at EA organizations, donate a lot, provide informal knowledge of EA, or can seriously help them figure out how to have a high-impact career.
A 45 minute discussion can therefore do a lot more good for someone outside the EA social circle than for someone who has friends who can have this conversation with them.
I agree that a quick and decisive input from someone very knowledgeable about EA and the topic involved would be very useful and save a lot of time and indecision for people evaluating career options.
I think we can provide a bit of this though through more engaged online communities around given topic areas. Not nearly as good as in person talks but people can at least get some general feedback on career ideas. I’m hoping to host an event later this year that will gather people interested in a cause area and use that as a catalyst to form a more cohesive online community. As far as I can tell (and in my experience) people tend not to engage much in an online community if they don’t really know the people well. Though it’s definitely true that some people are more than happy to engage with people they don’t know.
I don’t know how this could move forward but it seems like someone could potentially make a difference by engineering Facebook or Slack groups focused on certain cause areas to be more active places for general discussion and career advice. This would be so helpful for people who lack close contact with knowledgeable people in EA or within their cause area.
Strongly agreed. I really like Raemon’s analysis of why it’s so hard to get EA careers: we’re network-constrained. [This isn’t exactly how he frames it; it’s more my take on his idea.]
Right now, EA operates very informally, relying heavily on the fact that the several hundred people working at explicitly EA orgs are all socially networked together to some degree. This social group was significantly inherited from LessWrong and Bay Area rationalism, and EA has had great success in co-opting it for EA goals.
But as EA grows beyond its roots, more people want in, and you can’t have a social network of ten thousand, let alone a million. So we have two options: (a) increase the bandwidth of the social network, or (b) stop relying so much on the social network.
(a) increasing bandwidth looks like exactly what you’re talking about: create ways for newcomers to EA to make EA friends, develop professional relationships with EAs, etc., by creating better online platforms and in person groups.
(b) not relying on personal relationships looks like becoming more corporate, relying on traditional credentials, scaling up until people actually stand a strong chance of landing jobs via open application, etc.
(a) seems to have clear benefits with no obvious harms, as long as it can be done, so it seems very much worth it for us to try.
Hi Aidan, I’m really late to this thread, but found it interesting. If you don’t mind coming back to it, could you clarify this:
“I think part of what might be driving the difference of opinion here is that the type of EAs that need a 45 minute chat are not the type of EAs that 80k meets.”
I imagine this is true for a lot of EA org staff. It sounded from Howie’s comment like it’s probably less true for coaches at 80K, though, compared to other EA org staff.
Howie’s comment:
“We try to make sure that we talk to the people we think we’re best placed to help with coaching in other ways too, for example some of our advice and many of the connections we can make are particularly valuable for people who don’t already have lots of current links to other effective altruists.”
I find the network-constrained hypothesis interesting and am keen to explore it, so clarifying our models here seems useful.
I would also like to see more 80K coaches. Possibly you could offer a light coaching option, where people get a 30-minute time slot to ask questions openly.
I got such valuable feedback when I attended a CFAR workshop, where I had the opportunity to talk to other EAs about the positions I had in mind. Through CFAR I got access to 1-on-1 sessions with Lynette Bye from EA Coaching, and she was also very helpful. All this expanded my EA circle and got me even more opportunities. I would undoubtedly reiterate 80K’s advice that attending conferences and joining the community can help immensely.
But before doing all that, I wished I could have had a couple of quick sessions with a coach from EA/80K. Also, attending some of these events doesn’t come cheap, and they require a significant time commitment.
Just wanted to flag: I’ve been surprised and sad about how frequently people delete accounts on the EA Forum. This is a totally reasonable comment, and I’m confused about why the author would have deleted their account within 40 minutes of posting it (as seems to be the case as of the time I write this).
This is a bug :) I’ll contact the forum maintainers. I’m an active user and my name is Marko Bastovanovic.
Do you know where Nick Beckstead mentioned the Teach for America model? I don’t think that was cited in the recent Forum post, and I’m curious about what kinds of work he thought people within that “funnel” would be doing.
It’s quoted here: https://forum.effectivealtruism.org/posts/uWWsiBdnHXcpr7kWm/can-the-ea-community-copy-teach-for-america
Search for “teach for America” in this transcript: https://80000hours.org/podcast/episodes/nick-beckstead-giving-billions/
Some related half-baked thoughts:
[Epistemic status: I appreciate that there are people who’ve thought about the EA talent landscape systematically and have access to more comprehensive information, e.g. perhaps some people at 80K or people doing recruiting for major EA orgs. I would therefore place significantly more weight on their impressions. I’m not one of these people. My thoughts are based on (i) having talked 10-100 hours with other EAs about related things over the last year, mostly in a non-focussed way, (ii) having worked full-time for 2 EA organizations (3 if one counts a 6-week internship), (iii) having hired 1-5 people for various projects at the Effective Altruism Foundation, (iv) having spent about 220h on changing my career last year, see another comment. I first heard of EA around October 2015, and have been involved in the community since April 2016. Most of that time I spent in Berlin, then over last summer and since October in Oxford.]
I echo the impression that several people I’ve talked to—including myself—were or are overly focussed on finding a job at a major EA org. This applies both in terms of time spent and number of applications submitted, and in terms of more fuzzy notions such as how much status or success is associated with roles. I’m less sure if I disagreed with these people about the actual impact of ‘EA jobs’ vs. the next best option, but it’s at least plausible to me that (relative to my own impression) some of them overvalue the relative impact of ‘EA jobs’. E.g. my own guess is that a machine learning graduate course is competitive with most ‘EA jobs’ one could do well without such an education. [I think this last belief of mine is somewhat unusual and at least some very thoughtful people in EA disagree with me about this.]
I think several people were in fact too optimistic about getting an ‘EA job’. It’s plausible they could have accessed information (e.g. do a Fermi estimate of how many people will apply for a role) that would have made them more pessimistic, but I’m not sure.
I know at least 2 people who unsuccessfully applied to a large number of ‘EA jobs’. (I’m aware there are many more.) I feel confident that they have several highly impressive relevant skills, e.g. because I’ve seen some of their writing and/or their CVs. I’m aware I don’t know the full distribution of their relevant skills, and that the people who made the hiring decisions are in a much better position to make them than I am. I’m still left with a subjective sense of “wow, these people are really impressive, and I find it surprising that they could not find a job”. This contributes to (i) me feeling more pressure to perform well in my current role, and more doubt about its counterfactual impact, because I have a visceral sense that ‘the next best candidate would have been about as good as me or better’ and that ‘it would in some sense be tragic or unfair if I don’t perform well’ (these aren’t endorsed beliefs, but they still affect me); (ii) me being more reluctant to introduce new people to the EA community, because I don’t want them to have frustrating experiences; (iii) me being worried that some of my friends and other community members will have frustrating experiences [which costs attention and life satisfaction, but also sometimes time, e.g. when talking with someone about their frustration; as an aside, I’d guess that the burden of emotional labor of the latter kind is disproportionately shouldered by relatively junior women in the community]. (None of these effects are very large. I don’t want to make this sound more dramatic than it is, but overall I think there are non-negligible costs even for someone like me who got one of the competitive jobs.)
I agree that identifying and promoting impactful roles outside of EA orgs may be both helpful for the ‘EA job market’ and impactful independently. I really like that the 80K job board sometimes includes such roles. I wonder if there is a diffusion of responsibility problem where identifying such jobs is no-one’s main goal and therefore doesn’t get done even if it would be valuable. [I also appreciate that this is really hard and costs a lot of time, and what I perceive to be 80K’s strategy on this, i.e. focussing on in-depth exploration of particularly valuable paths such as US AI policy, seems on the right track to me.]
I think communication around this is really hard in general, and something that is particularly tricky for people like me and most EAs that are young and have little experience with similar situations. I also think there are some unavoidable trade-offs between causing frustration and increasing the expected quality of applicants for important roles. I applaud 80K for having listened to concerns around this in the past and having taken steps such as publishing a clarifying article on ‘talent constraints’. I think as a community we can still do better, but I’m optimistic that the relevant actors will be able to do so and certain that they have good intentions. I’ve seen EA leaders have valuable and important conversations around this, but it’s not quite clear to me if anyone in particular ‘owns’ optimizing the EA talent landscape at large, and so again wonder if there is a diffusion of responsibility issue that prevents ‘easy wins’ such as better data/feedback collection from getting done (while also being open to the possibility that ‘optimizing the EA talent landscape’ is too broad or fuzzy for one person to focus on it).
Not sure I follow the part about how the kind of thing described in the original post makes you “more reluctant to introduce new people into the EA community.” There are lots of exciting things for EAs to do besides “apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers,” including “keep doing what you’re doing and engage with EA as an exciting hobby,” “apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren’t at one of a handful of explicitly EA-motivated orgs,” and “earn to give for a while, gaining skills, and then maybe transition to more direct work later (or maybe not),” as well as other paths that are specific to particular priority causes. E.g. for AI strategy & policy I’d be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also, the worst that happens is that you’ll be in extremely high demand and highly paid).
A speculative thought I just had on one possible reason why some people are overly focussed on EA jobs relative to e.g. the other options you list here:
1. Identifying one’s highest-impact career option is quite challenging, and there is no way to easily and conclusively verify a candidate answer.
2. Therefore (and for other reasons), many people rely a lot on advice provided by 80K and individual EAs they regard as suitable advisors.
3. At least within the core of the (longtermist) EA community, almost all sources of advice agree that one of the most competitive jobs at an explicitly EA-motivated org usually is among the top options for people who are a good fit.
4. However, for most alternatives there is significant disagreement among the most trusted sources of advice on whether these alternatives are competitive (in terms of expected impact) with an ‘EA job’, or indeed good ideas at all. For example, someone who I believe many people consult for career advice discouraged me from ‘training up as a cybersecurity expert’, an option I had brought up (and, according to my own impression, still consider attractive), at least relative to working at an EA org. Similarly, there are significant disagreements about the value of academic degrees, even in machine learning (and a bunch of hard-to-resolve underlying disagreements, e.g. about how much ML experience is essential/useful for AI safety and strategy).
5. As a result, people will often be faced with a distribution of views similar to: ‘Everyone agrees working at <EA org> would be great. Many people think a machine learning PhD would be great, one or two even think it’s better for me specifically, but a significant minority thinks it’s useless. One person was excited about cybersecurity, one person was pessimistic, and most said they couldn’t comment on it.’ Perhaps if all of these opinions had been conveyed with maximal reasoning transparency, and one was extremely careful about aggregating them, this wouldn’t be a problem. But in practice I think this often means that ‘apply to <EA org>’ seems like the top option, at least in terms of psychological pull.
(Another factor contributing to the large number of applications to EA jobs, though perhaps less so to how the situation affects people, may be that few EA orgs have a very explicit model of the specific skills they require for their most competitive jobs; at least that’s my impression. As a result, they cannot offer reliable guidance that people can use to decide whether they’re a good fit, apart from applying.)
Two additional possible reasons:
1. Many people in the EA community believe it is easier to get a job at an EA organisation than it really is. People working at EA organisations, sometimes in senior positions, were surprised when they heard I didn’t get an offer (from another organisation). I’d guess around half the organisations I applied to were “surprised about the very strong field of applicants”. Past messaging about talent constraints probably also plays a role. As a result, career advice in the EA community can be overly optimistic, to the point where more than one person seriously encouraged me to apply for the COO position at OpenPhil (a position which went to the person who led operations for Hillary Clinton’s election campaign(!)). At least a year ago, when I was talking to dozens of people for career advice, I got the impression that it should be comparatively easy to get hired by an EA organisation.
2. This one is weirdly specific and only a minor point (so this comment should not be misconstrued as “the two main reasons people apply for (too) many positions at EA organisations”). I don’t know if this applies to many people, but I got quite a few heavily personalised invitations to apply for positions. I think I *heavily* over-weighted these as evidence that I would have a good chance in the application process. By now I see these invitations as very weak evidence at best, but when I got my first ones, I thought they meant I was halfway there. This was of course naive (and of course I wouldn’t think it meant anything if I got a personal letter from a for-profit company). But I am not alone in that. I recently talked to a friend who said, “By the way, I got a job offer now. Well, not really a job offer, but it is really close.” All they had gotten was a *very* personalised, well-written invitation to apply. But I would guess quite a few people had gotten one (me included). One easy way for EA organisations to avoid inducing this undue optimism would be to state transparently how many people they send personalised invitations to.
...
(PS: Your points 1 and 2 applied to me very much, but I didn’t get the impression that points 3-5 were the case. (I didn’t think people consistently recommended EA orgs over other options.))
Thanks for sharing your comment about personalized invitations, that’s interesting. At Open Phil, almost all our personalized invitations (even to people we already knew well) were only lightly personalized. But perhaps a noticeable fraction of people misperceived that as “high chance you’ll get the job if you apply,” or something. The Open Phil RA hiring committee is discussing this issue now, so thanks for raising it.
[Deleted]
It sounds like this issue is at least fairly straightforward to address: in subsequent rounds OpenPhil could just include a blurb that more explicitly clarifies how many people they’re sending emails to, or something similar.
(I’ll note that this is a bit above/beyond what I think they are obligated to do. I received an email from Facebook once suggesting I apply to their lengthy application process, and I’m not under any illusions that it gave me more than a 5-10% chance of getting the job. But the EA world sort of feels like it’s supposed to be more personal, and I think it makes for better overall information-and-resource flow to include that sort of metadata.)
FWIW: I think I know of another example along these lines, although only second hand.
Interesting, thank you for this data point. My speculation was partly based on recently having talked to people who told me something like “you’re the first one [or one of very few among many] who doesn’t clearly recommend choosing <EA org> over <some other good option>”. It’s good to know that this isn’t what always happens.
I have quantitative data on that :-)
I asked 10 people for career advice in a semi-structured way (I sent them the same document listing my options and asked them to provide rankings). These were all people I would think rank somewhere between “one of the top cause prioritization experts in the world” to “really, really knowledgeable about EA and very smart”.
6 out of 10 thought that research analyst for OpenPhil would be my best option. But after that, there was much less consensus on the second best option (among my remaining three top options). 3.5 people rated management at an EA organisation highest, 3 rated biosecurity highest, 3.5 rated an MSc in ML (with the aim of doing AI safety research) highest.
Of course, YOU were one of these ten people, so that might explain some of it :-).
I had many more informal discussions, and I didn’t think there was strong consensus either.
(Let me know if you need more data, I have many spreadsheets full of analysis waiting for you ;-) )
Sounds plausible. E.g. I’m pro “train up as a cybersecurity expert” but I know others have advised against.
In a nutshell, I’m worried that people would not find the options you list exciting from their perspective, and instead would perceive not working in one of the 20 most competitive jobs at explicitly EA-motivated employers as some kind of personal shortcoming, hence the frustration.
I think the OP is evidence that this can happen, given what the author reports.
Note that I agree with you that in fact “[t]here are lots of exciting things for new EAs” including the options you’ve listed. However, even given this considered belief of mine, I think I was overly focussed on ‘EA jobs’ in a way that negatively affected my well-being.
Even accounting for my guess that I’m unusually susceptible to such psychological effects (though not extremely so; my crude guess would be ’80th to 99th percentile’), I’d expect some others to be similarly affected even if they agree, as I do, about the impact of less competitive options.
Perhaps with “the kind of thing described in the original post” you meant to refer specifically to the issue of ‘people spend a lot of time applying for EA jobs’. Certainly a lot of the information in the OP and in one of my comments was about this. In that case I’d like to clarify that it’s not the time cost itself that’s the main cause of effects (i)-(iii) I described in the parent. In fact, I somewhat regret having contributed to the whole discussion perhaps being focused on time costs by providing more data exclusively about this. The core problem as I see it is how the OP, I, and I believe many others think about, and are psychologically affected by, the current EA job market and the surrounding messaging. The objective market conditions (e.g. the number of applicants per job) contribute to this, as do many aspects of messaging by EA orgs and EAs, as do things that have nothing to do with EA at all (e.g. people’s degree of neuroticism and other personality traits). I don’t have a strong view on which of these contributing factors is the best place to intervene.
Thanks for this, we can clearly do better. Some ideas:
- When you recruit as an org, be 100% transparent about the number of applicants you have for a given position, so that people don’t overestimate their chances.
- As a community, let’s do more to promote the creation of more impactful orgs (e.g. through communications and increased funding for early-stage initiatives).
- Scale existing orgs by following best practices so that they can recruit more. Get experienced mentors on board if needed.
- Stop talking as if there were a binary divide between what is “EA” and what is “non-EA”. This is a spectrum, and we should promote way more than just the usual ~20 EA orgs as good career options (it seems we are getting better at this, but there’s still a long way to go).
- Make it easier for EAs to collaborate on projects (e.g. by creating an online project platform), so that they can still have an impact even when they can’t or won’t be hired by an org. This could also boost the creation of new orgs that could then hire later on.
I don’t really agree with your second and third point. Seeing this problem and responding by trying to create more ‘capital letter EA jobs’ strikes me as continuing to pursue a failing strategy.
What (in my opinion) the EA Community needs is to get away from this idea of channelling all committed people to a few organisations—the community is growing faster* than the organisations, and those numbers are unlikely to add up in the mid term.
Committing all our people to a few organisations seriously limits our impact in the long run. There are plenty of opportunities to have a large impact out there—we just need to appreciate them and pursue them. One thing I would like to see is stronger profession-specific networks in EA.
It’s catastrophic that new and long-term EAs now consider their main EA activity to be to apply for the same few jobs instead of trying to increase their donations or investing in non-‘capital letter EA’ promising careers.
But this is hardly surprising given past messaging. The only reason EA organisations can get away with hiring rounds that are very expensive for applicants is that there are a lot of strongly committed people out there willing to take on that cost. Organisations cannot get away with this in most of the for-profit sector.
*Though this might be slowing down somewhat, perhaps because of this ‘being an EA means applying unsuccessfully for the same few jobs’ phenomenon.
I disagree that “organizations cannot get away with this in most of the for-profit sector”, at least when it comes to the kinds of for-profit jobs people in EA are likely to apply for.
I applied to ~10 different EA roles in 2018 (depending on how you count):
The longest process, from Open Phil, involved roughly the same number of rounds, and the same amount of time, as the most time-intensive job I applied to out of college (at a hedge fund). They paid me for my time; the hedge fund didn’t.
CEA was a round shorter than that, and involved maybe 6 total hours of work before my work trial (at which point I had a very good chance of being hired—also, I was paid at a reasonable rate during the trial).
Of the other positions where I reached the final round or got an offer, none took more time than the job I accepted out of college (at a software company); most were along the lines of “one work test, a short interview, and a longer interview or set of interviews on-site”. This seems to me like the standard in several high-skilled industries.
Out of roughly 20 jobs and internships I applied for in college (and reached an interview round for), none of them took less time than the median EA position for which I received an interview, usually because I spent several hours on a custom cover letter and other first-round materials before even getting an interview. As far as I know, most EA orgs don’t require cover letters, which seems really good.
Meanwhile, the hiring process for medical and legal positions, as far as I’ve heard from people I know in those industries, is often longer and less transparent than the EA process.
Is there an area of the for-profit sector that you think does especially well in keeping the hiring process brief and/or transparent for applicants, while still finding good people?
My husband is a software developer. He normally does a screening phone interview, a technical test (1-4 hours) and an in-person interview (which may involve other technical questions/tests). The whole process would take 4-8 hours.
I used to be a teacher. I normally did a job application and a teaching demonstration/interview. The whole process normally took 4-8 hours.
I can’t tell you if these processes were better or worse than EA org processes; I can only tell you that I now see 4-8 hours as a normal amount of time to spend applying and interviewing for a professional job.
When I applied to Google I did a phone interview and a full day of in-person interviews, plus a 1-hour conference call about how to do well in the second round. Lots of people devote significant time brushing up their coding interview skills as well; I only didn’t because things like Project Euler had brushed up those skills for me.
The job I took out of college included the tasks you mentioned, plus an overnight trip to the company for a series of interviews, which (if you log travel time as half of interview time) came out to something like 12 hours on top of the other tasks, or 16-20 hours total.
For an example from a different industry, the Vox Future Perfect work test was unpaid (unlike most EA work trials I’ve seen) and took me ~7 hours (I had a good amount of prior journalism experience and was familiar with the style they wanted). I don’t remember them giving any kind of guidance on how much time to spend, and I wouldn’t be surprised if other applicants spent much more.
As far as I know, this is pretty common for entry-level writing positions at publications (senior positions may rely more on reading work you’ve already done).
I agree with all of this, though I’d add that I think part of the problem is the recent denigration of earning to give, which is often all that someone realistically *can* do, at least in the short term.
Yes I agree 100% that merely trying to create more EA jobs won’t be enough, hence my 4th point. What I am suggesting is that we should both increase our internal capacity *and* change our message by making it clear that the work done at EA-branded orgs is only the tip of the iceberg when it comes to having an impact.
Thanks for your comments. I already have a draft of a follow-up post on how I think the EA community could improve; hopefully I will have time to write it up soon. Your points all seem to be good suggestions (with the caveat Denise mentions).
A more emotional data point, which I wish I’d seen people expressing when I was new:
If you are excited about EA, relatively new, and eager to get involved in a way that looks more like “make this my career” than “make this a notable interest in my life,” I think you should expect to feel a ton of pain, and reflect carefully on whether that’s worth it for you.
My experience trying to “break in” to the EA movement’s professional class has at times made me feel like a worse person, less connected to my work, less confident in my ability to try my best, and basically more deeply miserable than any other professional situation I’ve been in by far. Other people I’ve met have described similar emotional experiences, or said their friends have had them.
If you’ve always had an easy/nice time at work, it can be hard to appreciate how bad this can feel up front.
I am not convinced that even people who succeed are particularly happy. I suspect it is happier on balance to have an ordinary job, read interesting things about altruism online, occasionally give talks at your local college, and give away 10% of your income to a highly effective charity.
If you don’t have a strong reason to believe you have an advantage here, and you’re not willing to substantially reduce your personal wellbeing for at least a year, I suggest being very wary of “all hands on deck” sorts of messaging and to put as few of your eggs in the “maybe I’ll be a professional EA” basket as possible.
Sorry to hear how much misery you’ve experienced. I’m curious to ask a follow-up question, but feel free to ignore if you aren’t comfortable answering.
In particular, I’m wondering whether “make [EA] my career” feels ~identical (to you) to “work at a handful of explicitly EA-motivated employers.” If it does, then maybe the messaging or energy or something in the EA community is pretty far from what I think it should be, which is more like what I said in another comment:
The problem (for people like me, and may those who enjoy it keep doing so), as I see it: this is an elite community. Which is to say, this is a community primarily shaped by people who are and have always been extremely ambitious, who tend to have very strong pedigrees, and who are socialized with the norms of the global upper/top professional class.
“Hey you could go work for Google as a machine learning specialist” sounds to a person like me sort of like “Hey you could go be an astronaut.” Sure, I guess it’s possible. “Hey you could work for a nice nonprofit with all these people who share your weird values about charity, and join their social graph!” sounds doable. Which makes it a lot more damaging to fail.
People like me who are standardized-test-top-tier smart but whose backgrounds are pretty ordinary (I am inspired to post this in part because I had a conversation with someone else with the exact same experience, and realized this may be a pattern) don’t tend to understand that they’ve traveled into a space of norms that is highly different than we’re used to, when we join the EA social community. It just feels like “Oh! Great! I’ve found my community of smart people who actually care about getting to work improving the world! Let’s roll up our sleeves together.”
Unfortunately, this can be a costly mistake. As soon as you start making moves that would feel natural in other contexts, like parlaying steady contract work into a regular job, you are likely to run into a very unpleasant brick wall.
Some examples of differences between elite culture and non-elite culture:
1. In elite culture, you’re expected to be very positive in professional settings. You’re expected to say “exciting” a lot, to call things “awesome,” and to thank people creatively and effusively. In non-elite culture, there is no such expectation, and displays of extreme enthusiasm about work don’t go over that well. Even at full enthusiasm-as-lived-experience you’re unlikely to display it in the same way as someone well-versed in elite culture norms. This may get you called a downer.
2. In elite culture, there’s a lot of flexibility, and people often have “runway” when hunting for jobs. So, for example, if someone asks you to take a two week trial period from elite culture, it may not even occur to them that this will require you to quit your job. They may then even admonish you for having quit, should they reject you.
3. In elite culture, lots of people talk about their productivity habits socially. There is a lot of social media posting about productivity techniques, self-help books, etc. Sometimes this can create a cargo cult effect where people feel like that’s what they’re missing, and they parrot the style and pursue lots of productivity boondoggles. I don’t think this tends to work.
Trying to break class barriers is very risky and often excruciating. It also tends to make you feel crazy, since you can feel bias creeping in against you, but you never know for sure if it’s not just perfect meritocracy correctly filtering someone weak like you away from Mount Olympus.
On the plus side, you can get used to it, stop trying to break in, and basically enjoy a position as a highly useful and well-supported element of the professional EA fringe. But at least for me it cost me a year and a half of severe depression. I wouldn’t wish that on anyone else.
Sorry to hear about your long, very difficult experience. I think part of what happened is that it did in fact get a lot harder to get a job at leading EA-motivated employers in the past couple years, but that wasn’t clear to many EAs (including me, to some extent) until very recently, possibly as recently as this very post. So while it’s good news that the EA community has grown such that these particular high-impact jobs can attract talent sufficient for them to be so competitive, it’s unfortunate that this change wasn’t clearer sooner, and posts like this one help with that, albeit not soon enough to help mitigate your own 1.5 years of suffering.
Also, the thing about some people not having runway is true and important, and is a major reason Open Phil pays people to take our remote work tests, and does quite a few things for people who do an in-person RA trial with us (e.g. salary, health benefits, moving costs, severance pay for those not made a subsequent offer). We don’t want to miss out on great people just because they don’t have enough runway/etc. to interact with our process.
FWIW, I found some of your comments about “elite culture” surprising. For context: I grew up in rural Minnesota, then dropped out of counseling psychology undergrad at the University of Minnesota, then worked at a 6-person computer repair shop in Glendale, CA. Only in the past few years have I begun to somewhat regularly interact with many people from e.g. top schools and top tech companies. There are aspects of interacting with such “elites” that I’ve had to learn on the fly and to some degree am still not great at, but in my experience the culture in those circles is still pretty different from the culture at major EA-motivated employers, even though many of the staff at EA-motivated employers are now people who e.g. graduated from schools like Oxford or Harvard. For example, it’s not my experience that people at major EA organizations are as effusively positive as many people in non-EA “elite” circles are. In fact, I would’ve described the culture at the EA organizations I interact with the most in sorta opposite terms, in that it’s hard to get them excited about things. E.g. if you tell one of my Open Phil RA colleagues about a new study in Nature on some topic they care about, a pretty common reaction is to shrug and say “Yeah but who knows if it’s true; most of the time we dig into a top-journal study, it completely falls apart.” Or if you tell people at most EA orgs about a cool-sounding global health or poverty-reduction intervention, they’ll probably say “Could be interesting, but very low chance it’ll end up looking as cost-effective as AMF or even GiveDirectly upon further investigation, so: meh.” Also, EA-motivated employers are generally not as “credentialist,” in my experience, as most “elite” employers (perhaps except for tech companies).
Finally, re: “you never know for sure if it’s not just perfect meritocracy correctly filtering [certain people out].” I can’t speak to your case in particular, but at least w.r.t. Open Phil’s RA recruiting efforts (which I’ve been managing since early 2018), I think I am sure it’s not a perfect meritocracy. We think our application process probably has a high false negative rate (i.e. rejecting people who are actually strong fits, or would be with 3mo of training), and it’s just very difficult to reduce the false negative rate without also greatly increasing the false positive rate. Just to make this more concrete: in our 2018 RA hiring round, if somebody scored really well on our stage-3 work test, we typically thought “Okay, decent chance this person is a good fit,” but when somebody scored medium/low on it, we often threw up our hands and said “No clue if this person is a good fit or not, there are lots of reasons they could’ve scored poorly without actually being a poor fit, I guess we just don’t get to know either way without us and them paying infeasibly huge time costs.” (So why not just improve that aspect of our work tests? We’re trying, e.g. by contracting several “work test testers,” but it’s harder than one might think, at least for such ill-defined “generalist” roles.)
Thanks for this context, and for your warm replies in general. I really do feel okay now, I just also feel like I want people to not fall through the same sorts of cracks that I did in the future. I should also be clear that in my experience, lots of people in EA are from a wide variety of backgrounds. But enough core EAs are from a more elite background that’s hard to detect up front, and the culture shock, at least for me, was massive.
I think EA culture is really good relative to general elite culture as I understand it. It’s still the coolest professional culture I’ve been a part of. But I think norms really are a bit different than what I’m used to, in ways I find hard to place, and beyond the ways that are deliberate and reflectively good.
On further reflection re: enthusiasm, I think it’s mostly a difference in enthusiasm around gratitude, specifically. People seem to display gratitude in a different way, which feels a lot more effusive.
I think the EA community is unusually meritocratic, though of course I agree it’s imperfect. I’m glad people are working on making it even better. The fact that it’s unusually meritocratic does make it a bigger emotional hazard, though: it’s easier to shrug off harsh judgments when you distrust the party making them.
In general I find this stuff very difficult to talk about, and feel low confidence and emotional about all of it. But it also feels very salient to me, so I have an impulse that it should be in the conversation somewhere. I hope that other people with painful experiences will share their stories and impressions, too.
>”The problem (for people like me, and may those who enjoy it keep doing so), as I see it: this is an elite community. Which is to say, this is a community primarily shaped by people who are and have always been extremely ambitious, who tend to have very strong pedigrees, and who are socialized with the norms of the global upper/top professional class.”
I wish this were shouted from the rooftops. Literally all the discourse around talent and jobs that I have come across to date in EA has frustrated me because of how this goes unremarked. As you say, many of the ideas that are discussed as the most natural and easy thing in the world are really like ‘go be an astronaut’ to normal humans. Having said that...
>”In elite culture, you’re expected to be very positive in professional settings. You’re expected to say “exciting” a lot, to call things “awesome,” and to thank people creatively and effusively. In non-elite culture, there is no such expectation, and displays of extreme enthusiasm about work don’t go over that well. Even at full enthusiasm-as-lived-experience you’re unlikely to display it in the same way as someone well-versed in elite culture norms. This may get you called a downer.”
I’m not sure I recognise this. I mean… my experience of every workplace I’ve encountered, from being a barista through to LEAN manager, has been that there is pressure to be more positive and chirpy than I personally deem sincere or accurate. Reading this as a Brit, I also wonder if you’re describing the American elite. I cautiously guess that this wouldn’t describe German workplaces very well either. But generally I do think that there are a heck of a lot of class factors involved here, and I often worry that the community isn’t adequately switched on to these.
I really appreciate you writing this. You are not the first person to consider doing so and I applaud you for actually doing it.
People interested in this topic should read this post about how EAF ran their hiring round. They seem to have a good setup: only 25% of applicants were asked to take a work test, and only 25% of those tested were invited to a work trial (the other 64 of 68 were estimated to have invested <10 hours).
I’d love to see other organizations within the EA community publish similar numbers, though I understand that it can be tricky (e.g. someone might learn they were the only person interviewed who didn’t make it to the next stage).
The following is a rough breakdown of the percentage of people who were not asked to move on to the next round in the Charity Science hiring process. These numbers assume one counterfactual hour of preparation for each interview and no preparation time outside of the given time limit for test tasks.
- ~3* hours invested (50%) - Cover letter/resume
- ~5 hours invested (20%) - Interview 1
- ~10 hours invested (15%) - Test task 1
- ~12 hours invested (5%) - Interview 2
- ~17 hours invested (5%) - Test task 2
- ~337 hours invested (2.5%) - Paid 2-month work trial
- Hired (2.5%)
So, 95% of applicants spent 17 hours or less, 90% spent 12 hours or less, and 70% spent 5 hours or less (see the sketch below for the arithmetic).
*changed from 1 hour to 3 hours based on comments
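If anyone wants to check those cumulative figures, here is a minimal sketch in Python, using the stage hours and percentages from the breakdown above (the helper name `share_at_or_under` is just for illustration):

```python
# Rough sanity check of the hiring-funnel numbers above.
# Each entry: (cumulative hours invested by someone exiting at this
# stage, percentage of all applicants exiting there).
funnel = [
    (3,   50.0),   # cover letter/resume
    (5,   20.0),   # interview 1
    (10,  15.0),   # test task 1
    (12,   5.0),   # interview 2
    (17,   5.0),   # test task 2
    (337,  2.5),   # paid 2-month work trial, not hired
]

def share_at_or_under(max_hours):
    """Percentage of all applicants who invested at most max_hours."""
    return sum(pct for hours, pct in funnel if hours <= max_hours)

for threshold in (5, 12, 17):
    print(f"<= {threshold}h: {share_at_or_under(threshold):.0f}% of applicants")
# <= 5h: 70% of applicants
# <= 12h: 90% of applicants
# <= 17h: 95% of applicants

# Rough expected time cost per applicant (ignoring the 2.5% hired):
expected_hours = sum(hours * pct / 100 for hours, pct in funnel)
print(f"expected time cost: ~{expected_hours:.0f}h")  # ~14h
```

Notably, the long tail matters: the 2.5% who reach the work trial contribute more than half of the expected time cost per applicant.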
One hour for a CV/cover letter seems extremely optimistic...
For what it’s worth, I think I’m unusually slow at this, but I personally couldn’t come close to writing a cover letter in an hour unless I had already written one for a nearly identical job (e.g. in college I applied for economics research assistant positions at most branches of the U.S. Federal Reserve; the cover letters for the Boston Fed and NY Fed were ~the same).
For a job I really cared about, I think three hours would be about the median if you don’t count all the procrastination because I despise writing cover letters.
I’m *not* claiming this is typical. Just thought it would be useful to make it clear that there’s a *really* wide range in how long job applications take applicants (perhaps depending primarily on level of neuroticism :P).
OK, given that multiple people think this is off, I have changed it to 3 hours to account for variation in application time.
The original estimate came from asking ~4 successful employees who were hired.
~1h sounds like the time to make a CV and cover letter personalized for Charity Science starting from an at least semi-relevant CV and cover letter for a previous job application.
My sense is they already had a CV that required very minimal customization and spent almost all the time on the cover letter.
I second Howie’s observation that there is just a really wide range.
Not just depending on neuroticism and prior job applications, but also on writing talent. I expect people from the physical sciences to take longer and find it more of a pain. I take between one hour and ninety minutes for a cover letter, and I have four CVs that I modify slightly, so I don’t often take more than two hours in total.
Great post. I’m sure writing this must have been tough, so thanks very much for sharing this.
I wonder how much of the interview/work-test material is duplicated between positions. If there’s a lot of overlap, then maybe it would be useful for someone to create the EA equivalent of TripleByte: run initial interviews/work projects through a third-party organization to evaluate quality, then pass candidates along to the most relevant EA jobs.
Thank you and respect for having written this. I really appreciate this, particularly you being open about this having been mentally challenging for you and the concrete data on time cost you provide.
Thanks for sharing this; strong upvoted! As someone running hiring rounds for EA orgs, this helped me better understand the perspective of applicants.
Same perspective here! Thank you for sharing.
I see a lot of people from EA orgs reply this way. It’s a good sign!
Same for Rethink. I definitely appreciate this post, and we tried to make the application process swift and yet as informative as possible on both ends.
I ran my first hiring process to hire someone for an EA role last year and was amazed at how long it took me. I’ve hired around 20 times in the past, and only spent a couple of weeks and 20-40h per role. Last year I spent 8 months and hundreds of hours. I reflected afterwards on why, and can list a few hypotheses:
- Normally I rely heavily on gut feeling to build my shortlist. I did not feel comfortable doing this for this role, as it felt like there were so many failure modes for a bad hire, both in the number of ways a hire could go badly and in the severity of the impact of a bad hire.
- Normally, relying heavily on intuition is highly reversible. Worst-case scenario, I have to fire the candidate after probation, I’ll never see them again, and no one knows them. I’m open with candidates that this is my policy and that they should be careful accepting an offer. In EA I felt like everyone knows everyone, and a fired hire could cause significant reputational damage via a one-sided narrative. I don’t endorse this view as rational, but the fear was definitely a factor in why I took so long.
- I was hiring for a role that defies regular role definition. No one applying to the role had applied to a similar position before, let alone worked in one. Potentially this was the largest factor, and my other points are moot.
- I wasn’t hiring someone to have skills similar to mine; instead I was hiring someone to have the skills I don’t have. Normally I would judge experience, passion, intelligence, lateral-thinking ability, ambition, and team fit, then let a team lead judge specific ability.
- Many candidates treated the process like a two-way application the whole way through. This threw off my intuitions; normally I would have dropped all candidates who weren’t signalling that they were specifically very excited about my role (first calls excluded).
- Many candidates’ conversations included career advice from me. This threw off my intuitions, but I consider it time well spent in all the cases where I spent over 2 hours.
- I worried a lot about how much of others’ time I was using. Assuming each candidate spent 4x more time than I did, I used over a thousand hours of people’s time.
- Ultimately I made offers to two candidates, both of whom I had had strong gut feelings about very early, which was rewarding but also highly frustrating.
The key thing I intend to change next time is to be much faster. I didn’t feel like (for me) the extra process complexity and caution added that much insight, and crucially, it threw off my intuition.
The main downside of reduced complexity seems to be the increased chance of a bad hire and the potential damage of firing them. I think next time I will return to my original method and be very transparent with the person I make an offer to that their 3-month probation is not just a formality, pointing them to this article as an explanation of why it’s not worth the cost to others for me to run a long, drawn-out process that may only slightly reduce the risk of a bad hire.
Caveat:
** I do not advocate anyone else doing this unless they are confident in their hiring intuitions. I also haven’t tried it yet and it may go terribly. **
Thank you to the OP for posting. Illuminating!
Re: “Ultimately I made offers to two candidates, both of whom I had had strong gut feelings about very early, which was rewarding but also highly frustrating.” I hope this comment doesn’t come across as incredibly mean, but are you getting that from notes made at the time? When I find myself thinking “this is what I thought we’d do all along,” I start to suspect I’ve conveniently rewritten my memories of what I thought. Do you have a sense of how many candidates you had similarly strong positive gut feelings about?
Thank you for a very helpful comment!
haha—good question. And yes, from notes.
I wonder whether this is just a result of people on both sides of the application process knowing each other in a social context.
If the candidate knows they will interact with people making the hiring decision in the future, they might not want them to feel bad about rejecting them. The people making the hiring decision might arguably feel less bad about not hiring someone if the candidate wasn’t that excited. Lack of excitement also allows the candidate to save face if they get rejected, which also only matters because the candidate and the person making the hiring decision might interact socially in the future.
Thanks for this; it’s interesting to think about why gut intuitions didn’t carry over into the EA hiring process.
Wouldn’t this be ameliorated by providing a reference where you clearly state your views about the hire? I think most hiring managers understand that not every role is a fit for every person.
Huh, what were the skills you were trying to hire for? Could an advisor or board member with that skillset have been looped into the hiring process?
Meta note: I believe this is now the most upvoted EA forum post of all time by a wide margin. Seems like it struck a chord with a lot of people. It’s probably worthwhile for people to write follow-up posts exploring issues related to human capital allocation, since it is a pretty central challenge for the movement. Example prompts:
- Brainstorming Task Y for someone with an absurdly impressive resume.
- Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement’s current bottlenecks?)
- Consequentialist cluelessness and how it relates to funding speculative projects and early-stage organizations (some previous discussion here).
A meta point: A lot of the discussion here has focused on reducing the time spent applying. I think a more fundamental and important problem, based on the replies here and my own experiences, is that many, many EAs feel that either they’re working at a top EA org or they’re not contributing much. Since only a fraction of EAs can currently work at a top EA org due to supply vastly exceeding demand, even if the time spent applying goes down a lot, many EAs will end up feeling negatively about themselves and/or EA when they get rejected. See e.g. this post by Scott Alexander on the message he feels he gets from the community. A couple of excerpts below:
I agree that if it’s true that “many EAs feel that either they’re working at a top EA org or they’re not contributing much,” then that is much worse than anything about application time cost and urgently needs to be fixed. I’ve never felt that way about EA org work vs. alternatives, so I may have just missed that this is a message many people are getting.
E.g. Scott’s post also says:
…and my reply is “Yes, talent-constrained also means those other things, and it’s a big problem if that was unclear to a noticeable fraction of the community.”
FWIW I suspect there’s also something a bit more subtle going on than overly narrow misunderstandings of “talent-constrained,” e.g. something like Max Daniel’s hypothesis.
I think the votes for the old posts are not directly comparable with those on the new forum, since previously individuals could not give more than one upvote to a post. It may still be that this post would have been the most upvoted of all time even if the new voting system had been used for those old posts, however.
Good point, but this one has still received the most upvotes, if we assume that a negligible number of people downvoted it. At the time of writing, it has received 100 votes. According to https://ea.greaterwrong.com/archive, the only previous posts that received more than 100 points have fewer than 50 votes each. As far as I can tell, the second and third most voted-on posts are Empirical data on value drift at 75 and Effective altruism is a question at 68.
At time of writing, this post has 100 unique votes. Most are probably upvotes given its current karma (193).
Not 100% sure, but I don’t recall any post on the old Forum having 100 votes.
Promoting donations or earning to give seems fine. I think we should stop promoting ‘EA is talent constrained’. There is a sense in which EA is ‘talent constrained’, but the current messaging around it consistently misleads people, even very informed people such as the OP and some of the experts who gave him advice. On the other hand, EA can certainly absorb much more money. Many smaller orgs are certainly funding constrained. And at a minimum, people can donate to GiveDirectly if the other giving opportunities are filled.
Couldn’t agree more!
+1, thank you for highlighting this.
I’d love to collaborate with folks on the cluelessness aspect of this.
I believe GPI is doing work on further specifying what we mean by cluelessness & developing a taxonomy of it.
I’m personally interested in better understanding on-the-ground implications of cluelessness, e.g. what does it imply about which areas to focus on presently? Some preliminary work in that direction here.
I’ve thought a lot about cluelessness, and I could give you feedback on something you’re thinking of writing.
Nice. I’ve already written a sequence on it (first post here) – curious for your thoughts on it!
Also, I think Richard Ngo’s working on a piece on the topic, building off my sequence & the academic work that Hilary Greaves has done.
I wrote some comments on your sequence:
- Most near-term interventions likely won’t be pivotal for the far future, so we can ignore their long-term effects to cooperate with near-term focused value systems.
- Fight ambiguity aversion.
- Fight status quo bias.
- Balance steering capacity with object-level action.
Unexpected outcomes will largely fall into two categories: those we think we should have anticipated, and those we don’t think we reasonably could have anticipated. For the first category, I think we could do better at brainstorming unusual reasons why our plans might fail. I have a draft post on how to do this. For the second category, I don’t think there is much to do. Maybe there will be a blizzard during midsummer all over California this year, and I will hold Californian authorities blameless for their failure to prepare for that blizzard.
I stumbled across this today; haven’t had a chance to read it but it looks relevant.
CEA runs the EA community fund to provide financial support to EA community group leaders.
The key metric that CEA uses for evaluating the success of the groups they fund is the number of people from each local group who reach the interview stage for high-impact jobs, which largely means jobs within EA organisations. Bonus points if they get the job.
This information feels like a relevant piece of the puzzle for anyone thinking through these issues. It could be that, in hindsight, CEA pushing chapter organisers to push people to focus on jobs in EA organisations might not be the best strategy.
Hi, thanks for the comment.
I’m the project lead on EA Community Building Grants, which I think is what you’re referring to here.
The accredited positions include those that are in EA organisations, but many of the accredited positions are not. For example, accredited positions include doing an economics PhD at a top university, working at Facebook AI Research, and many positions listed on the 80,000 Hours job board.
Based on the data submitted by groups so far, I’d expect the majority of positions we accredit to not be positions at EA organisations, though we’re still in the early stages of receiving and analysing data here.
Our previous update post provides some more context on our evaluation criteria.
https://forum.effectivealtruism.org/posts/RZrikMAuTwt4e9Fs4/ea-community-building-grants-update
I’m sorry to see so many orgs take 10+ hours to get you only partway through the process, let alone multiple 40+ hour processes. This is especially glaring compared to the very low number of orgs that rejected you in under 5 hours.
It sounds like many of these orgs would benefit (both you and themselves!) from improving their evaluations to reject people earlier in the process.
My team at Wave’s current technical interview process is under 10 hours over 4 stages (assuming you spend 1 hour on your cover letter and resume); the majority of rejections happen after less than 5 hours. The non-technical interview process is somewhat longer, but I would guess still not more than 15 hours and with the majority of applications being rejected in under 5 hours (the final interview is a full day).
Notably, we do two work samples, a 2hr one (where most applicants are rejected) and a 4-5hr one for the final interview. If I were interviewing for a non-technical role I’d insert a behavioral interview after the first work sample as well. These shorter interviews help us screen out many candidates before we waste a ton of their time. It’s hard for me to imagine needing 8+ hours for a work sample unless the role is extremely complex and requires many different skills.
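To put rough numbers on why cheap early stages matter, here is a toy calculation of the hours an average applicant invests under a staged funnel versus a single long work test. (A sketch only; the stage costs and pass rates below are made up for illustration, not our actual statistics.)

```python
# Expected hours one applicant spends, given per-stage costs and pass rates.
# All numbers are illustrative, not real hiring data.

def expected_applicant_hours(stages):
    """stages: list of (hours_for_stage, probability_of_advancing)."""
    total, p_reach = 0.0, 1.0
    for hours, p_pass in stages:
        total += p_reach * hours  # only applicants who reach a stage pay its cost
        p_reach *= p_pass         # fraction of applicants who advance further
    return total

# Staged funnel: cheap early screens, expensive steps only for a few.
funnel = [(1, 0.5), (2, 0.3), (2, 0.5), (5, 0.5)]
# One long work sample given to everyone who applies.
single_test = [(10, 0.1)]

print(expected_applicant_hours(funnel))       # ~2.7 hours on average
print(expected_applicant_hours(single_test))  # 10 hours for every applicant
```

The point is just that most of the applicant-hour savings come from rejecting the bulk of candidates before the expensive stages.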
Wave is trying to do a much easier assessment than EA orgs mostly are; lots of people have thought about how to hire software engineers, and software engineering is a well-established industry with lots of established wisdom about how orgs should be structured. EA jobs often have much less precedent, so we shouldn’t be surprised that we don’t know how to figure out as efficiently whether people are likely to be good fits.
I think the reason the OP had a high fraction of ‘long’ processes had more to do with him being a strong applicant who would get through a lot of the early filters. I don’t think a typical ‘EA org’ hiring round passes ~50% of its applicants to a work test.
This doesn’t detract from your other points re. the length in absolute terms. (The descriptions from OP and others read as uncomfortably reminiscent of more senior academic hiring, with lots of people getting burned competing for really attractive jobs.) There may be some fundamental trade-offs (the standard argument about ‘*really* important to get the right person, so we want to spend a lot of time assessing plausible candidates to pick the right one; false negatives at intermediate stages cost more than false positives, etc.’), but an easy improvement (mentioned elsewhere) is to communicate as best as one can the likelihood of success (perhaps broken down by stage) so applicants can make a better-informed decision.
This is why I think Wave’s two-work-test approach is useful; even if someone “looks good on paper” and makes it through the early filters, it’s often immediately obvious from even a small work sample that they won’t be at the top of the applicant pool, so there’s no need for the larger sample.
Per Buck’s comment, I think identifying software engineering talent is a pretty different problem than identifying e.g. someone who is already a good fit for Open Phil generalist RA roles.
A large part of Wave’s engineer hiring process was aimed at assessing fit with the team & the mission (at least when I was there), which seems similar to part of the problem of hiring Open Phil RAs.
Nearly all of Open Phil’s RA hiring process is focused on assessing someone’s immediate fit for the kind of work we do (via the remote work tests), not (other types of) fit with the team and mission.
Not super clear on the distinction you’re drawing; I feel like a lot of “team fit” and “mission fit” flows from stuff like how similar the candidate’s epistemology & communication style are to the firm’s.
Seems like those sorts of things would also bear on a candidate’s immediate fit for the kind of work the firm does.
I think there are probably a few things that some EA orgs could improve and I hope to write a post about it soon. In the meantime, it might be useful to explain where some of these high numbers come from:
1. Untimed work tests (e.g. OpenPhil research analyst):
I think most EA orgs underestimate how much time a work test takes. Take, for example, the conversation notes test in OpenPhil’s application procedure. In the email instructions for the test, you will find the following line: “Historically, we think people have spent 2-8 hours on this assignment.” But there is no indication of how much time you should/are allowed to spend. And since everyone knows that the process is really competitive, and your results keep on improving as you invest more time, many people invest a lot of time. I spent 16 hours on the task. I asked three other people how much time they had spent, and it was 8 h, 16 h, and 24 h.
2. Research proposals (e.g. FHI research scholar programme, OpenPhil biosecurity early career researcher grant):
Writing a research proposal just takes a lot of time. I spent 30 hours on my proposal for FHI. I know of 4 other people who applied. These are the times they spent on the proposal (full-time): one day, one week, one week, several weeks.
3. Trying to be really well prepared (my own fault, no one forced me to do that):
Knowing that the positions are competitive, I would often spend several hours preparing for (later-stage) interviews. E.g. when applying for the CEA local group specialist role, I spent 4-5 hours reading and thinking about CEA’s strategy in movement building.
4. Travel time:
As stated in the post, I counted travel time at 50%. And Oxford is really far off :-)
---------
So depending on what exactly Wave’s application process looks like, I might potentially have spent more than 10 hours on it as well :-)
Thanks for mentioning the thing about the conversation notes test. It was simply an oversight to not explicitly say “Please don’t spend more than X hours on this work test,” and I’ve now added such a sentence to our latest draft of those work test instructions. We had explicit time limits for our other two tests.
I would advocate for controllably timed work tests whenever possible. Simply saying “please don’t spend more than X hours on this work test” gives the opportunity to cheat by spending more time. Incentives for cheating are strong, because:
- The tasks usually have tight time limits, so spending additional time will improve your results.
- Applicants know the application process is highly competitive.
- Applicants know that EA organisations put a lot of value on work test performance.
If you have enough applicants, some will cheat, and they will get a significant advantage. In rare cases, this may even deter people from applying. There was one position where I was planning to apply but then didn’t, because they had a non-controllably timed work test (I don’t want to cheat, somebody probably will cheat, and I am not super-well qualified for the position anyway, so I would really need to shine in the work test → not worth applying). (I admit that this deterrence probably doesn’t happen often.)
Great online tools for running controllably timed work tests exist.
(I realize that it is not always possible to control the time limit, e.g. when the task is too long to be done in one sitting. I have no recommendation for what to do then, other than that I think Jonas Vollmer’s comment in this thread seems reasonable).
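For what it’s worth, the core mechanism such tools enforce is simple: start a server-side clock when the candidate opens the test, and reject submissions after the window closes. A minimal sketch (all names and the in-memory storage here are hypothetical):

```python
# Sketch of server-side enforcement for a controllably timed work test.
from datetime import datetime, timedelta

TIME_LIMIT = timedelta(hours=2)
start_times = {}  # candidate_id -> datetime when they opened the test

def open_test(candidate_id):
    # Record the start time once; reopening the page doesn't reset the clock.
    start_times.setdefault(candidate_id, datetime.utcnow())

def submit(candidate_id, answers):
    started = start_times.get(candidate_id)
    if started is None:
        raise ValueError("test was never opened")
    if datetime.utcnow() - started > TIME_LIMIT:
        return {"accepted": False, "reason": "time limit exceeded"}
    return {"accepted": True, "answers": answers}
```

Because the clock runs from first access rather than relying on self-reporting, the time an applicant can spend with the test materials is bounded regardless of what they report.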
Huh. I’m really surprised that they find this useful. One of the main ways that Wave employees’ productivity has varied is in how quickly they can accomplish a task at a given level of quality, which varies by an order of magnitude between our best and worst candidates. (Or equivalently, how good of a job they can do in a fixed amount of time.) It seems like not time-boxing the work sample would make it much, much harder to make an apples-to-apples quality comparison between applicants, because slower applicants can spend more time to reach the same level of quality.
Two points that speak against this view a bit:
It seems easier to increase the efficiency of your work than its quality. All else equal, I’m tentatively more interested in people who can do very high-quality work inefficiently than in people doing mediocre work quickly – because I expect that the former are more likely to eventually do high-quality, highly efficient work.
Some people tend to get very nervous with timed tests and mess up badly; it seems good to give them the opportunity to prove themselves in a less stressful environment.
My current view is to ask for both timed and untimed tests, and to make the untimed tests very simple/short (such that you could complete them in 20 minutes if you had to, and there’s very little benefit to spending >2h on them).
In software engineering, I’ve found the exact opposite. It’s relatively easy for me to train people to identify and correct flaws in their own code–I point out the problems in code review and try to explain the underlying heuristics/models I’m using, and eventually other people learn the same heuristics/models. On the other hand, I have no idea how to train people to work more quickly.
(Of course there are many reasons why other types of work might be different from software eng!)
I expect that good software engineers are more likely to figure out for themselves how to be more efficient than they are to figure out how to increase their work quality. So it’s not obvious what to infer from “it’s harder for an employer to train people to work faster”—does it just mean that the employer has less need to train the slow, high quality worker?
Good point, agree it depends on the type of work.
I hadn’t noticed the discrepancy before between the conversation notes test and their other tests, which generally read something like this:
“This test should require somewhere between X and Y hours of work; please send us your work, even if it’s incomplete, after Y hours.”
Adjusting the notes test seems like a good step, or at least asking applicants how much time they spent, so that there’s a clear tradeoff between speed and thoroughness (maybe it’s the case that a slightly messy four-hour test gets as good a score as a better eight-hour test, and Open Phil would be happy to consider both, or something like that).
It’s much more understandable to me for the grants to have labor-intensive processes, since they can’t fire bad performers later so the effective commitment they’re making is much higher. (A proposal that takes weeks to write is still a questionable format IMO in terms of information density/ease of evaluation, but I don’t know much about grant-making, so this is weakly held.)
In case it isn’t clear: EA is funding constrained. (I can’t imagine it will ever not be. Have we run out of imagination?) It is similarly difficult to raise funding as a budding EA organisation. The idea that EA is not funding constrained came from surveying established orgs that are indeed talent constrained, but in the meantime I could easily name 5 (even 10) startup EA charities that are perpetually seeking funding they could instantly turn into jobs.
Even if that wasn’t the case, higher pay would allow orgs to attract better talent, outsource more, etc. There’s almost always a use for additional funding.
Especially if you compare the damage of too little with the damage of too much money, earning to give is still a great idea.
The fact that some organizations cannot get funded does not seem like strong evidence that EA as a whole is funding-constrained. Given that other organizations can raise large funds, an alternative explanation is that donors think that the expected impact of the organizations that cannot get funding is low. I also don’t think it follows from your argument that earning to give is a great idea.
I agree that a failure to get funded does not imply funding constraints, but I definitely do think that many EA orgs, especially early ones, could benefit from more people with money looking to donate. There tends to be a large information asymmetry where you need to establish a clear track record and/or have someone spend a lot of time evaluating you before you can get funded. This is hard for early organizations to make happen.
I also think there are other systematic failures in EA where the best orgs do not always get fully funded.
If the best projects already have enough money, and are hiring significantly fewer people than the total number of potential full-time EAs, it’s possible that funding the next “tier” of worse-than-best projects is worthwhile. And it’s not clear that we have the money to do that.
I would disagree with that line of reasoning—as donors, we should be seeking to channel money into the most effective places it can do good, not trying to spread out the opportunity to do good to different individuals within the EA movement.
So if donor A can create 10 utils by donating $1 to Org Z, or create 5 utils and one new EA job by donating $1 to Org Y, the choice seems to be clear. My understanding is that our current research suggests that this is the case. (I also agree with Arepo, however, about donors potentially being irrational.)
When people say all of the top orgs have enough money, my interpretation is that I can’t really create any value at all by donating to them. That is, donor A can create 0 utils by donating $1 to Org Z, because doing so doesn’t actually allow Org Z to scale in a meaningful way.
If I also can’t work at Org Z, then donating to Org Y looks like my next best option.
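To make the toy numbers in this thread explicit (a sketch only; the utils-per-job figure below is made up, and it is exactly the number people would disagree about):

```python
# Toy comparison of the donation options discussed above.
def donation_value(direct_utils, jobs_created=0, utils_per_job=0):
    return direct_utils + jobs_created * utils_per_job

org_z = donation_value(direct_utils=10)      # Org Z with room for more funding
org_z_full = donation_value(direct_utils=0)  # Org Z once fully funded
org_y = donation_value(direct_utils=5, jobs_created=1, utils_per_job=3)

print(org_z, org_z_full, org_y)  # 10 0 8 -> the ranking flips once Z is full
```

Whether Org Y wins depends entirely on whether Org Z’s marginal dollar is worth 10 utils or roughly 0, which is the crux of the disagreement.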
That makes sense — on a second look, I misread your first comment. Absolutely agree that the community shouldn’t have a go-big-or-go-home mentality, i.e. it shouldn’t be seen as impossible to do good if you can’t get an ultra-selective job at one of these organizations.
I agree with Stefan, but I do in fact think (for other reasons) that earning to give is a great idea, and one of the main approaches EAs should consider. I think it’s a smaller constraint overall than hiring, but still a very important constraint.
It’s not entirely obvious how that looks different from EA being funding constrained. No donors are perfectly rational and they surely tend to be irrational in relatively consistent ways, which means that some orgs having surplus funds is totally consistent with there not being enough money to fund all worthwhile orgs. (this essentially seems like a microcosm of the world having enough money to fix all its problems with ease, and yet there ever having been a niche for EA funding).
Also, if we take the estimates of the value of EA marginal hires on the survey from a couple of years back literally, EA orgs tend to massively underpay their staff compared to their value, and presumably suffer from a lower quality hiring pool as a result.
One quick recommendation for what applicants can do that might be useful for employers (speaking from an employer’s perspective): Proactively share previous work trials. Whenever applicants did this, this provided me with additional valuable information that helped me decide whether someone should advance to the next stage.
Is this allowed? Normally when I do work for a company, the copyright remains with that company.
I think you’re correct, but my impression is that most EA orgs will happily agree to this.
For one of the work tests I did for Open Phil, the instruction sheet specifically asked that the work test not be shared with anyone. That might have been intended as a temporary restriction, I’m not sure, but I’m not planning on sharing it unless I hear otherwise.
I would ask Open Phil whether they’d be okay with you sharing it with the organization you’re applying to (ideally only once you’re past the first stage, and only if the other organization expressed interest).
As an employer, I still think about this post three years after it was published, and I regularly hear it referenced in conversations about hiring in EA. The experiences in it clearly resonated with a lot of people, as evidenced by the number of comments and upvotes. I think it has meaningfully influenced the framing of many hiring rounds at EA organizations over the past three years.
Were you able to get good feedback as to why you weren’t hired?
I got feedback for most of the roles where I reached the last stage of the application process, usually not for others. Usually the feedback was that my application was good (that I would have passed the hiring bar), but that someone else was better. I also got more specific feedback, but that was different every time (which makes some sense, since the positions were different, the people evaluating me were different, and I tried to not repeat mistakes that I had previously gotten feedback on).
Discussion between 80k & Peter Hurford in the comments here seems relevant.
Thank you for putting all of this together, I think it is a very useful post. I spent many years career coaching and advising people who were applying for jobs and I always stress this:
If you are not landing the job you want, it is for one of two main reasons:
a) You are not applying for the right jobs for you (you may be underqualified, overqualified, transitioning fields etc.)
b) Or you are in fact very well qualified but you are not good at presenting those qualifications to others, especially in a limited time and space.
Have you received any kind of feedback that would help you understand which was more common for you? If you think your case is a) then you need to find a different set of jobs to apply to—not better or worse, just different. If it is more b) then you may want to work on making sure that the reasons why you know you would be good at the job you are applying to translate into your resume, cover letter, interview etc.
I actually think it is quite common for people in the EA community to find themselves at a) because the jobs that are available to our community are very limited in numbers and scope. I think we need to expand the way we think about EA careers such that more people can find jobs that they enjoy which are also impactful.
You may also find yourself in b) because of cultural differences and bias. For example, the job market in the US is very competitive, and everyone is used to heavily inflating their resume and presenting themselves very confidently—which is not typical in some parts of Europe or Asia. Many recruiters and hiring managers also have a bias against foreign applicants, so part of the task is to present your qualities such that they come across accurately even at this disadvantage.
I once heard the advice that if you are a donor who finds an opportunity that is worth EA money, you should just donate to it, even if you think that you could find a better opportunity after more research. Because if everyone only looks for the very best available opportunity, everyone will have to spend much more time evaluating many projects, or evaluations will be less deep.
The same advice could be adapted to hiring in some cases. If you find an ops person who is good enough to do ops for some EA org, you should consider hiring them, even if you think you could find a better candidate after more searching. Otherwise, orgs will have to spend more time evaluating candidates, and candidates will have to spend more time applying.
In other words, having a lower threshold for hiring could be cooperative with other EA orgs in some game-theoretical scenarios. Of course, if we go too far in this direction, we will no longer have a good grasp on where the threshold for being hired is, and the best people might not get hired. And there are other complications. But EA orgs could go a little bit in this direction.
Also, if there is a shortage of ops people within EA, and you find a person who is good for your org but wouldn’t apply to other orgs, you should be more willing to hire them, because you would increase the pool of EA ops people.
Having some experience with hiring, it might be some consolation that you did actually provide value to the EA orgs you applied to by giving them:
- more practice hiring
- more exposure to candidates, to figure out who they want
- a better intuitive grasp of the talent landscape
It’s unfortunate that this has such a large opportunity cost and that you bore so much of it. But the unfortunate reality on the hiring side for any org is that we often need to interview additional candidates we know we won’t hire, because otherwise we won’t know enough to trust that the people we do hire are the right people. Of course, we don’t know in advance which candidates we will and won’t hire (otherwise it would be extremely unfair to everyone to do the interviews), but at least in this case your interview time helped EA-focused organizations gain that knowledge, rather than other orgs whose values you may be less aligned with, and whose institutional learning from interviews that don’t result in hires you would therefore value less.
Thanks for sharing, and thanks to everyone for adding their own perspectives to this discussion. I would like to offer my view, which is informed by having gone through difficult periods myself.
I think it’s a mistake to get too invested in the view that getting a job at an EA org is the only way to make a difference in the world, or indeed to ourselves. In order to stay healthy, I think we need to realise that there are other things that matter than EA, such as our relationships and our wellbeing. If we get too emotionally attached to the idea of making a huge, cosmic difference to the world by working at an EA org, we can get carried away and forget that we also have a responsibility towards ourselves. As an added benefit, if we take care of ourselves, we are happier, more resilient, can take on more risk, and are more willing to work towards long-term goals, all of which help us eventually have a higher impact in life.
Agree about the importance of looking after yourself.
I get the feeling that it’s especially important to talk about mental health in the EA community. And not because it’s a promising area for effective altruism :), but because I think EAs are probably more vulnerable than average to mental health problems, like anxiety. Not only in the job hunting area, but I think it goes along with the whole concept of EA. If your main aim in life is to do the most good you can in the world, and you are extremely rigorous about making sure you’re living up to that, you’re opening yourself up to a lot of potential self-judgement.
My view at the moment is that I want to strike a balance between focusing on others (e.g. through effective altruism) and focusing on myself (e.g. through mindfulness / meditation / exercise / chilling out) and my immediate environment / relationships (e.g. through socialising ;) ). Agree about the added benefit too.
Just wanted to give a quick note of encouragement that earning to give for long-term future or EA meta causes can be very impactful. According to a survey of community leaders last year, donations to the Long-Term Future and EA Meta EA Funds were usually considered even more cost-effective than donations to the Animal Welfare and Global Poverty funds.
Lately I’ve been playing with the idea of EA being better held as a side project than as a professional pursuit.
As you note, it can be really difficult to carve out a career as a professional EA. Plus, being a professional brings with it incentive effects that seem hard to mitigate.
Agreed. I’ve always been quite hesitant to mix my income with my beliefs, because that dynamic makes it very hard to change your mind.
Thanks to OP and commenters for sharing their experiences! Very helpful!
Based on this feedback, seems like it would be valuable for 80K to (significantly?) cut back the degree to which they curate their job board and/or for someone to create a list of all open jobs that could be broadly defined as EA. Either (or both) of these steps should help to address the issue of too many candidates chasing too few jobs.
There actually is a listing like this: the EA Work Club (https://eawork.club/). Although it’s aiming at EA community jobs, rather than all possible jobs which could be defined as EA, which is maybe what you were after.
Is there a website/central place where these kinds of community-made resources and 80K’s list of recommended orgs (such as the one for global health on the 80k wiki) are linked? I have only ever seen them mentioned in comments, but I imagine someone on the 80K website or even on the forums could miss them.
I’m not sure if this is the kind of place you were thinking, but the EA work club is linked to on the 80,000 Hours Job Board page (https://80000hours.org/job-board/) - at the bottom under ‘Other places to find vacancies’
Hi Michelle, thanks for your response! I hadn’t noticed that at the bottom of the job board page. I think that’s a good place for it—but I was also thinking of the other resources of organizations outside of EA that could be good places to skills-build. I know there are a few unofficial job boards (like Tom Wein’s) on various cause areas that could be very useful if they were located more centrally.
Thanks Michelle, great to learn about this resource, for some reason I’d thought it was only volunteer stuff. Will start posting jobs there going forward and hope other employers will too.
I would still like to see something broader exist as well… That’s the resource I’d want if I were an EA job seeker, since it’d let each candidate use their own perspective on what counts.
The revamped EA Hub will be aiming to help address this problem.
Edit: comment and link removed as no longer relevant/applicable.
Thanks, helpful to know about!
It appears you are extremely good in your field (16th out of 6,000). A heart surgeon in the US has a median salary of $448k. As much as 80k claims that being a doctor is pointless, imagine the earning-to-give potential of this. It is close to the average expected salary of top traders ($600k) in the game.
Whoever downvoted this, would you care to tell me why? Is the data not accurate? Am I hating on 80k too much? What is it?
Thank you for sharing your experiences! I agree, this post is very valuable.
Your post is quite encouraging. I smiled while reading it when I came to this point: “Yeah, when we said that we need people, we meant capable people. Not you. You suck.” If people from educationally privileged countries like the US find this difficult, what will be the fate of people like us who trained in Nigeria? You are already educationally and positionally disadvantaged. I remember applying to 80,000 Hours for a one-on-one career conversation on how to make my career more impactful, and I was surprised I got rejected.
I would suggest that the presence of such a large amount of talent means that projects like the EA Hotel are more vital, since they increase the amount of talent that can be deployed.
Disagree. Talent can be deployed in businesses, government, academia, and not-specifically-EA charities. You’d have to convince me that the work being done at the EA Hotel was better than top 1% of opportunities in those other areas before I thought, “Yes, it’s important for the community to find different ways to use all this excess talent.”
Edit: The EA Hotel could be worthwhile for other reasons; I just don’t think it should be used as a make-work program.
Perhaps that’s true, but I’m not sure if the EA Hotel residents are going to be in those top 1% opportunities otherwise. It’s not that they lack the talent, it’s just that it takes a certain kind of personality to be willing to do all the negative sum games* and goodharting and hoop-jumping to get those opportunities in the first place.
Taking myself as an example, I can’t stand subordinating myself to someone who seems to be unaligned or who seems to have bad judgment. I can’t stand competing. This means that my options are either to create my own work or to do something part-time minimum-effort and spend my life on something that isn’t my career.
Given the personal sacrifice of pursuing a career in EA, I wouldn’t be surprised if many EA’s are like that.
*like job applications
I’m not saying everyone should go into this, just that a portion should
This recent SMBC reminded me of this post ^^ http://www.smbc-comics.com/comic/computer