Just want to add my voice to the many who have already said: thank you for sharing this. It must have taken some courage.
My own experience has been similar (though I’m far less qualified than the OP).
I’ve recently graduated from one of the top ~10 universities worldwide, after investing heavily in EA throughout my studies. While a student, EA was the biggest thing in my life. I read a lot, and several of my EA peers told me I stood out as particularly well-informed about EA topics, especially long-termist ones. Eventually I contributed some of my own research too. I also invested enormous amounts of time in student EA projects. Many people, including ones I thought well-informed about the talent landscape, fully expected that I would go work for an ‘EA organisation’. Naively, I believed it too.
Over the last seven months, I’ve made over 20 unsuccessful job applications (I keep a spreadsheet). This has increased the severity of my depression and anxiety. Over time, I began to shed my identity as an EA, no doubt as a self-defence mechanism. Now I’m very disillusioned about my ability to contribute to the long-termist project.
Thanks for sharing. As someone who spends a lot of time trying to fill EA meta/longtermist talent gaps — e.g. by managing Open Phil RA recruiting, helping to match the strongest applicants we don’t hire to other openings, and by working on field-building in AI strategy/policy (e.g. CSET) — hearing stories like yours is unnerving.
What changes to the landscape, or hiring processes, or whatever, do you think would’ve made the most difference in your case?
I’m also curious to hear your reaction to my comment elsewhere about available paths:
There are lots of exciting things for EAs to do besides “apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers,” including:
- “keep doing what you’re doing and engage with EA as an exciting hobby”
- “apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren’t at one of a handful of explicitly EA-motivated orgs”
- “do earn to give for a while, gaining skills, and then maybe transition to more direct work later, or maybe not”
There are also other paths that are specific to particular priority causes. For AI strategy & policy, for example, I’d be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also, the worst that happens is that you’ll be in extremely high demand and highly paid).
(My answers might be very different from the ones anonymousthrowaway would give.)
This isn’t a direct answer to your question, but maybe it helps to illuminate the dynamics that led me to make that many applications before moving on to other EA things. I personally did NOT think that jobs at EA organisations had clearly higher expected value than my other top options (working in biosecurity basically anywhere, or upskilling in machine learning via a master’s degree). (There were a very few exceptions, like Chief of Staff for Will or RA at Open Phil, which I thought were outstandingly good.)
Then why did I apply to so many positions?
1. I thought EA organisations were really talent-starved and needed me. I also thought it would be easy to get a job. This is clearly my fault, and there would have been ways for me to get a more accurate picture of the situation. But I went to several EAGs, talked to dozens of people, and read most of 80,000 Hours’ advice, and I think it was quite easy to come away from all that with the impression I had.
2. I got pretty far in a few application processes, so that encouraged me to continue applying.
3. Somehow, EA positions were the positions I heard about. They were the positions 80,000 Hours emailed me about, that I got invitations to apply for, that I found on the websites I was visiting anyway, and so on. Of course, the bar for applying to such a position is lower than when you first have to find positions yourself. 80,000 Hours never emailed me about a position in biosecurity.
4. EA was the thing I knew most about. Because I knew the EA-sphere well, it was much easier to work out which organisations to apply to than in, say, biosecurity (a vast, scary field I had very little knowledge of). If I apply to Open Phil, that is definitely at least good. If I pick a random biosecurity organisation, I first have to do my homework to figure out whether it is promising.
5. Psychologically, my other top options (biosecurity anywhere, most likely not long-term-focused, or upskilling in ML) felt like “now I have to work really hard for some time, and maybe later I will be able to contribute to X-risk reduction”. So it would still feel like I hadn’t quite made it. In contrast, working for a major EA org (in my imagination) felt like “Yes, I have made it now. I am doing super valuable, long-termism-relevant work.”
6. Working at an EA organisation was the only thing I could hope to do RIGHT NOW that would contribute towards the long-termist agenda; for the other options, I would first need upskilling. So if discount rates on work are high, that makes EA orgs more attractive (see the toy calculation after this list). (However, I don’t believe that discount rates on “valuable work in general” are anywhere near as high as the rates often cited in the EA-sphere, so this did not make me think that jobs at EA orgs were clearly better than my alternatives.)
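Here is a minimal sketch of that discount-rate point. All the numbers (the 20-year career horizon, the two example rates, and the assumption that two years of upskilling make later work 50% more valuable) are invented purely for illustration:

```python
# Toy comparison: discounted value of starting direct work now vs.
# upskilling for two years first, under an annual discount rate r on
# "valuable work". All parameters are made-up assumptions.

def discounted_value(annual_value, start_year, horizon, r):
    """Total discounted value of doing `annual_value` of work per year
    from `start_year` until `horizon` (exclusive)."""
    return sum(annual_value / (1 + r) ** t for t in range(start_year, horizon))

HORIZON = 20  # years of career considered (assumption)

for r in (0.05, 0.30):  # a modest vs. a steep discount rate (assumptions)
    work_now = discounted_value(1.0, start_year=0, horizon=HORIZON, r=r)
    # Assume two years of upskilling make subsequent work 50% more valuable.
    upskill_first = discounted_value(1.5, start_year=2, horizon=HORIZON, r=r)
    print(f"r = {r:.0%}: work now = {work_now:.1f}, upskill first = {upskill_first:.1f}")
```

Under these made-up numbers, upskilling first wins at a 5% annual discount rate (about 16.7 vs 13.1) but loses at 30% (about 3.8 vs 4.3), which is the sense in which high discount rates favour taking a direct-work job right now.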
Finally, I think the fact that I expected it to be easy to get hired by an EA organisation really is quite crucial. Points 3–6 mainly became relevant against this background. It’s the difference between
“I could apply to an EA org, or do this other thing where I first have to look for options myself, in a field I know less about, which is more inconvenient and psychologically more challenging.”
and
“I could apply to an EA org. It’s pretty unlikely that I’ll get hired. Might as well try a few times, but in the meantime, let’s see what options there are in biosecurity.”
Now, without clearly defining WHAT it even is that we want to improve, here are a few ways I think things could be improved (a rather loose collection of thoughts).
To improve 1):
Communication:
- Write more posts like the OP :-)
- Improve communication about talent constraints (already happening)
Sadly, I don’t have many very concrete suggestions here. But I do think this is crucial: it could make the difference between the two points of view in quotation marks above.
Several things could be tweaked in application processes, mostly related to more transparency (some orgs are already doing this very well; I thought e.g. this job description was good):
- Say how many applicants you had last time.
- If you send personalised invitations, say how many people you are sending them to.
- Clearly state upfront how involved the application process will be. If you don’t know, give your best guess and a worst-case estimate.
To improve 3):
Hard to say. There is probably no capacity for this at the moment, but if 80,000 Hours had emailed me about positions in biosecurity, I would have applied. Being presented with one position and only needing to evaluate whether it is good is much easier than having to find a position within a large, scary field. They probably really don’t have the capacity for this (which would include figuring out which positions in biosecurity are good), but maybe as a long-term vision.
To improve 4):
Good career guides would be very valuable. These don’t even need to come from 80,000 Hours; they might come from somebody who has researched a field for their own career. Maybe we could have EA Grants for people writing up their findings? A good career guide for biosecurity, especially one that acknowledges that countries other than the US exist ( :-) ), would have been so, so great.
Regarding biosecurity roles from 80,000 Hours: while they don’t seem to be actively promoting any jobs in that field on their job board, they do maintain a list of “organizations we recommend” in biosecurity on this page, which might be useful for getting a head start on learning about these orgs’ work.
Thanks, that all makes sense to me. Will think more about this. Also still curious to hear replies from others here.
[deleted]
Thanks for +1ing the above comment. I’d be keen to hear your reply to this comment, too.
I feel for you :(
It would really suck if this is just a temporary supply/demand imbalance. I could even imagine us having the opposite problem in a few years’ time, if EA organizations grow exponentially and find that the EA talent pool gets used up (due to a mixture of people getting hired and people getting discouraged). After all, only ~3 years ago 80k was emphasizing we should focus more on talent gaps and less on funding gaps, and now we have stories like yours.
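To illustrate that worry (with completely invented numbers; this is a sketch of the dynamic, not a forecast), here’s a toy simulation in which openings at EA orgs grow exponentially while the applicant pool replenishes linearly and shrinks through hiring and discouragement. Every parameter below is an assumption of mine:

```python
# Toy model of the supply/demand crossover described above.
# All starting values and rates are invented, purely illustrative.

pool = 500          # engaged applicants today (made-up)
openings = 20.0     # open roles today (made-up)
GROWTH = 1.4        # openings grow 40% per year (made-up)
INFLOW = 100        # newly engaged applicants per year (made-up)
DROPOUT = 0.15      # fraction of unhired applicants who disengage each year

for year in range(1, 9):
    openings *= GROWTH
    hires = min(openings, pool)                    # orgs hire from the pool
    pool = (pool - hires) * (1 - DROPOUT) + INFLOW # discouragement + inflow
    print(f"year {year}: openings ~{openings:.0f}, pool left ~{pool:.0f}")
```

With these invented numbers, openings outgrow the remaining applicant pool within about eight years, at which point the community would face exactly the opposite problem: more roles than engaged, qualified people to fill them.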