I wanted to anonymously (for obvious reasons) share this article by Vox: “Job interviews are a nightmare—and only getting worse”
The strenuous, drawn-out, and impersonal interview process described in this article is exactly what I have experienced while applying to EA jobs. Dozens of interviews and countless trials over weeks and weeks of not hearing back or knowing what is next. I am sharing this article today to draw the EA community’s attention to this issue, in the hope that more consideration can one day be given to applicants.
Thank you.
I wouldn’t call it predatory—in fact, every significant work test / trial I’ve done has been paid, which is remarkably progressive!
However, I empathize with your pain: interviewing for EA jobs is a rigorous and rather impersonal gauntlet. As far as I know, this is a feature, not a bug. It’s frustrating, but I try to cut them some slack. There are many applicants, EA orgs are almost always short-staffed, and they’re trying to avoid bias. Most EAs want an EA job, and these hiring processes are optimized to test that desire.
Knowing this, I don’t bother applying for an EA job unless I truly think that my application can be competitive and that I actually want the job (not a bad heuristic to follow in general).
I understand your frustration, and have myself been in your shoes a few times. I think that many employers/recruiters in EA are aware of these downsides, as I have seen a variety of posts by people discussing this in the past. Additionally, as Samuel points out in a separate comment, many if not all of the work-trials/etc. I’ve participated in have been compensated, which seems quite reasonable/non-predatory.
Unfortunately, for some positions/situations I don’t think any process will satisfy everyone; they all seem to have downsides. I can especially speak to my experience applying to positions at non-EA think tanks and elsewhere, where I’ve suspected that most of the interview/review processes are ridiculously subjective or plainly ineffective. Setting aside the process of selecting which applicants proceed to the interview stage (which I suspect is under-resourced and flawed), I’ve had multiple interviews where I came away thinking “are you seriously telling me that’s how they evaluate candidates? That’s how they determine if someone is a good researcher? Do they not apply any scrutiny to my claims / are my peers just getting away with total BS here [as I’ve heard someone imply on at least one occasion]? Do they not want any more concrete details about the relevant positions or projects, even after I said I could describe them further?”
Many of the EA-org interviews I’ve done may not feel “personal,” but I’ll gladly take objectivity and skill-testing questions over smiles and “tell me your strengths and weaknesses.”
That being said, I do sympathize with you. I find it much more frustrating to be turned down by an EA org after so much effort, but in the end I would still prefer to see this kind of deeper testing/evaluation more often.
Sorry you had this experience. I would love it if you named and shamed the specific orgs! It would be a public service.
It all depends on who you get; I had a good experience with 80,000 Hours’ trial, because the person running the hiring campaign tried to keep us in the loop as much as possible, even telling me how many other candidates were still in the running at each stage. That was great because it helped set my expectations.
Credit to Bella Forristal for that. Yay transparency!
Which are the orgs with a rubbish application experience, if you can say without de-anonymising yourself?
<3
This should be the low-hanging fruit of transparency: every organization should post the expected hiring process (including work trial duration and compensation) up front.
Sometimes plans change, but any unsuccessful applicant asked to do extra interviews or tasks beyond what was stated should be compensated for them.