I am a software engineer considering applying for EA-aligned roles as a career move in the not-too-distant future (still deciding between going for AI safety and doing the same kind of SWE work I already do, but at an effective org), and the thing I found most surprising in this article was:
Is your bottleneck “people don’t apply”?
This is the most common problem for EA orgs, as far as I know.
From the developer side, I read articles like this one (https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really) and my conclusion was “Despite being well above average as an engineer according to objective career metrics like titles over time and compensation, as someone not at a FAANG level, I probably won’t meet the bar to get hired at an EA organisation.” This may be one reason why EA orgs say “If in doubt, apply”, but it’s still a bit daunting.
I’d be interested to know whether that information (from 2019) still applies, since I also saw this (https://forum.effectivealtruism.org/posts/CdYniXZ53dyPupRiY/is-it-no-longer-hard-to-get-a-direct-work-job), but the comments there muddy the picture a lot and leave me uncertain how accurate it is.
I think this is the most useful part to try to get rid of.
If someone (the EA orgs / the devs / someone) could make it way less daunting for devs to apply (even if they’d be rejected), I think that would be hugely valuable.
Do you agree?
Any ideas for how to approach that problem? (Even if no immediate solutions come to mind, how might they be found?)
I think the daunting part is the “being rejected” part, more than any actual difficulty in applications. I don’t think making the process 30 seconds instead of five minutes would have made me any more likely to pull the trigger. I’ve sent in a few applications anyway because I wanted to check my current ability against the needs of the organisations, and the process itself was pretty fast.
This may not be generalisable across other people (and I’m not the kind of person who really needs it, since I did send in the applications anyway), but I see two parts to rejection.
1) The social aspect of “Oh no, rejection by a human being”, which is unreasonably strong for most people. (There’s a reason asking someone out is terrifying for a lot of people.) This can also manifest as “I don’t want to waste someone’s time if I’m way below the standard”.
2) The psychological aspect of failing at something.
Of these, I suspect (1) is stronger than (2) for most individuals. A potential solution might be some sort of automated screen as a first round, such that individuals who fail it are never actually rejected by a human, while individuals who pass now have enough buy-in, and enough signal of their suitability, to be more likely to progress to the next step. At the very least, I can imagine some people saying “Well, I’m sure I’m not <org> material, but it would be nice to take the test and see where I stand!” who wouldn’t want to waste an actual human’s time by sending in an application in the same circumstances. And some of those people might be closer to <org> material than they think.
For this to work, you would need:
* A very clear idea of what the standard is
* Encouragement that if someone meets this standard, you want them to apply
* A way for candidates to disqualify themselves without ever talking to a human.
Anthropic’s call to action had at least two and a half of these. The standard wasn’t 100% objective, in the sense that I could unambiguously pass or fail it right now, but it’s pretty damn close.
(I wonder if this could work with grants too: questions with clear acceptance criteria, plus encouragement that anyone who meets those criteria has met the threshold and should apply for a grant.)
Of course, this comes with its own difficulties: an official public automated test is easier to game, whereas an unofficial objective standard like “If you can complete 3 of 4 problems in a LeetCode competition within the time limit, talk to us” is less authoritative and thus less effective. So I’m not sure what the best way to go about this is, or whether it would be effective for a bunch of not-me people.