I think the daunting part is the "being rejected" part, more than any actual difficulty in applications. I don't think making the process 30 seconds instead of five minutes would have made me any more likely to pull the trigger. I've sent in a few applications anyway because I wanted to check my current ability against the needs of the organisations, and the process itself was pretty fast.
This may not be generalisable across other people (and I'm not the kind of person who really needs it, since I did send in the applications anyway), but I see two parts to rejection.
1) The social aspect of "Oh no, rejection by a human being", which is unreasonably strong for most people (there's a reason asking someone out is terrifying for a lot of people). This can also manifest as "I don't want to waste someone's time if I'm way below the standard".
2) The psychological aspect of failing at something.
Of these, I suspect 1 is stronger than 2 for most individuals. A potential solution to this might be some sort of automated screen as a first round, such that individuals who fail it never actually get rejected by a human, and individuals who succeed now have enough buy-in and signal of their suitability to be more likely to progress to the next step. At the very least, I can imagine some people would say "Well, I'm sure I'm not <org> material, but it would be nice to take the test and see where I stand!" but they wouldn't want to waste an actual human's time by sending in an application in similar circumstances. And some of those people might be closer to <org> material than they think.
For this to work, you would need:
* A very clear idea of what the standard is
* Encouragement that if someone meets this standard, you want them to apply
* A way for candidates to disqualify themselves without ever talking to a human.
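To make this concrete, here's a minimal sketch of what such a self-serve first-round screen could look like. Everything in it (the published bar, the scoring, the messages) is invented for illustration; a real screen would use whatever objective test the org actually trusts.

```python
# Hypothetical self-serve screen: a candidate checks themselves against a
# published standard and gets an immediate answer. Nobody who falls short
# is ever rejected by a human; nobody who passes has to wonder whether
# they're wasting anyone's time. All names and numbers are made up.

PUBLISHED_BAR = 3    # e.g. "solve 3 of 4 timed problems, then talk to us"
TOTAL_PROBLEMS = 4

def screen(problems_solved: int) -> str:
    """Give instant, private feedback against the published standard."""
    if problems_solved >= PUBLISHED_BAR:
        # Passing doubles as encouragement to apply: buy-in plus signal.
        return "You meet the bar. Please apply; we want to hear from you."
    # Failing candidates self-disqualify without a human ever seeing it.
    return (f"You solved {problems_solved}/{TOTAL_PROBLEMS}; the bar is "
            f"{PUBLISHED_BAR}/{TOTAL_PROBLEMS}. Try again whenever you like.")

if __name__ == "__main__":
    print(screen(3))  # meets the standard: explicit invitation to apply
    print(screen(1))  # below the standard: private, no human rejection
```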
Anthropic's call to action had at least two and a half of these. The standard wasn't 100% objective in the sense that I can unambiguously pass/fail it right now, but it's pretty damn close.
(I wonder if this could work with grants too: publish clear acceptance criteria for the questions, and make it explicit that anyone who meets those criteria has met the threshold and should apply for a grant.)
Of course, this comes with its own difficulties: an official public automated test is easier to game, whereas an objective standard like "If you can complete 3 of 4 problems in a LeetCode competition within the time limit, talk to us" is less authoritative and thus less effective. So I'm not sure what the best way to go about doing this is, or if it would be effective across a bunch of not-me people.