Having been through many hiring rounds as a candidate, and having done some hiring for small nonprofits, I found this a really interesting read!
The uncertainty piece feels especially important. In my experience, it is not just that performance or fit sometimes does not pan out, but that organizations are often reluctant to deal with the consequences when it happens. Avoiding hard conversations, role changes, or exits tends to make that uncertainty far more costly over time, for both the person and the organization.
I am also curious about how work tests are developed. Your mention of I/O psychology stood out. Given how much weight work tests carry and how demanding they can be for candidates, I have often found myself wondering mid-test how they were designed, and how often people with relevant expertise are involved in shaping them.
Relatedly, I was intrigued by your suggestion of backtesting. I am curious whether it could be embedded into things orgs already do to reduce the workload, like capturing expectations at the offer stage and revisiting them at three-month or annual check-ins to see which signals actually showed up in the role. I could see this being incredibly useful information to gather!
My experience with EA orgs as a candidate has been mixed. I have had some really thoughtful interactions, but also confusing and low-EQ ones, including clearly AI-generated responses, which has felt jarring in spaces that emphasize care and long-term thinking, and made me pause about what the process is actually optimizing for.
Anyways, all to say I appreciate you naming how much leverage is hiding here, and I am very curious what the impact could look like for both candidates and organizations if more people took hiring this seriously!