Thanks for proposing this idea Dinesh. I’m supportive of the core idea here. More feedback is (obviously) better than less, and even the simplest version of what you’re proposing would be useful to implement. I hope some Orgs read this and act on it.
That said, I want to share some immediate concerns and thoughts I had (not quite hot takes, but not super well thought through either).
For context: I’m someone who went through the High Impact Professionals program in late 2024 and have applied, unsuccessfully, to several EA roles over the last few years. I’m currently in the middle of another round of applications. So I’m very much the target audience for this post. But note also that frustration at this long and (so far) unsuccessful process might colour my opinions.
The purpose of a system is what it does.
I’d like to believe the framing here: hiring teams think feedback is too time-consuming, and the solution is showing them it can be automated and easy. But maybe the incentives are simply well aligned as it is right now?
The benefits of the “broad funnel, strong filter” approach accrue largely to orgs. They get a large, talented applicant pool and can be highly selective. The cost, on the other hand — candidates spending months in low-feedback application cycles, burning time, money, and emotional energy — is borne largely by the applicants.
The post suggests that with feedback, applicants will apply more selectively, reducing total volume. I think most hiring teams would see that as a theoretical benefit at best. A smaller, more targeted applicant pool sounds nice in the abstract, but it also means potentially missing the unexpected candidate who wouldn’t have applied if they’d self-selected out.
The broad funnel gives orgs a lot of optionality. Orgs either have, or need to create, efficient processes for screening large volumes, and they seem to be doing so (automated video interviews, standardised questions, LLM use, etc.).
To be clear, I’m not imputing bad intentions here. I believe most EA hiring teams would genuinely like to provide better feedback. But in a world of tradeoffs, I can understand why this might fall to the bottom of the list of priorities.
Statistical feedback is less actionable than it appears
I think the percentile rank for applications is informative at the extremes: if I’m in the 95th percentile and still didn’t get the role, I can assume I was competitive and it just had to go one way or the other. Bad luck. And if I’m in the 10th percentile, maybe I seriously misjudged my fit/skills/experience for the job.
But what if you end up in the messy middle?
What applicants actually need to know is why they scored where they did. Was it their experience? The framing? A mismatch between what the applicant emphasised and what the committee was actually looking for?
Is it clear what the orgs actually wanted?
That question is harder to answer than it looks because job descriptions tend to be written broadly, partly by design (broad funnel), partly because writing a really precise job description is hard and time-consuming, and honestly, probably not the best use of an org’s person-hours.
Here’s a concrete example from my own experience. I was told informally — through a conversation — that a particular role was really looking for someone with a strong entrepreneurial streak, someone who’d demonstrated the ability to start and execute projects on their own initiative. Was that in the job description? Technically, yes, but it was buried among a dozen other qualities. And it was probably followed by one of those “even if you don’t meet all the criteria, we encourage you to apply” sentences.
In a situation with such blurry criteria, a percentile rank tells you where you stood but not necessarily what would have moved you up. You might see a decent percentile and think “I’m close, I just need to polish my application a bit,” when the real issue is that you’re optimising for the wrong criteria.
I don’t have any other strong solutions to offer. But I would love for us (as a community) to explore how orgs can be more explicit about what actually drives their decisions, because the current ambiguity imposes large costs across the whole system of orgs and applicants.
One thing that might help enormously is normalising brief, even formulaic, qualitative feedback at rejection. It doesn’t need to be personalised or lengthy. Even something like “Your application was strong on X but we needed more evidence of Y” — a single sentence would be useful. I know some orgs already do this at later stages (and I’m grateful to the ones that have given me feedback). The question is whether it can be extended earlier in the process, even in a templated way.
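To illustrate how little machinery this would take, here’s a minimal sketch of generating that one-sentence templated feedback from whatever per-criterion rubric scores a hiring committee already records. The criterion names, the 1–5 scale, and the threshold are all hypothetical, not any org’s actual rubric.

```python
# Hypothetical sketch: one-sentence templated feedback from rubric scores.
# Criterion names, the 1-5 scale, and the threshold are illustrative only.

def templated_feedback(scores: dict[str, int], threshold: int = 3) -> str:
    """Name the applicant's strongest criterion and, if any criterion
    falls below the threshold, the weakest one."""
    strong = max(scores, key=scores.get)
    weak = min(scores, key=scores.get)
    if scores[weak] >= threshold:
        return f"Your application was strong on {strong}; no single criterion stood out as weak."
    return f"Your application was strong on {strong}, but we needed more evidence of {weak}."

print(templated_feedback({
    "domain knowledge": 4,
    "entrepreneurial initiative": 2,
    "writing": 3,
}))
# -> "Your application was strong on domain knowledge, but we needed
#     more evidence of entrepreneurial initiative."
```

The point is that once rubric scores exist, the marginal cost of a templated sentence per rejection is close to zero.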
To Dinesh’s credit, the post is pushing in the right direction. Any feedback is better than the current status quo. And if the statistical approaches outlined here are what’s realistic in the short term, I’d take them over nothing.
On the single-sentence qualitative feedback, I do think this would be very helpful. Even simple, direct statements such as “Not qualified due to lacking x/y/z core requirement” would distinguish candidates who miss a hard requirement from those who are merely weaker, but not fundamentally flawed.
Right now, everyone who isn’t hired is passed over in favor of stronger applicants. Obviously. To be honest, I want to know whether my application was even read. As a mid-career person trying to transition, I have a growing cynicism that many EA orgs will simply filter me out based on my age and the fact that I haven’t worked at some elite firm or gone to a prestigious university. And that’s fine, I guess, but it would be helpful to get direct feedback to let me know whether I’m wasting my time applying in the first place.
I can just earn-to-give and do my own thing, it won’t hurt my feelings if I’m excluded from the clique.
Great feedback, thanks for sharing, Vinoy!
I partially agree with your first point — a broad funnel provides orgs with optionality. They can find good candidates even when the hiring processes are not optimized, e.g. when job descriptions are not well written.
That said, I do feel the advantages of a broad funnel fall off as the number of applications per position scales from 10–100x to >100x. This is more intuition than argument — I don’t have a good rationale for it. The closest I can offer: every application pool is a biased sample of the population distribution. Good sampling would produce a pool whose mean is close to the hiring needs, while simply broadening the funnel pulls the sample back toward the population distribution. Practically, that means a lot of manual filtering work to sub-select candidates for the next stages. In those scenarios, the incentives of hiring committees and candidates might be better aligned.
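To make that intuition concrete, here’s a toy simulation (all numbers invented) in which applicant “fit” is drawn from a population distribution. A fully broad funnel accepts applicants near-uniformly, so its pool tracks the population mean; self-selection shifts the pool’s mean toward the hiring bar, which is the work the funnel would otherwise have to do by manual filtering.

```python
import math
import random

random.seed(0)

# Toy model: "fit" scores for the whole candidate population.
population = [random.gauss(50, 15) for _ in range(50_000)]

def applicant_pool(selectivity: float, n: int = 5_000) -> list[float]:
    """Each candidate applies with probability increasing in fit (logistic).
    selectivity=0 means everyone is equally likely to apply (broad funnel)."""
    pool = []
    while len(pool) < n:
        fit = random.choice(population)
        p_apply = 1 / (1 + math.exp(-selectivity * (fit - 50) / 15))
        if random.random() < p_apply:
            pool.append(fit)
    return pool

mean = lambda xs: sum(xs) / len(xs)
broad, self_selected = applicant_pool(0.0), applicant_pool(3.0)
print(f"population {mean(population):.1f} | "
      f"broad funnel {mean(broad):.1f} | "
      f"self-selected {mean(self_selected):.1f}")
```

Under this (entirely made-up) model, the broad pool’s mean sits on top of the population’s, while the self-selected pool’s mean is noticeably higher — i.e., feedback that encourages self-selection does part of the screening for free.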
You mentioned that feedback at the 95th or 10th percentile is useful, but not at the 50th. I’d argue even mid-pool feedback is actionable — it tells the applicant they scored around average and were unlikely to be counterfactual for that position. Agreed, it doesn’t tell the applicant how to improve, but it does convey how they fared in the applicant pool.
However, I completely agree that real life is too noisy/messy and applicants need more information on why they were scored in a particular way. Saulie’s comment shows a nice way this can be done—providing some information about scores with respect to the key requirements of the job.
My optimistic hope is that once orgs start doing a simple percentile rank-based feedback, they can be pushed towards more feedback with respect to the key application requirements. This is slowly moving the bottom line, from no feedback to something useful. And an automated, easy-to-setup percentile rank system might provide just a low enough barrier to get the ball rolling...
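For what it’s worth, the automated step could be very small. A hypothetical sketch, assuming the org already has a list of numeric application scores (the scores and email wording here are invented):

```python
# Minimal sketch of automated percentile-rank feedback.
# Real systems would pull scores from the org's applicant-tracking tool.

def percentile_rank(score: float, all_scores: list[float]) -> float:
    """Percentage of applicants scoring strictly below `score`."""
    below = sum(s < score for s in all_scores)
    return 100 * below / len(all_scores)

def feedback_line(name: str, score: float, all_scores: list[float]) -> str:
    pct = percentile_rank(score, all_scores)
    return (f"Dear {name}, your application scored higher than "
            f"{pct:.0f}% of the {len(all_scores)} applicants for this role.")

scores = [55, 60, 62, 70, 71, 74, 80, 82, 90, 95]
print(feedback_line("Applicant", 74, scores))
# -> "Dear Applicant, your application scored higher than 50% of the
#     10 applicants for this role."
```

Something this simple won’t answer the “why” question above, but as a default rejection template it sets the floor well above “no feedback at all”.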