Rethinking application feedback
Summary
A career transition into the EA ecosystem often needs plenty of patience and a sustained motivation to do good. In a system with abundant applicants and scarce feedback, candidates can lose months of potential impact in iterating, estimating their fit, and absorbing rejections with minimal guidance.
The EA community can multiply its impact by simply providing detailed feedback on job applications. In this post, I will outline a few approaches that would require minimal time from hiring committees while delivering rich feedback to the applicants.
My key assumptions are:
The “broad funnel – strong filter” approach used by many EA organizations, by design, attracts orders of magnitude (100–1,000×) more applicants than positions.
A large fraction of applicants are testing their personal fit for different roles and cause areas.
Feedback requests from applicants help them find where they can have the most impact, rather than make them question the organization’s decision to reject their application.
The feedback provided for these job applications is often negligible, especially at the initial stages.
Based on these assumptions, I argue that:
The current feedback is too meager to be useful; applicants have to apply to several positions to gather enough information to make decisions.
The collective time lost by applicants is non‑trivial and scales with the breadth of the application funnel.
It is easy to provide rich, non‑individualized feedback that does not require significant time and effort from the hiring team.
Rich feedback can reduce both the time spent by applicants on job applications and the total number of applications processed by the hiring committee, because it helps applicants better evaluate their fit for positions.
This problem is important to solve because programs like HIP and the CEA Career Bootcamp bring in experienced professionals who lose counterfactual impact in navigating low‑feedback application processes.
I provide potential approaches for providing rich feedback to applicants. In addition, I made a small proof-of-concept app to showcase these approaches.
I end the post with a survey to collect concrete data from both applicants and hiring committees. The survey is an attempt to validate my assumptions (which I will share in another post, if I get sufficient responses).
Note: The post is written from an applicant’s perspective—that is, what is best for the applicants and their impact. The situation might look very different from the perspective of the hiring committee. I tried my best to balance both perspectives by aiming to minimize the time and effort of hiring committees while still providing rich feedback to applicants.
Problem
Here is an example scenario:[1]
An EA organization that an applicant wants to join is hiring, but the applicant is unsure of their personal and/or skillset fit.
The job description invites applicants from diverse fields (broad funnel).
EA advice recommends quick tests by applying and assessing based on outcome.
The job description asks applicants to spend no more than 1 hour on the application.
In practice, applicants spend 2–3 hours (time limits are underestimated and do not account for resume tweaks, talking to people, etc.).
Most applicants receive an email saying 1,000+ applications have been received and individual feedback is not possible. These applicants spent 2–3 hours with no clear feedback on their fit—all they learn is that this position is not a good fit at this time.
Applicants are told that this is normal and that they need to keep applying, maybe take more courses, volunteer, etc.
Some applicants move to later rounds—work tests, interview, etc. This is the only definitive feedback of fit they receive, as often no feedback is shared regardless of how far they get in the application process.
After several months to a year of applications, some applicants secure a position, others pivot to something else.
This example illustrates that the application process is essentially a brute force approach with sparse feedback. Often, the only feedback the applicant receives is how consistently they advance into different stages of the application process.
Collectively, most time is lost in the application stage (refer to table). The broadness of the funnel implies that most who apply end up not receiving any clear feedback on how their application was perceived. This in turn invites a lot more applicants as they search for feedback, further increasing the collective time lost.[2][3]
| Stage | Feedback received | Expected time spent (hour) | Actual time spent (hour) | Total hours (100x / 1000x) | Days lost (100x / 1000x) |
|---|---|---|---|---|---|
| Application (+ Resume) | Yes / No | 1 | 2 | 200 / 2000 | 20 / 200 |
| First Interview (10%) | Sparse (interviewer reactions) | 1 | 2 | 20 / 200 | 2 / 20 |
| Work test (10%) | None (typically) | 2 | 2 | 20 / 200 | 2 / 20 |
| Final Interview (2%) | Sparse (interviewer reactions) | 1 | 2 | 4 / 40 | 0.4 / 4 |
| Selected applicants (1%) | — | — | 8 (cumulative) | 8 / 80 | 0.8 / 8 |
| Rejected applicants (99%) | — | — | 2.44 (average) | 244 / 2440 | 24.4 / 244 |
(Note: I assume a day to be 10 hours—as it is essentially a loss of a work day)
TL;DR: Providing feedback at the application stage will have the largest impact on the collective time spent, possibly reducing the total number of applications in the long run.
Solution
If a large chunk of applicants in the pool are estimating their personal and/or skillset fit, the most important feedback for them would be their standing in the applicant pool. As an example, if they know that they consistently fall in the top 10% of the applicant pool (90th percentile or above), they know that their profile fits the job description, even if their application is passed over.
The information about an applicant’s position in the pool falls in the sweet spot of being rich for the applicant while being low-effort for the hiring committee.
Below, I will provide three ways this can be done, starting from the low-hanging fruit and moving toward richer, more descriptive feedback. In addition, I have showcased these approaches through this simple proof-of-concept app.
Example: Applicant Dataset
Assumption:[4] For every stage in the application process, the hiring committee decides on a rubric for scoring applicants. The committee then scores (or uses algorithms to score) each application. The scores are sorted, and applicants with the top x% of scores advance to the next stage.
For this dataset, I assume that the hiring process has three stages of selection:
Application answers + resume
Work test + first interview
Final interview
At each stage, applicants are scored on the components of that stage, and a score (rounded to one decimal place) is computed using a weighted-factor model. This score determines who advances to the next stage.
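As a minimal sketch of how such a weighted-factor score could be computed (the component names and weights below are hypothetical, not taken from any real org's rubric):

```python
# Hypothetical rubric: component names and weights are illustrative only.
WEIGHTS = {"resume": 0.4, "written_answers": 0.6}

def stage_score(component_scores, weights=WEIGHTS):
    """Weighted-factor score for one applicant at one stage, rounded to one decimal."""
    total = sum(weights[c] * component_scores[c] for c in weights)
    return round(total, 1)

# An applicant who scored 7.0 on their resume and 8.0 on written answers:
print(stage_score({"resume": 7.0, "written_answers": 8.0}))  # 0.4*7.0 + 0.6*8.0 = 7.6
```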
Here is an example table of how the dataset looks (table is sorted by total score):
Approach 1: Simple Statistics
Total applications: 1000
Positions available: 2
Applicant’s rank: 106
Applicant’s percentile: 89.55%
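Once per-applicant scores exist, numbers like these take only a few lines of code to generate. A sketch, using a made-up pool of evenly spaced scores for illustration:

```python
def simple_stats(applicant_score, all_scores, positions):
    """Approach 1: an applicant's rank and percentile within the pool."""
    n = len(all_scores)
    rank = 1 + sum(s > applicant_score for s in all_scores)  # rank 1 = best
    percentile = 100.0 * sum(s < applicant_score for s in all_scores) / n
    return {
        "total_applications": n,
        "positions_available": positions,
        "rank": rank,
        "percentile": round(percentile, 2),
    }

# Toy pool: 1000 evenly spaced scores from 0.1 to 100.0.
pool = [round(i * 0.1, 1) for i in range(1, 1001)]
print(simple_stats(89.5, pool, positions=2))
```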
Approach 2: Visual Statistics
In addition to the above, provide a histogram of applicant scores as well as the applicant’s standing.
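Even without a plotting library, a text histogram that marks the applicant's bin would convey most of the same information. A sketch (the score pool is made up):

```python
from collections import Counter

def score_histogram(all_scores, applicant_score, bin_width=10):
    """Approach 2: a text histogram of pool scores, marking the applicant's bin."""
    bins = Counter(int(s // bin_width) * bin_width for s in all_scores)
    rows = []
    for lo in sorted(bins):
        marker = "  <- you" if lo <= applicant_score < lo + bin_width else ""
        rows.append(f"{lo:>3}-{lo + bin_width:<3} {'#' * bins[lo]}{marker}")
    return "\n".join(rows)

pool = [48, 52, 55, 61, 63, 67, 71, 74, 77, 81, 85, 92]
print(score_histogram(pool, applicant_score=81))
```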
Approach 3: Descriptive Statistics
Combine approaches 1 and 2 to provide descriptive feedback.
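One way this could be automated, assuming per-stage score records exist (the stage names and numbers below are hypothetical):

```python
def descriptive_feedback(applicant_scores, pool_by_stage):
    """Approach 3: a per-stage percentile summary for one applicant."""
    lines = []
    for stage, pool in pool_by_stage.items():
        score = applicant_scores.get(stage)
        if score is None:
            lines.append(f"{stage}: did not advance to this stage")
            continue
        pct = 100 * sum(s < score for s in pool) / len(pool)
        lines.append(f"{stage}: score {score}, percentile {pct:.0f} (pool of {len(pool)})")
    return "\n".join(lines)

applicant = {"Application + resume": 7.6, "Work test + first interview": 6.8}
pools = {
    "Application + resume": [5.0, 6.0, 7.0, 7.6, 8.0, 8.5, 9.0, 4.5, 5.5, 6.5],
    "Work test + first interview": [6.0, 6.8, 7.5],
    "Final interview": [7.0, 8.0],
}
print(descriptive_feedback(applicant, pools))
```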
How This Helps
The above approaches provide applicants with feedback in the following ways:
Approach 1 uses simple statistics to provide the applicant with a measure of their counterfactual impact—i.e., how many applicants have similar or better applications.
Approach 2 adds to this by providing more information about the applicant’s location in the distribution. It also provides a lot more information about the distribution (e.g., are there many applications similar to mine? How far off am I from the selected candidates and is it heavy-tailed?).
Approach 3 provides rich information about the application process and the applicant’s standing in each stage. This is the most helpful from the applicant’s perspective—it provides a lot more information on where they can improve. Additionally, this rich feedback helps candidates make decisions about fits and career paths earlier on in the process, saving them (and the hiring organizations) a lot of time (money, and energy).
I believe that this will have several effects on the hiring system:
The time spent by each applicant is no longer lost as they are provided with rich feedback on how to improve their scores for the particular job application.
It pushes organizations to generate and share a clear rubric that helps applicants update beliefs about their fit and test it using applications.
It changes the applicant’s perspective from applying widely to applying selectively and improving themselves at every step (with their location in the applicant distribution a clear metric of their improvement).
It makes the hiring process very open and transparent, automatically increasing the value of the organization (halo effect).
Concerns
One of the main concerns organizations have about providing rich feedback is that they believe it is time-consuming. One of the aims of this post is to show that this is not the case—it is easy to provide rich feedback that is completely automated.
Another critical concern is the worry that opening up the hiring process and sharing detailed feedback might increase the possibility of gaming this system. This might be a fair concern—it might be possible to learn more about the scoring process from the detailed statistics provided in approach 3.
That said, it is expected that only a small proportion of people will try to game the system. Here, I feel that the positive benefits of feedback outweigh the small increases in this proportion, if any.
Throughout this post, I assume good faith on the candidates’ part—the only use of the provided feedback is self-improvement. And I intuitively feel like this could be a valid assumption for the EA community. Please do let me know (comments or direct communication) how valid my assumptions are, along with evidence for and against my assumptions.
Final thoughts/considerations
I am one of several experienced professionals exploring a transition into the high-impact space. From my understanding, every year programs like the Impact Accelerator Program (which I was part of) introduce 300-400 professionals to EA and high-impact organizations. More programs, like the Centre for Effective Altruism’s Career Bootcamp, are starting up, suggesting a growing inclination among experienced professionals to increase their impact through their careers.
From my conversations with several others, I find that the biggest hurdle for these professionals (including myself) is navigating the low-feedback environment. After building several skills throughout my career, it seems an incredibly inefficient way to figure out where I can apply my skills in an impactful manner.
Some organizations provide feedback to applicants who clear the first few stages of their application process. For example, Ambitious Impact provided feedback on my Charity Entrepreneurship application after my final interview. This was very useful for me—it provided direct feedback on my strengths and weaknesses. Additionally, I could ask specific questions on where I excelled, fell short, and how to improve myself for future applications. This is something I wish I had for all my applications. However, I know that it is not possible to have such rich, personalized feedback for all my applications as time is a valuable resource.
Practically, rich feedback does not necessarily mean personalized/individualized feedback—I strongly believe that it is possible to provide automated feedback that is very useful for the applicants (80-20 rule). Additionally, I think this is an easy problem to solve as most EA organizations already use clear and rational methods to score and evaluate applications. Sharing the applicants’ scores as descriptive statistics is a ripe, low-hanging fruit that is ready to harvest and that can have multiplicative effects for those aiming to improve their impact through their careers. I hope I have managed to convey that in this post.
Survey
I made a survey to gather concrete data to test my assumptions. If you are an applicant or part of a hiring committee, I would be very grateful if you could fill out the survey (it should take 10-15 min for applicants, and 5 min for hiring teams). Also, please do reach out if you have more thoughts or feedback on the post.
Survey link: https://docs.google.com/forms/d/e/1FAIpQLSdgZKgkYlwbODeGT-3XDYrJyAyv0GhZj5iK2hTikcgTnvLrgg/viewform
Acknowledgements: I am very grateful to Sruthi Balakrishnan, Nina Friedrich, Ivan Muñoz, and Mike X. Cohen for providing feedback on my draft.
This is based on job application rates I have encountered and/or heard from fellow applicants. ↩︎
In the worst-case scenarios, applicants fall back on the “spray and pray” method to find out where their applications get hits (feedback). ↩︎
Note that this does not even account for the time lost by the hiring committee in vetting extra applications that arise due to a lack of feedback. ↩︎
I expect this to be valid for many, if not all, EA orgs. Please let me know if this is not valid or if I am oversimplifying this process. ↩︎
Couldn’t agree more, thank you for posting this! I definitely agree that orgs are underestimating time spent on applications, especially high-quality applications.
On sharing data with applicants: there was one EA job I applied to that shared some insightful data, and I wish all orgs would share something like this at each stage: “We received over 1,700 applications for a single role, which is an unprecedented number for our organisation. By progressing to the work test stage, you were in the shortlist of the top 20 applicants.”
Not exactly related, but I would also be grateful if all EA work tests 1.) specified whether time spent reading instructions is included in the allotted time to complete the test; 2.) specified whether it’s permitted to take breaks or must all be completed in one sitting; 3.) were two hours or less to complete.
Thanks for proposing this idea Dinesh. I’m supportive of the core idea here. More feedback is (obviously) better than less, and even the simplest version of what you’re proposing would be useful to implement. I hope some Orgs read this and act on it.
That said, I want to share some immediate concerns and thoughts I had (not quite hot takes, but not super well thought out either).
For context: I’m someone who went through the High Impact Professionals program in late 2024 and have applied, unsuccessfully, to several EA roles over the last few years. I’m currently in the middle of another round of applications. So I’m very much the target audience for this post. But note also that frustration at this long and (so far) unsuccessful process might colour my opinions.
The purpose of a system is what it does.
I’d like to believe the framing here: hiring teams think feedback is too time-consuming, and the solution is showing them it can be automated and easy. But maybe the incentives are simply well aligned as it is right now?
The benefits of the “broad funnel, strong filter” approach accrue largely to orgs. They get a large, talented applicant pool and they can be highly selective. The cost, on the other hand — candidates spending months in low-feedback application cycles, burning time, money, and emotional energy — is borne largely by the applicants.
The post suggests that feedback means that applicants will apply more selectively, reducing total volume. I think most hiring teams would see that as a theoretical benefit at best. A smaller, more targeted applicant pool sounds nice in the abstract, but it also means potentially missing the unexpected candidate who wouldn’t have applied if they’d self-selected out.
The broad funnel gives orgs a lot of optionality. Orgs either have or need to create efficient processes for screening large volumes, and they seem to be doing so (automated video interviews, standard questions, LLM use, etc.).
To be clear, I’m not imputing bad intentions here. I believe most EA hiring teams would genuinely like to provide better feedback. But in a world of tradeoffs, I can understand why this might fall to the bottom of the list of priorities.
Statistical feedback is less actionable than it appears
I think the percentile rank for applications is informative at the extremes: if I’m in the 95th percentile and still didn’t get the role, I can assume I was competitive and it just had to go one way or the other. Bad luck. And if I’m in the 10th percentile, maybe I seriously misjudged my fit/skills/experience for the job.
But what about if you end up in the messy middle?
What applicants actually need to know is why they scored where they did. Was it their experience? The framing? A mismatch between what the applicant emphasised and what the committee was actually looking for?
Is it clear what the orgs actually wanted?
That question is harder to answer than it looks because job descriptions tend to be written broadly, partly by design (broad funnel), partly because writing a really precise job description is hard and time-consuming, and honestly, probably not the best use of an org’s person-hours.
Here’s a concrete example from my own experience. I was told informally — through a conversation — that a particular role was really looking for someone with a strong entrepreneurial streak, someone who’d demonstrated the ability to start and execute projects on their own initiative. Was that in the job description? Technically, yes, but it was buried among a dozen other qualities. And it was probably followed by one of those “even if you don’t meet all the criteria, we encourage you to apply” sentences.
In a situation with such blurry criteria, a percentile rank tells you where you stood but not necessarily what would have moved you up. You might see a decent percentile and think “I’m close, I just need to polish my application a bit,” when the real issue is that you’re optimising for the wrong criteria.
I don’t have any other strong solutions to offer. But I would love for us (as a community) to explore how orgs can be more explicit about what actually drives their decisions, because the current ambiguity incurs huge costs within the whole system of orgs and applicants.
One thing that might help enormously is normalising brief, even formulaic, qualitative feedback at rejection. It doesn’t need to be personalised or lengthy. Even something like “Your application was strong on X but we needed more evidence of Y” — a single sentence would be useful. I know some orgs already do this at later stages (and I’m grateful to the ones that have given me feedback). The question is whether it can be extended earlier in the process, even in a templated way.
To Dinesh’s credit, the post is pushing in the right direction. Any feedback is better than the current status quo. And if the statistical approaches outlined here are what’s realistic in the short term, I’d take them over nothing.