4 Ways to Give Feedback to Job or Grant Applicants
I have previously posted about my belief that all EA organisations should provide candidates with feedback. Some people responded by suggesting that providing feedback to every job or grant applicant would be very costly and take a lot of staff time.
I have a more flexible view of feedback! I think a lot of things can be considered feedback, and it’s worth considering how each organisation can provide more feedback and more value to the community without creating disproportionate burdens on themselves.
I’ve listed ways of giving feedback in order from the least work for organisations to the most. All of these options can be mixed and matched. There are also downsides to some, which I don’t go into, but I would hope hiring managers would also consider the upsides of including several of these methods at different points in their hiring or grant application review process.
1. Telling applicants whether or not they progressed to the next stage or got the job.
Knowing how far you’ve progressed with an organisation is feedback and is useful to candidates. It’s important to promptly update candidates who were unsuccessful as well as successful candidates; in general, EA organisations are good at this.
If it’s been longer than expected but you haven’t made a decision yet, it can be helpful to update applicants, especially for grants, as people may start to believe you dislike their project idea.
2. Providing concrete information on your hiring process.
Small pieces of factual information can help applicants understand how much they should update from your acceptance or rejection.
We invited 60 applicants to complete this one-hour work test. 42 applicants completed the test, and we invited the top 20 for an initial interview.
In this case, the applicant knows they were in the top half of work test results, which is useful; knowing that is quite different from knowing you were in the top 5% or the top 80%.
Your grant application was determined to be complete and within the scope of our fund. After careful review, we have decided not to offer a grant at this time.
In this situation, the grant applicant knows that it’s worth applying to the same grantmaker on similar topics, which is valuable information.
3. Giving standardized responses for why people didn’t progress to the next stage.
Interviewers, grantmakers, and assessors can use a pass/fail rating or a Likert scale across clear categories to assess applications and tell applicants how they did.
We assessed your CV and cover letter for understanding of our organisation’s mission, working knowledge of Python and data analysis techniques, and relevant work experience. We felt you had a good understanding of our organisation’s mission and relevant work experience. We did not see evidence of knowledge of Python and data analysis techniques, so we will not be progressing with your application. Thank you for applying and please feel free to apply for roles with us in the future.
In this situation, the applicant knows the organisation was assessing three categories (hopefully categories that were mentioned in the job advertisement!) and that they met two of them. If they want to apply again for a similar job, they’ll need to learn Python first, or make sure their CV mentions that they already know it!
My employer, the Civil Service, tells applicants in advance which categories they’ll be assessed on at interview (for example Delivering at Pace), provides a rubric for how each category will be assessed, and then provides scores at the end (averaged across 2-4 interviewers). You can learn more about Civil Service interviews here.
4. Personalizing feedback for candidates (either all candidates, or those who ask).
Of course, the most helpful and most costly feedback is personalized to the individual. This can be combined with providing scores or pass/fail in standardized categories, or it can stand on its own.
Our interviewers noted they would have liked to hear more about your previous leadership and collaboration experiences.
This is helpful because it’s very actionable for future interviews and is probably directly connected to the reason the person didn’t get hired. Phrasing things as if the person may well have the relevant experience, but you didn’t get to see evidence of it, can help you avoid situations where the applicant comes back and says, “Actually I have plenty of leadership experience!” That said, I have heard that some organisations (Ought was mentioned) provide feedback partly because they want applicants to correct them if they’ve missed important information.
Your grant application was very clear about your idea, but we didn’t get a clear idea about who the team executing this idea would be. If you decide to apply to a future round, either with an iteration of this idea or with another idea, we’d like to understand that better.
This would be a super useful piece of feedback for the person receiving it. It’s extremely actionable and strongly signals the organisation’s openness to future grant proposals.
It’s worth deciding in advance under what circumstances you’re willing to provide feedback and how much feedback you’re willing to provide, and then communicating that clearly to candidates. It’s more worthwhile for people interested in doing good to apply for your grants or jobs if they know they’ll receive some feedback during the process.
What have I missed? What other methods of giving feedback have you seen and liked or disliked? Please comment below with your views.
The candidate pool was much stronger than expected
This one can be sent to every applicant and still provides very useful information. It tells me that my expectations of the hiring bar may have been correct in the past, but the market has changed and I should adjust them.
For this one, concreteness is essential. One hiring manager phrased it along these lines: “We had to reject many exceptional candidates that would have been instant hires a few years ago. Everyone did well on our take-home test that we thought impossible to complete within the 3 hours.”
Literally any feedback about final stage interviews.
I worry a ton about my final-stage interview performance. Partly this is a me-issue but I think there are structural reasons why final-stage interviews are so nerve-wracking.
They’re the most important to do well in. I can be a marginal candidate in every stage before that. The 20th best resume can still get an HR phone screen. The 10th best HR phone screen can still get a work trial. The 5th best work trial can still get a final interview. But only the top 1-3 candidates in a final interview can realistically expect an offer.
They’re the type of interview I have the least experience with. By definition, final-stage interviews are at the end of the funnel, so I’m going to have a lot fewer of them.
They’re oftentimes my first chance to interact with my potential coworkers and managers. And unless I already have contacts in that organization, I won’t know the professional norms or idiosyncratic expectations. These criteria are usually implicit and hard to figure out on my own.
One piece of feedback I really liked went like, “Your interview was very good and I have no doubt you could learn the skills very quickly. We just had someone else who had already done the work.”
These are great concrete examples, thank you so much for adding them!
This would be very helpful. It’s often confusing for the applicant as they have no idea what to change or work on. For me, I’ve been rejected by every EA fellowship I’ve ever applied for (woop woop, highscore) but I don’t know how to improve. Twice, orgs have legit emailed me out of the blue saying they like my blog/forum content and asking me to apply, and then rejected me. I have no idea what stage I failed at. Was my application poorly written? Were my research suggestions poor? Is it a CV issue? Am I over or under qualified? Who knows. Certainly not me. So I’m sat here, still shooting off applications, with no idea which part is letting me down. I sometimes get “your application was really strong but unfortunately...” in the rejection email, but I’m never sure whether that’s being nice or actual feedback.
They say it’s expensive or time-consuming to give feedback, and that’s a fair comment, but compared to the possible upside I think it’s a sound investment. I’ve collaborated with a bunch of really talented EAs who gave up applying to fellowships because of this. Deadlines are often extended because orgs want more applications, but maybe they’d get more applications if their attrition rates were lowered by giving people (especially early-career people) an idea of what areas they need to work on.
You can ask third parties to review your applications, but really only the orgs themselves know why something was rejected.
I don’t have any research or evidence to cite, but I know that I would rather live in a world in which applicants get more feedback. Thanks for writing this, and especially for emphasizing that there can be a spectrum of how much or little feedback, rather than embracing the false dichotomy of either no feedback or personalized messages.