Hiring Process and Takeaways from Fish Welfare Initiative
Who should read this: This post will likely be most useful to employers directly involved in a hiring process, although the Recommendations for Applicants section should be useful for most job applicants. Job applicants may also find it interesting to learn about the employer side of the process.
We (the co-founders of Fish Welfare Initiative) recently completed our hiring process for our first new full-time employees: a Research Analyst and an Animal Welfare Specialist (more on that distinction later).
As neither of us had previous hiring experience, we set out to build a process based on the best available evidence on how to hire effectively, objectively, and kindly. The following is what we found and learned.
We hope that this process and the linked templates will be useful to your organization and will save you some of the large time investment required to create a new process. If you have any questions or comments, feel free to comment below or contact us.
Big Takeaways
Probably the best hiring advice we received came from the CEO of a GiveWell-recommended charity. He looks for candidates who are “smart, nice, and really want the job.”
Your hiring process is a reflection of your organization. To reflect FWI, we aimed to make our hiring process evidence-based, compassionate, and unconventional/innovative, while still requiring some dedication from applicants.
If you’re not already, you should use Calendly or another scheduling software to schedule all interviews.
We found EA Facebook pages, our website, and personal recommendations to be the best places to find talented applicants.
Score everything with a template, so that applicant materials and interview answers are all scored quantitatively. This will help you increase objectivity. Input these scores for each round into one master spreadsheet.
With interviews, we updated away from asking every candidate the same somewhat shallow questions: asking fewer, more probing, semi-structured questions provided more valuable information.
Don’t be afraid to gather more information about a candidate: additional calls, emails, and interviews can all be helpful.
Don’t be a jerk to your applicants. Too many employers are. Your applicants will appreciate you for how you treat them and leave with a good impression of your organization.
Resources We Used
We relied heavily on the following resources and highly recommend looking them over. We agree with most of the recommendations they make, and have tried to restrict this post to primarily our own original takeaways so as not to duplicate work.
Takeaways from EAF’s Hiring Round (which heavily inspired the creation and structure of this post)
Effective Strategies for Equity and Inclusion—Sentience Institute
Additionally, although it was published towards the end of our hiring process, Notes on hiring a copyeditor for CEA is also a good resource.
We are very grateful to the organizations and individuals who created these resources.
The Role We Hired For
We were originally looking for a researcher who had prior knowledge and (ideally) credentials in fish and animal welfare. As this was the first hire FWI was going to make, we also wanted someone who could take a leadership role in shaping the organization.
We ended up advertising for two separate roles: a Research Analyst and an Animal Welfare Specialist.
Advertising for Two Separate Jobs
Initially, we were unsure whether we wanted someone who was an early-career generalist (flexible and value-aligned), or someone who was later-career and had more domain knowledge and credentials (although possibly less flexibility and value alignment). In order to attract both types of applicants (even though we only intended to hire one person), we advertised for two separate roles: a Research Analyst and an Animal Welfare Specialist. We believe this allowed us to attract a much broader pool of applicants than we otherwise would have, and we were sufficiently pleased with the finalist candidates that we ended up hiring one for each role after all.
Takeaway: A role’s name and qualification requirements significantly impact who applies for it.
Recommendations for Applicants
For more in-depth recommendations, see posts by the Effective Altruism Foundation and Charity Entrepreneurship.
Here are two more recommendations we had for applicants that weren’t listed by EAF or CE:
Check your spam folder. For several applicants, our emails ended up in spam.
Show interest in the organization early and often. The candidates we were most excited about were the ones who reached out to us early, and who regularly responded to our emails and asked us questions throughout the process.
Application Process Overview
Our application process consisted of six broad stages:
Initial application (includes filter questions, resume, cover letter, optional writing sample)
15-minute short interview
Test task (up to 8 hours)
45-minute long interview
Reference check
Calls with finalists and answering remaining questions
Within 4.5 weeks, we received 82 applications. Two months after the close of our initial application, we made two full-time offers, both of which were accepted. The following table provides further detail on our application stages (the time estimates are our own):
We did not use any preset cutoff score to determine the number of candidates who would proceed to each round. Rather, we created rough cutoff scores after scoring at least some of the applicants. We may think more about using preset scores and criteria in the future (we expect using these could make the process more objective), although it is difficult to know what score makes a good cutoff before having calibrated yourself by reviewing several applicants.
Over the course of the process, 5 candidates withdrew their applications. We expect that we could have slightly lowered this number with a speedier application process. Although we felt there was sufficient value in each of our stages to justify their time cost, we will think more in future hiring rounds about how we can shorten the application length (perhaps by running stages concurrently for different candidates, instead of sequentially).
Internally, we spent roughly 150 hours on the application process, divided between our then two-person team.
We estimate that our application process cost our applicants a combined total of roughly 616 hours of work. We take this as a good reminder to be cautious about how much time we ask of applicants.
Application Process in Detail
Stage 0: Spreading the Word
The following table lists where our applicants heard about the job:
What We Learned:
Posting on EA Facebook pages (e.g. Effective Altruism Job Postings) is very helpful for finding talented EA applicants.
Asking particular people who know many potential candidates is also very helpful, and will produce a disproportionate number of qualified applicants. One of our 8 final candidates learned of the job via a tweet from a famous fish researcher.
Veganjobs.com seems to be a very common way for people in the animal protection movement to find new jobs.
Stage 1: Initial Application
In the job ad, we asked applicants to complete a Google Form. This required them to upload their resume/CV, cover letter, and an optional writing sample. It also included several filter questions, the answers to which we could easily read and use to rule out some of the candidates.
What We Learned:
Filter questions were useful, especially logistical questions. In the future, we will also likely ask:
“Are you willing to relocate to <insert city name>?”
“When can you begin? We will only be considering candidates who can begin by <date>.”
A question getting at whether they were involved with animal rights (this told us a lot about an applicant’s motivations and helped us shape future questions; we wanted someone who was fiercely passionate about animals but still able to work collaboratively with industry).
Candidates from certain countries (especially in Europe) often include a headshot in their CVs. In order to avoid potential bias, in the future we will explicitly ask candidates on the form not to do this.
We asked for specific salary expectations, and found this to be useful in immediately filtering out some people whom we could never afford. However, if you are recruiting internationally make sure to specify a currency for the salary.
It is probably not a good idea to have optional materials in the first stage: it just made scoring more complicated. In the future, we will likely request additional materials such as a writing sample at a later stage.
Stage 2: Short Interview
The purpose of the short interview was to rule out candidates by screening them on knowledge of fish, communication skills, past history, tolerance for instability, and value alignment. Interviews lasted 15-30 minutes, and were largely structured.
We did several practice interviews before interviewing real applicants, and we found these helpful.
What We Learned:
Interviews are hard to get right. Nearly everyone will say that they are good at the thing you ask about, and will be able to provide some vague examples. The trick is being able to endure some social discomfort by pressing candidates on their examples in order to assess how impressive the examples (and the candidate) actually are.
We initially graded the applicant on each question. Later in the round, we changed to having a list of general traits (e.g. value alignment, domain expertise, communication skills) and adding a +1 or −1 each time the applicant said something relevant to that trait (for instance, for “value alignment”, mentioning that altruism was an underlying motivation in their life would get them a +1, while showing little passion when discussing their past work with fish welfare would get them a −1). We think this latter approach worked better (a minimal code sketch of the tallying appears after this list).
We were not very pleased with our interview questions, as we had so many (about 15) that we lacked time to ask follow-up questions. In the future, we will likely stick with just 3-6 questions in the first interview, and press candidates for further detail on each.
See Hire with Your Head for helpful advice on interview questions, which we wish we had started following sooner.
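As a concrete illustration of the trait-tally scoring mentioned above, here is a minimal sketch in Python. The trait names and the +1/−1 mechanic come from this post; the function name and the example interview notes are hypothetical, not our actual tooling.

```python
from collections import defaultdict

# Traits we tallied during interviews (from the post above).
TRAITS = {"value alignment", "domain expertise", "communication skills"}

def score_interview(notes):
    """Sum the +1/-1 tallies per trait from a list of (trait, delta) notes."""
    tallies = defaultdict(int)
    for trait, delta in notes:
        assert trait in TRAITS and delta in (1, -1)
        tallies[trait] += delta
    return dict(tallies)

# Hypothetical notes from one interview: the applicant cited altruism as a
# core motivation (+1) but showed little passion about past welfare work (-1).
notes = [
    ("value alignment", +1),
    ("value alignment", -1),
    ("communication skills", +1),
]
print(score_interview(notes))
# {'value alignment': 0, 'communication skills': 1}
```

The advantage over per-question grades is that evidence for a trait counts whenever it comes up, not only in answer to the question that was supposed to elicit it.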
Stage 3: Test Task
For the test task, we asked candidates to complete a task very similar to the work they would do on the job.
What We Learned:
The test task proved to be a good filter because several people just didn’t complete it (in our case 3 people, or 18% of the remaining applicants). Although this could be due to the more talented candidates with other offers not being willing to spend more time on our application, we suspect (and hope) that the more likely explanation is that these candidates were less serious about the position.
We instructed candidates not to spend over 8 hours on the task. However, in the future we will make explicit exactly what the consequences are for exceeding this time cap (i.e. a point deduction vs. disqualification). Leaving it ambiguous meant that some candidates went well over the limit and produced a more polished report, while others kept to the time limit and produced a less finished report. This made evaluation somewhat harder. For shorter test tasks, requiring that they be done in a single, timed sitting would resolve this ambiguity.
Our test task involved contacting certain people in the real world (we’re being intentionally vague here so we can use the task again) in order to test applicants’ communication abilities. This had two drawbacks: 1) some of the outcome was beyond the applicant’s control, and 2) some applicants found it very stressful. Retrospectively, we are glad we included such a task, as these communication skills are an important part of the job and are difficult to test for in interviews.
Up to 8 hours is a long time for a test task, and more than most people will be accustomed to. While two applicants gave us negative feedback about this, we think the insight we gained into applicants’ output ability and desire for the job well outweighs the time cost.
Stage 4: Long Interview
The purpose of the long interview was to gain deeper insight into the applicant’s experience, ethical alignment, and fit for the job. It was also an opportunity to get to know the applicant much better. We had the co-founder who had not conducted the short interview conduct the long interview, to ensure that both of us felt comfortable with the candidates.
What We Learned:
One of the most helpful questions we asked was: “We are most concerned with X part of your application. Do you think this is a valid concern?” For instance, we asked about lack of prior experience with fish, and comfort with being managed by a younger manager. In addition to gaining further evidence about a potential weakness, this also helped us assess a candidate’s ability to handle critical feedback.
Stage 5: Reference Check
We were initially unsure about the use of references, but retrospectively we are glad we used them. Even if nothing negative comes up, we found references to be useful to distinguish people who produce overall good work from people who produce overall exceptional work.
The best questions were the ones to which it was hardest to give vague positive answers. For instance, one of our favorite questions was: “Compared to all the students/employees you have worked with, what percentile would you place the applicant in?” You might follow it up with: “What would it take for them to move up five percentiles?”
Stage 6: Call with Finalists
One of the best pieces of advice we received was to take our time with the process, and not make a decision until we were extremely confident in it. We thus had informal calls with all of our finalists in order to get to know them better and confront any remaining doubts.
What We Learned:
Especially on small teams, someone’s likeability is, to some extent, both a valid and invalid reason to hire them. We must be cautious of bias, but on the other hand an organization will run more effectively if its employees generally like each other. Such phone calls (as with any informal discussions) will leave additional room for biases, but overall we think they were useful as they allowed us to get a fuller picture of what it would be like to work with someone.
Candidate Evaluation
For most stages, we graded candidates on a rubric. We think this was useful, as it increased objectivity and allowed us to go back and see exactly what we thought of one candidate compared to another. We also used weighted criteria: an applicant’s final score for a round was the sum, over all criteria, of each criterion’s score multiplied by its weight (sketched below).
See our Candidate Evaluation spreadsheet.
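For concreteness, the weighted-criteria computation above is just a weighted sum. Here is a minimal sketch; the criteria names, weights, and 1-5 scale are hypothetical placeholders (the real ones are in the linked spreadsheet).

```python
# Hypothetical criteria and weights; the real ones are in the
# Candidate Evaluation spreadsheet linked above.
WEIGHTS = {"writing": 0.4, "domain knowledge": 0.35, "communication": 0.25}

def final_score(scores):
    """Round score = sum over criteria of (criterion score * criterion weight)."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# Example applicant graded 1-5 on each criterion:
print(final_score({"writing": 4, "domain knowledge": 3, "communication": 5}))
# 0.4*4 + 0.35*3 + 0.25*5 = 3.9
```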
Calibrations
With most of our graded stages, we had some form of internal calibration to ensure that 1) both reviewers were interpreting the scoring criteria in the same way, and 2) neither reviewer had any bias at play (of course, this doesn’t help if both reviewers share the same bias). Our calibration involved selecting a random sample of candidates scored by one reviewer, and then having the other reviewer score them to see how the scores compared. This did not take much time and we are glad we did it.
See our Calibration spreadsheet.
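To make the calibration check concrete, here is a minimal sketch, assuming each reviewer’s scores for the shared random sample are simple lists. The two statistics shown (mean absolute difference and systematic offset) are an illustrative choice, not necessarily the exact comparison we ran.

```python
from statistics import mean

# Hypothetical scores for the same random sample of applicants,
# graded independently by each reviewer.
reviewer_a = [3.5, 4.0, 2.0, 4.5, 3.0]
reviewer_b = [3.0, 4.5, 2.5, 4.0, 3.5]

# How far apart the two graders typically are on any one applicant.
mad = mean(abs(a - b) for a, b in zip(reviewer_a, reviewer_b))

# A consistently positive or negative offset suggests one reviewer
# interprets the scoring criteria more harshly than the other.
offset = mean(a - b for a, b in zip(reviewer_a, reviewer_b))

print(f"mean absolute difference: {mad:.2f}, systematic offset: {offset:.2f}")
# mean absolute difference: 0.50, systematic offset: -0.10
```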
Diversity and Inclusion
We took diversity, inclusion, and objectivity seriously in our application process. The best resource we found on the topic was Sentience Institute’s blog post Effective Strategies for Equity and Inclusion.
Here are two of the things we did that you may not already be doing. Both apply to the initial job ad:
It can be tempting to include a laundry list of every qualification a perfect applicant would have. However, you should only list the qualifications that are genuinely required, as a massive list of requirements may unduly lead to fewer female applicants (see article).
Paste the ad into a gender decoder to ensure a roughly equal number of masculine- and feminine-coded words. We used this one. (A sketch of how such a decoder works follows this list.)
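For the curious, a decoder of this kind essentially counts stereotype-coded words. Here is a minimal sketch; the word lists are tiny illustrative samples, not the much longer research-derived lists the actual tool uses (real decoders also typically match word stems rather than whole words).

```python
import re

# Tiny illustrative samples of coded words; real decoders use much longer,
# research-derived lists and typically match word stems, not whole words.
MASCULINE = {"competitive", "dominant", "independent", "ambitious"}
FEMININE = {"collaborative", "supportive", "compassionate", "interpersonal"}

def decode(ad_text):
    """Count masculine- vs feminine-coded words in a job ad."""
    words = re.findall(r"[a-z]+", ad_text.lower())
    masculine = sum(word in MASCULINE for word in words)
    feminine = sum(word in FEMININE for word in words)
    return masculine, feminine

m, f = decode("We seek an ambitious, independent researcher who is "
              "also collaborative and compassionate.")
print(f"masculine-coded: {m}, feminine-coded: {f}")
# masculine-coded: 2, feminine-coded: 2
```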
We decided against anonymizing parts of the application process, as the research appears to be mixed as to whether anonymization is actually helpful. However, we are unsure about this and may do some anonymization in future hiring rounds.
Not Being a Jerk
It’s easy to see candidates as just another data point on your spreadsheet, but behind each of those data points is a person who is likely stressed out about getting a job and spending a lot of time on your application. We tried several things to make the process as straightforward and kind as possible:
We sent a hiring process schedule to all those who applied (although retrospectively, we wish we had sent it only to those who made it past the initial application, so as not to make it so obvious who had been eliminated in the first round).
We notified everyone, as soon as we reasonably could, whether they would be moving on to the next round or not. It’s concerning how many employers don’t do this.
We started the interviews with easier questions (e.g. “What are your biggest strengths?”) to help people overcome their initial anxiety.
We sent personalized rejection emails to our finalist candidates.
We received positive feedback from the applicants, even after they had been rejected, on all of these things. Aside from the inherent good in being kind to people, we expect that it is instrumentally valuable to leave people with a good impression of your organization.
We wish we would have:
Sent people a confirmation email after receiving their test task. The lack of one likely left some people stressed about whether we had received it.
Not (initially) made the test task due right after the holidays. However, we also didn’t want to lengthen the application process further.
Thought more about how to shorten the overall application process length.
Learning How to Reject People
For many reviewers (including ourselves), having to reject a large number of talented, nice people is emotionally draining. The best piece of advice we can give regarding rejecting people is the following:
Rejection only means that the applicant is not the right fit for this position at this time. It does not mean that the applicant isn’t an awesome person or someone who will never be successful. By turning someone down for this job, you may in part be enabling them to get a future job where they would be even happier.
Templates
See this folder. It includes the following materials, which are openly available for download, modification, and use:
These are mostly the same as the questions and documents we used in our actual hiring process. Therefore, before using them we recommend re-reading the relevant section(s) of this post and making the changes that we suggest.
Conclusion
We hope this has been helpful to your organization. If you have further questions, feel free to comment below or contact us. Good luck!
Hi, thank you for this. I very much appreciate the effort. Would you be able to answer these questions?
Of the 82 applicants, how many were at the “level” where you would have wanted to hire them, but couldn’t due to lack of funding, wanting to grow slowly, or not wanting to overwhelm the management? (I.e., how good was the talent pool?)
Why are you not able to hire more than 2 people? (Are you low on funding, wanting to grow slowly, not wanting to overwhelm the management, etc.?)
Did you get the type of candidates you set out to hire? Or did you have to settle for someone with “lesser experience” than you wanted?
Wow! This is really good!
I think the general advice is great, and I really appreciate your candidness: revealing the data and the materials you used, as well as the level of detail regarding your process.
This isn’t something that is usually written up, and I’m sure it’ll help a lot of people facing hiring challenges at EA orgs...
I can add a little about my own experience and process regarding rejection (which I agree is one of the hardest parts):
1. I try to honestly explain to candidates why they were rejected (usually by mail, sometimes by phone). This is usually possible for almost any candidate who has had an interview that wasn’t very short (with the exception of a few candidates who I strongly suspect don’t want to hear it). Specifically, if possible, I try to answer the question “What would need to change for you to be accepted in a year?” I started out very nervous about how candidates would receive it and have been surprised at how much it’s appreciated.
2. I really agree with what you wrote about not being a jerk, and that timely answers are an important part of it. This is especially true for rejection, partly because it’s easy for us to procrastinate making a decision when that decision is uncomfortable.
3. I think it’s important to make sure everything is worded precisely and clearly and leaves no room for misinterpretation. Be careful not to give false hope that you might still reconsider (if that’s not true), don’t write something that might be interpreted as hinting at some hidden reason for the rejection, etc. This isn’t the place for writing with style; it should be optimized for conciseness and clarity. This is also why I usually send rejections by email, rather than phone. Phone calls are more personal, but with an email I can look over what I write and make sure it says exactly what I mean.
4. In cases where I have a good impression of a candidate but there isn’t a fit, I offer to intro them to people I know who are also hiring for similar roles at other orgs/companies. It’s a good way of helping everyone involved, and it shows that I really do believe they can be great for other roles/orgs.
Thanks for the advice! I think #3 in particular is important, as it’s easy for someone trying to be nice to cause even more issues by not being sufficiently clear or blunt.
Hopefully this doesn’t explode into a big debate...
Before I looked at the link, this sounded plausibly good. But looking at the words highlighted, they’re just stereotypes (and apparently that’s the point, based on the research), and trying to ensure balance might come at the cost of accurately describing the role (or giving the right impression of the role, based on the emphasis), so this doesn’t seem like a good target as a hard constraint to me. Maybe it’s still good to ensure balance anyway or at least move in that direction, but I think we should be pretty careful here.
Apparently these words are masculine:
And these are feminine:
I can imagine roles where words from one list just describe them poorly, but words from the other list describe them well, so trying to ensure equal balance would mislead, although just more balance might not.
Maybe a better (but more costly) approach would be to get feedback on the posting from people of the corresponding demographics or even get them to write a version of the posting.
I care a lot about feminism and a lot about diversity in hiring practices, but I will admit that I’m skeptical about (even irritated by) most gender decoders.
EDIT: I’m so sorry, I’ve contributed to turning this into a debate. facepalm
To be honest, my gut reaction to the gender decoder was that it is itself sexist, because it promotes sexist stereotypes.
However, the negative effects are probably pretty distributed across society generally and hard to measure, while the potential benefits from more actual diversity (in EA or generally) could outweigh them. It’s plausible that it actually on net reduces gender stereotyping by increasing the visibility of women in stereotypically male roles and men in stereotypically female roles. I really don’t know, though.
Thanks. :P
I think these are all valid points, and yeah the words are just stereotypes. Worth using caution with these sorts of simplistic decoders (but I still think they’re somewhat helpful). I think you could probably pay for a better one but I doubt that’s worth the money.
We did also ask people of different genders to review the ad before putting it out, and I definitely think that was worth the time cost.
Thanks for summarizing your insights. I think it’s great that you’re enabling others to benefit from these learning opportunities.
Maybe I missed it, but did you think about compensating the applicants for the work they put into this? The OPP did this, giving me the impression that they value my time (and the time of EAs generally). I can imagine that this might be too costly for smaller orgs, though you could set the amount lower than theirs (around $300 for 8 hours, IIRC). Even a $50 Amazon gift card would have left me with the impression that an org thinks about the opportunity cost of spending 8 hours on a work test.
For some reason we did not consider compensating them for their time (probably due to our generally tight startup budget), although we probably will in the future. Thanks for the suggestion!
You could still compensate them! I’m sure they would appreciate your reaching out with retrospective compensation, although obviously the signalling value to this set of applicants would be lost.
How do you think moving requirements into bonuses/”good to haves” would compare here? I think if people see they hit a bunch of them, they’ll be more encouraged to apply, but if there are too many, they might still get discouraged, and this could affect some demographics disproportionately, too.
I think it’s good to have a balance.
It’s about balancing the ad to appeal to both A) really talented people who are a great fit, who may have other options but are more likely to apply if they see they check a ton of boxes, and B) talented applicants who appear less like a great fit (whom you may want to cater to if you’re not finding enough of the first type, and also because the best applicants don’t always look that way on paper). And of course demographic/diversity reasons push the balance somewhat more towards B.
We did end up going with a few “requirements” and a longer list of “good to haves”, and I think that worked well. Will do again in the future.