Advice I’ve Found Helpful as I Apply to EA Jobs
Epistemic status: I have never had an EA job, and I only recently started applying for EA jobs. This should be read not as “here is gold-standard advice from someone with lots of experience in hiring” but as “here is some advice that Akash has found helpful recently.”
As I apply to EA jobs (i.e., jobs at organizations that are explicitly aligned with the effective altruism movement), I am finding it helpful to keep in mind the following three pieces of advice:
Be radically honest (don’t present the “best” version of myself; present the most accurate version of myself).
Meet people (don’t assume that EA is a perfect meritocracy; recognize that many jobs result from networking).
Be critical (don’t assume that every EA job is highly impactful; take time to compare the expected impact of different roles/orgs).
Honesty
Alice: My goal is to get a job at Google. In order to do this, I will study up on what Google wants from job applicants. I will prepare the kinds of answers and questions that Google wants to hear. I will optimize for presenting the version of Alice that is most attractive to Google, so the recruiter is most likely to hire me.
Bob: My goal is to reduce x-risk. In order to do this, one plausible instrumental goal would be to work at 80,000 Hours. I’m not sure that this is the best instrumental goal; maybe someone else would be a better fit, or maybe my comparative advantage is somewhere else. I will prepare some answers and questions that most accurately reflect my current aptitudes and uncertainties. I will optimize for presenting the version of Bob that is most authentic, so both me and the recruiter can make an informed choice.
I used to think about job applications like Alice. In EA, I now think about job applications like Bob.
The key insight was realizing that getting a job is not my terminal goal. Getting a job is a means to some end (e.g., reducing x-risk, improving the lives of farmed animals). If someone else is a better fit for the job, I want the organization to know that.
I think this differs pretty meaningfully from non-EA settings.
In non-EA settings, it often makes sense to present the “best” version of yourself to increase your chances of getting a job.
In EA settings, where you trust that the employer shares your terminal goals, I think it makes sense to optimize for “presenting the most accurate/authentic version of yourself.” I want the job to go to the best candidate, even if that’s not me.
One caveat: you can learn new skills, and you may change in a new job. For example, if you have low motivation when working alone but are more motivated when working in teams, you might want to acknowledge both of these points in an EA job interview.
Overall, I’ve found it helpful to remind myself of the following: “If I’m not the best person for the role, I don’t want to be hired.”
(Note that the more precise version of the claim is something like “If joining this organization is not the action that reduces x-risk the most, then I don’t want to join this organization.” This framing is technically more accurate; for example, sometimes the 2nd-best person for the role should take it, because the best person could do something even more impactful elsewhere. But it’s also a bit clunkier, so I use the simpler version in practice.)
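To make that footnote concrete, here is a toy calculation. The names and impact numbers are entirely made up for illustration; the point is just that even when one person is the strongest candidate for a role, total impact can be higher if the second-best candidate takes it:

```python
# A toy illustration (my own numbers, not from this post) of why the
# "best person for the role" framing can mislead.
from itertools import permutations

people = ["Alice", "Bob"]
roles = ["Role R", "Role S"]

# Hypothetical impact of each person in each role (arbitrary units).
impact = {
    ("Alice", "Role R"): 10,  # Alice is the best candidate for Role R...
    ("Alice", "Role S"): 15,  # ...but she is even better in Role S.
    ("Bob", "Role R"): 8,
    ("Bob", "Role S"): 5,
}

# Try every assignment of people to roles and keep the best total.
best = max(
    permutations(roles),
    key=lambda assignment: sum(
        impact[(p, r)] for p, r in zip(people, assignment)
    ),
)
for person, role in zip(people, best):
    print(f"{person} -> {role} (impact {impact[(person, role)]})")
# Output: Alice -> Role S, Bob -> Role R, for a total of 23.
# Hiring the "best" person (Alice) into Role R would total only 15.
```

With two people and two roles, brute-forcing the assignments is enough; the same comparative-advantage logic applies to real hiring rounds, just informally.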
Personally, I have also found it easier and less cognitively demanding to optimize for “present myself accurately” than for “present myself in the way that is most likely to impress X.”
Meeting People
I used to think that meeting people was fairly important. I now think that it is extremely important. A few reasons I updated:
It seems that there are a few core community builders who stay up-to-date on the needs of EA orgs. I think they often spread information faster than public resources, and they can also make personalized recommendations.
There are lots of jobs/roles that are not posted publicly.
Having someone to “vouch” for you seems quite important. If someone whose judgment I trust recommends person A, I’m considerably more likely to talk to person A.
I personally was offered several work-trials as a result of meeting people at retreats, at EAG, and around Berkeley.
How to meet people:
Travel to an EA hub (e.g., Berkeley, Oxford, DC). I strongly recommend this (it also forces you to do research on the people in the area that you’d be excited to talk to).
Reach out to people online (many EAs are excited to talk to you; I may also be able to give you recommendations if you DM me).
Ask EAs that you know to give you recommendations. (Tangent: I think this takes advantage of the friendship paradox; your EA contacts probably have more EA contacts than you do. A short simulation after this list makes this concrete.)
Importantly, if you’re being honest when meeting people (i.e., authentically presenting your strengths and weaknesses), they’ll have better recommendations for you.
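For the curious, the friendship paradox is easy to see in a small simulation. This is purely my own illustration; the population size, the number of “connector” hubs, and the tie probabilities are all arbitrary assumptions:

```python
# A minimal sketch of the friendship paradox: on average, your contacts
# have more contacts than you do, because well-connected people show up
# in many contact lists. All numbers below are arbitrary assumptions.
import random

random.seed(0)
n_people = 1000
hubs = set(range(20))  # a few highly connected "community builders"
contacts = {i: set() for i in range(n_people)}

# Build a random social graph: hubs form ties far more often than others.
for i in range(n_people):
    for j in range(i + 1, n_people):
        p = 0.15 if (i in hubs or j in hubs) else 0.005
        if random.random() < p:
            contacts[i].add(j)
            contacts[j].add(i)

# Average number of contacts of a randomly chosen person.
avg_degree = sum(len(c) for c in contacts.values()) / n_people

# Average number of contacts of a randomly chosen person's contact.
neighbor_degrees = [len(contacts[f]) for c in contacts.values() for f in c]
avg_neighbor_degree = sum(neighbor_degrees) / len(neighbor_degrees)

print(f"average contacts per person:           {avg_degree:.1f}")
print(f"average contacts of people's contacts: {avg_neighbor_degree:.1f}")
# The second number comes out much larger: asking your contacts for
# introductions taps into networks bigger than your own.
```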
Being Critical
I used to think that getting a job at any of the well-respected EA orgs constituted a success story, and that differences between EA jobs were relatively small. I now think that there are major differences between the expected impact of different EA jobs, and that it’s important to spend time critically evaluating each option. A few reasons I updated:
Different orgs have extremely different cause prioritizations (e.g., neartermist vs. longtermist).
Even within the same cause area, different orgs have different theories of change (e.g., theoretical alignment research vs. applied alignment research).
Even at a highly impactful org, different roles could vary tremendously in their expected impact.
Even if two roles were equally impactful in theory, it’s likely that I would be 2-10x better in one of them than in the other, due to considerations of personal fit/comparative advantage.
Even if two roles were equally impactful for me, they would likely cause me to develop different aptitudes. If one role helps me develop aptitudes that help me become more impactful later in my career, this role seems like it has a much higher expected value.
Many other community builders (whose judgment I respect) share this view.
Many of the EAs who I respect the most are ones who consistently question whether or not they’re on the right path. They identify cruxes and make statements like “I think there’s a 70% chance that I’m on the best path, but I wouldn’t be surprised if Y were true, in which case I think it would be better for me to do Z.”
Other Advice
A few other things I’ve been finding helpful (but didn’t make my “Top 3”):
Think about the qualities of the people who you’re going to be working with. Orgs are defined not only by their mission but by their people.
Question “identities” that you have (e.g., “I’m an introvert” or “I’m not a math person” or “I’m not the type of person who would enjoy X.”)
Identity labels are often useful. But sometimes, it’s helpful to “take off the identity hat” and imagine what you would do if you had the opposite identity. I find this especially helpful for brainstorming and for thinking about ways I want to skill up or experiment with new “identity hats.”
Take personal fit seriously, but try not to think about it too narrowly.
I think this is a pretty hard balance. On one hand, I think personal fit is extremely important. On the other hand, I think it’s easy to put on “personal fit blinders” and assume that you won’t enjoy something or won’t be good at something.
I also think that features of the environment (e.g., who you work with, who you live with, how much co-working you do, how much management you have) are often highly influential for personal fit.
One concrete update is “try to do a lot of work-trials and get experience doing things. Try not to rule things out, especially if you haven’t had experience doing the work.”
Additional Resources
This post is not comprehensive. I focused on advice that I, personally, have been finding helpful, and that I think may be helpful to others.
There are many other resources out there. To highlight some:
Holden Karnofsky’s recent aptitudes framework.
The scale/neglectedness/solvability/personal fit framework.
The 80,000 Hours career guide and career advising service.
This recent post about applying to EA jobs and grants.
I’m grateful to Jack Goldberg and George Stiffman for offering feedback on a draft of this post.