When deciding who to hire to fill a role in your organization, there are lots of factors to consider: relevant experience, demonstrated capability, education, perceived fit, etc. One variable that exists for EA orgs (but, I think, less so for non-EA orgs) is how strongly the person identifies as an effective altruist. This is directly related to value alignment and perceived fit, and possibly related to dedication, capability, and other factors.
Job postings for EA orgs often recommend "familiarity with EA concepts" or something similar. Applications and interviews will sometimes ask about this as well. Anecdotally, I've experienced several job searches at EA orgs (from the employer's side of things) in which "are they an EA?" is a question given somewhat high importance.
To be clear, I'm pretty convinced that "values alignment" in general is important for a new hire. If you're hiring someone for animal rights advocacy, you probably want someone who cares deeply about animal welfare. If you're hiring someone to research AI safety, you probably want someone who is passionate about making safe AI.
But for both the animal welfare hire and the AI safety hire, how important is it that they identify as an effective altruist? What if they're truly passionate about animal welfare, but they think that EA ideas around cause analysis are pointless?
Should hiring managers be prioritizing EA-alignment? Or should they just focus on values alignment for their specific organization?
(I struggled over the framing for this question, so feel free to reframe if that's helpful.)
This was somewhat answered (in the case of CEA) here.