Reducing EA job search waste
Broad intuition: Private industry isn’t optimizing for the same things as EA organizations, so we shouldn’t expect private industry’s hiring system to optimize for EA values. Here I make some proposals for hiring practices and job listings. Aside from being implemented by orgs themselves, they could be implemented, coordinated, or enforced by job boards such as the one run by 80,000 Hours.
With all of these proposals, I can’t begin to form principled estimates of the scope and tractability because I have almost no data. My intuition is that the amount of time spent applying to EA orgs is in the thousands of hours, and that implementation of my first proposal alone would reduce EA application expenditure by around 30% while also improving hiring outcomes.
I don’t have a sense of how controversial my intuitions / basic arguments here will be, so I’m trying to keep things short and intuitive, and will fill in detail based on feedback.
Applicant pool transparency
Expose # of applicants in each stage of hiring process
(note that this includes counts of both pending and rejected applications)
This should hugely impact an EA’s ability to estimate their probability of being hired. It also informs EAs about demand & neglectedness; this is highly relevant for EAs with skills in multiple areas as well as EAs taking on new skills.
Cost: While I am not intimately familiar with modern HR software solutions, it should be very cheap to query the number of applicants in each stage of interview and publish this value in real time. I would be happy to personally implement this for any firm.
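As a rough sketch of how cheap this could be (the schema, stage names, and statuses below are all hypothetical; any HR database that records a per-applicant stage would work similarly):

```python
# Minimal sketch: count applicants per hiring stage, including both
# pending and rejected applications, and emit a machine-readable
# summary that a careers page could publish. The schema is made up.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE applications (id INTEGER PRIMARY KEY, stage TEXT, status TEXT)"
)
conn.executemany(
    "INSERT INTO applications (stage, status) VALUES (?, ?)",
    [
        ("screening", "pending"),
        ("screening", "rejected"),
        ("work_test", "pending"),
        ("final_interview", "rejected"),
    ],
)

def stage_counts(db):
    """Return {stage: {status: count}} over all applications."""
    rows = db.execute(
        "SELECT stage, status, COUNT(*) FROM applications GROUP BY stage, status"
    )
    summary = {}
    for stage, status, n in rows:
        summary.setdefault(stage, {})[status] = n
    return summary

print(json.dumps(stage_counts(conn), indent=2))
```

The whole feature is a single GROUP BY query; the only real work is deciding where on the job listing to display the result.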
Publish resumes and application notes of past acceptances
This is only relevant for large-ish orgs hiring for multiple similar positions; however, I assume that this characterizes the majority of applications. I also assume that a high percentage of applicants for these positions are uncompetitive against the field and would not apply given more information.
In my weakly held opinion, it would not be necessary to protect the privacy of employees by making this optional. Non-optional publication has the secondary benefit of incentivizing/enforcing honesty among applicants.
Cost: This should be inexpensive, since hiring is relatively infrequent. I assume the largest cost would be making a frontend for browsing this information, but that would be easy to replicate across firms. I would be happy to personally implement this for any firm.
Publish [scrubbed] resumes and application notes of past & pending applications (with consent)
I am not enthusiastic about this proposal but would consider it net positive. It’s much more costly than the others, so I will not spend more time on it; I just wanted to point out that it’s an option.
Note: What if [policy] would dissuade the actual best applicant?
I assume this is possible. On the other hand, it is possible that the applicant you want does not apply but would have if they were more informed about the applicant pools of your firm and others.
Even if we expected hiring outcomes to in fact be worse somehow, it still seems likely to me that it would be optimal for some number of firms to experiment with these policies and report back.
Furthermore, this argument is essentially paternalistic on the part of the firm towards the applicant. I think it would be more appropriate to trust applicants to make more effective decisions in an environment with strictly more information.
We should also ask our institutions to expose themselves to feedback and regulation from the community where possible. I think we should be allowed to see the labor cost on the community of MIRI or OpenAI’s current recruitment process, we should be allowed to grade 80,000 Hours on the outcomes of its career recommendations, etcetera.
Feedback to rejected applicants
Basic argument: Applicants are not very informed about their hireability. The hiring process is an expensive effort by a firm aimed to assess the hireability of all willing applicants. Currently, most firms share the bare minimum of the fruits of this effort with applicants, in the form of a binary decision. Shouldn’t we expect it to be beneficial to share as much as possible instead?
Assuming that firms record at least some notes about each applicant, increasing the richness of feedback should be pretty cheap.
Coordination between employers
Here I would differentiate between “soft” coordination, such as creating a standardized portal through which to issue applications to multiple firms, and “hard” coordination, such as how sports teams use a draft system. It seems obvious to me that both forms of coordination could be useful under some circumstances, but I don’t have a lot to say about this idea otherwise and merely wanted to point out the option.
Although private industry and EA organisations may have different incentives, a lot of law for the former will apply to the latter. Per Khorton, demanding the right to publish successful applicants’ CVs would probably be illegal in many places, and some ‘coordination’ between EA orgs (e.g. a draft system) seems likely to run afoul of competition law.
Further:
The lowest-hanging fruit here (which seems like a good idea) is to give measures of applicant:place ratios for calibration purposes.
Independent of legal worries, one probably doesn’t need published resumes to gauge the applicant pool: most orgs have team pages, so one can look at bios.
More extensive feedback to unsuccessful applicants is good, but it is easier said than done, as explained by Kelsey Piper here.
I don’t think EA employers are ‘accountable to the community’ for how onerous their hiring process is, provided they make reasonable efforts to inform potential applicants before they apply. If they’ve done this, then I’d default to leaving it to market participants to make decisions in their best interest.
I was aware of the possibility of relevant competition law, but didn’t mention it because I’m not very familiar with it. My assumption was that it would not apply the same way to non-profits, but that could be untrue. I am not very excited about coordination between employers in any case.
This is a good point.
Thanks for linking Kelsey’s post. My thought is that we shouldn’t expect organizations to worry too much about whether the feedback is constructive or even easy to understand, which seems to be the bulk of the work Kelsey is describing. On the one hand, it’s bad if EA orgs alienate applicants via the mechanisms Kelsey describes; on the other hand, I still think that something is better than nothing, given sufficient maturity on the applicant’s part. Nonetheless I take your point seriously.
There are serious legal risks to giving feedback of any kind, let alone feedback that is neither “constructive” nor “easy to understand”. I found this book on U.S. employment law to be an accessible introduction to legal restrictions around hiring with good citations (though it is written in an alarmist, of-the-moment tone).
We might hope that candidates with an EA mindset wouldn’t sue after getting feedback, but not all candidates will have strong EA ties, and even people with strong EA ties sometimes do surprising things.
Other difficulties with feedback include:
Making it harder to implement work tests in the future (Open Phil tells me I didn’t do X on their test, so I do it next time and tell my friends to do it next time and everyone’s natural ability is now a bit murkier)
Creating arguments with disgruntled candidates (“that’s not enough justification for not hiring me, I’m going to send you nasty emails now”; “you told me I didn’t have X, but I actually do and accidentally left it out of my resume, you’d better hire me now”)
Creating a sense of bias/favoritism (person A is a really strong candidate on the cusp of getting hired and gets detailed feedback; person B is a really weak candidate and would be much less useful to provide with feedback; person B hears that person A got feedback and is angry)
Personally, I love feedback, and I appreciate Ben West of Ought for giving the best feedback of any org I applied to in my last round of job-hunting, but I can understand why organizations often don’t give out very much.
+1 to Ought giving great job-search feedback.
I want more information, but I don’t want information shared about me without my consent! I would definitely not work somewhere that shared my CV and application notes without my consent. This proposal is also illegal in Europe.
Thanks for posting! I liked the note at the beginning, which gave me a good sense for your confidence levels and purpose. As usual, this comment will highlight areas where I disagree or saw room for expansion, rather than agreement.
This wouldn’t be hard to do in most cases, but it might be risky for smaller hiring processes. For example, if only three people apply for Position X, it might be possible for someone to infer who they were based on their knowledge of the community, and therefore to know who was rejected. (Ideally, not getting a job wouldn’t expose someone to any embarrassment in the EA community, but I don’t think that’s the case right now.)
As others have noted in the comments, candidates can get a good amount of information by looking at public bios/LinkedIn pages/other information about current employees. There’s also a risk that seeing what got someone hired in 2016 won’t be very informative for “what will get someone hired in 2019”. Organizations often raise their standards as they grow and professionalize.
I’m very wary about recommendations that lead to EA organizations having to consider the entire community’s opinion on internal decisions. MIRI and OpenAI seem like especially strange organizations to ask for this data; they have many fewer applicants than, say, Open Phil or GiveWell, and are much less likely to hire for “generalist” roles that offer more flexible approaches to hiring. (That is, Open Phil probably has more degrees of freedom in finding good generalist researchers than OpenAI does in finding machine learning experts.)
Also, it seems like the most appropriate group of people to make this kind of demand of MIRI would be “MIRI’s donors”, and they haven’t done so that I know of.
As for 80K: If you want to grade them based on the outcomes of their recommendations, you can already do so. They get plenty of feedback from community members on their strategy. Is there particular data you want them to collect that they currently don’t?