In principle this could be driven, in part, by scepticism about the absolute valuation of EA hires, rather than the relative valuation of hires.
Do you have a sense of what the percentage difference is between the typical first and second most preferred hire (i.e. the first most preferred hire is X% more impactful than the second), and what you think the absolute $ difference is?
We could start with the survey data suggesting that the difference between the first- and second-place choices is about 1/3 of the total value of the position, and then adjust downward from there.
I would adjust downward considerably, especially for more junior positions, for various reasons:
Although I admittedly haven't done my research here, these results are not consistent with my intuition about the distribution of ability levels in applicant pools more generally. So that would be my starting point, and I'd want to see strong reasons for a much bigger delta between the first and second choices in a largish EA candidate pool than in other professional fields.
As an intuition pump: taken literally, these results suggest indifference between the first choice working ~0.68 FTE and the second choice working a full FTE, at the same salary, management overhead, etc. Or: that an organization with a 3-person team covering function X would be largely indifferent between hiring the first and second choices (and leaving slot 3 unfilled) vs. hiring the 3rd/4th/5th, even holding cost and other factors constant (see the sketch below). While it's plausible there are roles and applicant pools for which this tradeoff would be worth making, I would not presume it applies to most roles and pools.
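As a quick sanity check on that arithmetic, here's a minimal sketch. The ~1/3 gap is the survey's figure; the flat drop-off after the top choice is purely an illustrative assumption of mine:

```python
def indifference_fte(value_gap: float) -> float:
    """FTE at which the first choice's output equals the second choice's
    output at a full FTE, given that the second choice produces
    (1 - value_gap) as much value per FTE."""
    return 1 - value_gap

print(indifference_fte(0.32))   # ~0.68 FTE, matching the figure above
print(indifference_fte(1 / 3))  # ~0.67 FTE, using the rounded 1/3 gap

# Team comparison, assuming (illustratively) a flat drop-off after the
# top choice, i.e. every candidate from #2 down is worth (1 - gap) of #1:
gap = 1 / 3
top_two = 1 + (1 - gap)      # hires #1 and #2, slot 3 unfilled: ~1.67 units
next_three = 3 * (1 - gap)   # hires #3, #4, #5: 2.00 units
print(top_two, next_three)   # roughly comparable totals
```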
The respondents likely knew who their first choices were and what they had done on the job, while the backup choices would be more of an abstraction.
As @Brad West🔸 notes, there are some psychological reasons for respondents to overestimate the importance of the “best” choice.
Even if we knew how much better the best candidate was than the second-best candidate, there’s still measurement error in the hiring process. That comes both from noise in the hiring competition itself, and from imperfect fit between the hiring process and true candidate quality. Respondents may overestimate how reliable their hiring processes were—if they re-ran the process with the same applicants but different work trial questions and other “random” factors, what are the odds that the same person would have been selected?
At least as of 2010, the standard error of the difference for a section of the SAT was about 40-45 points (on a 200-800 scale). So despite the SAT's very high reliability (at or above .9), which comes from tried-and-true design and a large number of scored questions, a single administration has enough measurement error that it likely won't identify which student in a medium-to-large group of good students is best at SAT critical reading tasks (much less which student is best at critical reading itself!).
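For what it's worth, the quoted figure is consistent with the standard classical-test-theory formulas. A sketch, where the section SD and reliability are illustrative assumptions rather than official SAT statistics:

```python
import math

sd = 110            # assumed SD of SAT section scores (illustrative)
reliability = 0.92  # assumed section reliability (illustrative)

# Standard error of measurement, and the standard error of the
# difference between two examinees' independent scores:
sem = sd * math.sqrt(1 - reliability)
sed = math.sqrt(2) * sem

print(f"SEM ~ {sem:.0f}, SED ~ {sed:.0f}")  # SEM ~ 31, SED ~ 44 points
```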
Although hiring organizations have some advantages over the SAT's test writers, it seems to me that they also have some real disadvantages (e.g., fewer scored items, subjective scoring, and a need to reject most candidates after only a few items have been scored).
On the whole, I'm not convinced that the reliability of most hiring processes is as high as the reliability of the SAT. And if re-running the hiring process five times might get us 3-4 different top picks, that would make me skeptical of the proposition that the #1 candidate on a particular run of the process was likely to be head and shoulders above the #2 candidate on that run, or even the #5 candidate in a sufficiently large pool.
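To make the re-running thought experiment concrete, here's a rough Monte Carlo sketch. The pool size, process reliability, and normal-score model are all illustrative assumptions, not estimates from real hiring data:

```python
import random

random.seed(0)
POOL = 20          # shortlisted candidates
RELIABILITY = 0.7  # assumed reliability of the process (below the SAT's ~0.9)

# Observed score = true ability + noise, with noise scaled so that
# var(true) / var(observed) = RELIABILITY.
noise_sd = ((1 - RELIABILITY) / RELIABILITY) ** 0.5
true_ability = [random.gauss(0, 1) for _ in range(POOL)]

def run_process() -> int:
    """One run of the hiring process: pick the top candidate by noisy score."""
    observed = [a + random.gauss(0, noise_sd) for a in true_ability]
    return max(range(POOL), key=lambda i: observed[i])

# How many distinct top picks would five re-runs typically produce?
trials = [len({run_process() for _ in range(5)}) for _ in range(2000)]
print(f"Average distinct #1 picks across 5 re-runs: {sum(trials) / len(trials):.1f}")
```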
Hi David, Nick and Jason. Thanks for engaging and bringing in numbers to what would otherwise be a very subjective discussion!
I'm afraid I don't put stock in that survey, due to potential echo-chamber bias and the small number of respondents (7). What I would put stock in, and would be very interested to read, is an assessment by an external, unbiased consulting firm that could tell us what great hires are worth and quantify the drop-off to second-choice candidates. ChatGPT suggests the following: