Question:
An analysis I’d be curious to see, though you may not have time/desire to run it:
If you took the evidence you’d gathered on your candidates besides GMA, how highly would those candidates’ scores-without-GMA correlate with their GMAs?
I’m not surprised that form scores and GMA were only loosely correlated, but I wonder if the full process of testing candidates might have given you a very good ability to predict GMA.
Reasoning: I respect the evidence showing GMA as a useful tool for hiring, but I’ve often thought of it as “unnecessary”; maybe it tells you something useful fast, but I hadn’t thought it would tell you much that you wouldn’t learn by seeing all the other steps of the application process.
Also, given concerns around bias and U.S. legal restrictions, I’m somewhat wary of other EA orgs deciding to adopt GMA testing. So I’d be interested to see whether, all else considered, it added useful info on candidates.
My impression is that while specifically *IQ* tests in hiring are restricted in the US, many of the standard hiring tests used there (e.g. Wonderlic, https://www.wonderlic.com/) are basically trying to get at GMA. So I wouldn’t say the outside view was that testing for GMA was bad (though I don’t know what proportion of employers use such tests).
I also find myself initially skeptical of/averse to GMA testing for hiring, though I don’t really have a specific reason why.
I just ran the numbers. These are the correlations between GMA and an equally-weighted combination of all the other instruments from the first three stages (form, CV, work test(s), two interviews). Note that this makes the sample size very small:
Research Analyst: 0.19 (N=6)
Operations Analyst: 0.79 (N=4)
First two stages only (CV, form, work test(s)):
Research Analyst: 0.13 (N=9)
Operations Analyst: 0.70 (N=7)
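For anyone who wants to run the same check on their own applicant data, here is a minimal sketch in Python/pandas of the computation described above: take an equally-weighted combination of the non-GMA instruments and correlate it with GMA. The column names and scores below are made up purely for illustration, and the sketch assumes the instrument scores are already on a comparable (e.g. standardized) scale:

```python
import pandas as pd

# Hypothetical per-candidate scores, one row per candidate.
# Column names are illustrative, not the actual instrument labels used.
df = pd.DataFrame({
    "form":        [0.2, -1.1, 0.5, 1.3, -0.4, 0.9],
    "cv":          [-0.3, 0.8, 1.1, -0.7, 0.2, 0.6],
    "work_test":   [1.0, -0.2, 0.4, 0.7, -1.3, 0.1],
    "interview_1": [0.5, 0.3, -0.8, 1.2, -0.1, 0.4],
    "interview_2": [-0.6, 1.0, 0.2, 0.8, -0.9, 0.3],
    "gma":         [0.7, -0.5, 0.9, 1.1, -1.2, 0.2],
})

# Equally-weighted combination of the non-GMA instruments
# (assumes scores are already standardized or otherwise comparable).
instruments = ["form", "cv", "work_test", "interview_1", "interview_2"]
df["combined"] = df[instruments].mean(axis=1)

# Pearson correlation between the combined score and GMA.
r = df["combined"].corr(df["gma"])
print(f"correlation with GMA: {r:.2f} (N={len(df)})")
```

With Ns this small, the resulting correlations should of course be read as rough indications rather than precise estimates.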
I think the strongest case for GMA tests is their cost-effectiveness in terms of time invested on both sides.