The reason I didn’t note that the table is inconsistent with ‘not predictive’ is that I was unconsciously equating ‘not predictive’ with ‘not sufficiently predictive to be worth the candidate’s time’. Only your insistence made me think about it more carefully. Given that unconscious semantics, it’s not a strong misrepresentation of the source either. But of course it’s sloppy, and you were right to point it out.
I hope this somewhat restores your confidence in my epistemic integrity. Also, I think my article contains not just evidence against, but also evidence for, epistemic integrity, and that should factor into how ‘skeptical’ readers ought to be. Examples: the last paragraph of the introduction; the fact that I edit the article based on comments once I’m convinced a comment is correct; the fact that I call out edits rather than rewriting history; the fact that the article is well-structured overall (not necessarily at the paragraph level), which makes it easy to respond to claims; and the fact that I include and address possible objections to my points.
Addendum: I would explain the high incremental validity by the fact that a GMA test barely measures conscientiousness and integrity. In fact, footnote ‘c,d’ mentions that ‘the correlation between integrity and ability is zero’. But conscientiousness and integrity are important for job performance (depending on the job). I would expect much lower incremental validity over structured interviews or work samples, because these, when done well, already tell a lot about conscientiousness and integrity by themselves.
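To spell out why a zero predictor intercorrelation matters (a sketch with hypothetical numbers, not figures from the source table): for two predictors $x$ (say, GMA) and $z$ (say, an integrity test) of a criterion $y$ (job performance), the combined validity is the multiple correlation

$$R = \sqrt{\frac{r_{xy}^2 + r_{zy}^2 - 2\,r_{xy}\,r_{zy}\,r_{xz}}{1 - r_{xz}^2}}.$$

With $r_{xz} = 0$ this reduces to $R = \sqrt{r_{xy}^2 + r_{zy}^2}$, so the second predictor’s validity is not partly redundant with the first. For illustration, with hypothetical $r_{xy} = 0.5$ and $r_{zy} = 0.4$, the incremental validity is $\sqrt{0.41} - 0.5 \approx 0.14$; if the predictors instead correlated at $r_{xz} = 0.5$, it would shrink to about 0.03. That is the mechanism I have in mind, and it is also why I’d expect less gain over a method that already picks up conscientiousness and integrity.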
Okay, you’ve convinced me. I’ve rewritten that item.