Relevant info: this is essentially a CRM (Customer Relationship Management) database, which is very commonly used by companies and non-profits. Your name is likely in hundreds of different CRM databases.
Let’s imagine, for example, my interaction with Greenpeace. I signed a petition for Greenpeace when I was a teenager, which entered my phone number, email address and name into a Greenpeace CRM. Greenpeace might then have partners who match names and email addresses with age and earning potential. They categorise me as a student with low earning potential but with potential to give later, so they flag me for a yearly call to try to get me to sign up as a member. If I were flagged as a particularly large earner, I imagine more research would have been done on me, and I would receive more intensive contact from Greenpeace.
CRMs are by design pretty “creepy”: for example, if you use HubSpot for newsletters, it shows de-anonymised data on who viewed what, and for how long. I imagine CRMs that have access to browser cookies are 100x “creepier” than this.
I’m not well-versed on how CRMs work, so this is useful information, thanks. Though my guess is that CRMs probably don’t typically include assessments of IQ?
I am still interested in the answers to the above questions, though, and potentially in other follow-up questions: how CEA staff were planning to actually measure EAG participants or members on these axes, the justifications behind the inputs in the draft, and what the proposed ideas may reflect about the values and views held by CEA leadership.
Why is including an assessment of IQ morally bad to track potential future hires? Or do you think it’s just a useless thing to estimate?
I’m not claiming measuring IQ is morally bad (I don’t think I’ve made any moral claims in this comment thread?). But based just on “It was to be used by CEA staff to score attendees of EA conferences”, I think there is a range of possible executions spanning from “this is a ridiculous thing to even consider trying; how on earth is this going to be reliable?” to “this might plausibly be net positive”, and it’s hard to know what was actually going on just by reading the Vox article.
Would you be happy if a CEA staff member had a quick chat with you at EAG, wrote down “IQ 100” on an Excel sheet based on that conversation, and this cost you opportunities in the EA space as a result?
Yes. I’m in EA to give money/opportunities, not to get money/opportunities.
Edit: I do think some people (in and outside of EA) overvalue quick chats when hiring, and I’m happy that in EA everyone uses extensive work trials instead.
I’m glad that this will not affect you in this case, but folks interested in the EA space because it provides an avenue for a more impactful career may disagree. And for a movement that is at least partly about using evidence and reason to create more positive impact, I’d be surprised if people genuinely believed that the operationalization described above is a good reflection of those ideals.
Yeah, I think measuring IQ is a stupid idea, but suppose you were to do it anyway: surely you’d want to measure IQ with an actual test and not just by guessing, right?