But this changed fast. In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for “Potential Expected Long-Term Instrumental Value.” It was to be used by CEA staff to score attendees of EA conferences, to generate a “database for tracking leads” and identify individuals who were likely to develop high “dedication” to EA — a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.
Individuals were to be assessed along dimensions such as “integrity” or “strategic judgment” and “acting on own direction,” but also on “being value-aligned,” “IQ,” and “conscientiousness.” Real names, people I knew, were listed as test cases, and attached to them was a dollar sign (with an exchange rate of 13 PELTIV points = 1,000 “pledge equivalents” = 3 million “aligned dollars”).
What I saw was clearly a draft. Under a table titled “crappy uncalibrated talent table,” someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
The list showed just how much what it means to be “a good EA” has changed over the years. Early EAs were competing for status by counting the number of mosquito nets they had funded out of their own pocket; later EAs competed on the number of machine learning papers they co-authored at big AI labs.
When I confronted the instigator of PELTIV, I was told the measure was ultimately discarded. Upon my request for transparency and a public apology, he agreed the EA community should be informed about the experiment. They never were. Other metrics such as “highly engaged EA” appear to have taken its place.
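For scale, taking the draft’s stated exchange rate at face value, a quick back-of-the-envelope conversion (this is my own arithmetic, not a figure from the document) suggests one PELTIV point would be worth roughly 77 “pledge equivalents”, or about $230,000 in “aligned dollars”:

```python
# Back-of-the-envelope conversion from the draft's stated exchange rate:
# 13 PELTIV points = 1,000 "pledge equivalents" = 3,000,000 "aligned dollars".
points, pledge_equivalents, aligned_dollars = 13, 1_000, 3_000_000

print(round(pledge_equivalents / points, 1))  # ≈ 76.9 pledge equivalents per point
print(round(aligned_dollars / points))        # ≈ 230,769 "aligned dollars" per point
```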
Some questions for CEA:
On CEA gathering information from EA conference attendees:
Can someone from CEA clarify what information, if any, is currently being gathered on EA members,
which of these, if any, are being used to assess individuals,
for what purpose (e.g. for EAGs, other CEA opportunities, “identifying individuals who were likely to develop high dedication to EA”), and
which organizations these are shared with, if relevant?
Given CEA had a leadership change in 2019, the same year the PELTIV document was reportedly circulating, can someone from CEA clarify when this measure was in use (i.e. was this under Larissa Hesketh-Rowe or Max Dalton as CEO)?
Can someone from CEA also justify the reasoning behind these two claims in particular, and clarify the extent to which they represent the views of CEA leadership at present?
“For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120.”
“Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.”
Will also note the apparent consistency with the earlier case of CEA over-emphasizing longtermism and AI relative to global health and animal welfare in previous versions of the handbook, despite claiming not to take an organizational stance on any specific cause area.
Relevant info: this is essentially a CRM (Customer Relationship Management) database, which is very commonly used by companies and non-profits. Your name is likely in hundreds of different CRM databases.
Take my interaction with Greenpeace as an example. I signed a petition for Greenpeace when I was a teenager, which entered my phone number, email and name into a Greenpeace CRM. Greenpeace might then have partners who match names and email addresses with age and earning potential. They categorise me as a student, with low earning potential but with potential to give later, so they flag me for a yearly call to try to get me to sign up as a member. If I had been flagged as a particularly high earner, I imagine more research would have been done on me, and I would have received more intensive contact from Greenpeace.
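To make that concrete, here is a toy sketch of the kind of segmentation rule a CRM automates. The field names, thresholds and categories are entirely invented for illustration, not taken from any real CRM product (or from CEA’s document):

```python
from dataclasses import dataclass

@dataclass
class Lead:
    """One CRM record; fields like estimated_income are often bought from data brokers."""
    name: str
    occupation: str        # e.g. "student", "software engineer"
    estimated_income: int  # annual, in dollars

def contact_plan(lead: Lead) -> str:
    """Toy segmentation rule: route each lead to a follow-up strategy."""
    if lead.estimated_income >= 100_000:
        return "major-donor track: research first, then personal outreach"
    if lead.occupation == "student":
        return "low earnings now, future potential: yearly membership call"
    return "standard newsletter segment"

print(contact_plan(Lead("Alex", "student", 8_000)))
# -> low earnings now, future potential: yearly membership call
```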
CRMs are pretty “creepy” by design; for example, if you use HubSpot for newsletters, it shows de-anonymised data on who viewed what, and for how long. I imagine CRMs that have access to browser cookies are 100x more “creepy” than this.
I’m not well-versed in how CRMs work, so this is useful information, thanks. Though my guess is that CRMs probably don’t typically include assessments of IQ?
I am still interested in the answers to the above questions, though, and potentially in other follow-up questions: how CEA staff were planning to actually measure EAG participants or members on these axes, the justification for the inputs in the draft, and what the proposed ideas might reflect about the values and views held by CEA leadership.
Why is it morally bad to include an assessment of IQ when tracking potential future hires? Or do you think it’s just a useless thing to estimate?
I’m not claiming measuring IQ is morally bad (I don’t think I’ve made any moral claims in this comment thread?). But based just on “It was to be used by CEA staff to score attendees of EA conferences”, there is a range of possible executions, from “this is a ridiculous thing to even consider trying; how on earth is this going to be reliable?” to “this might plausibly be net positive”, and it’s hard to know what was actually going on just by reading the Vox article.
Would you be happy if a CEA staff member had a quick chat with you at EAG, wrote down “IQ 100” in an Excel sheet based on that conversation, and this cost you opportunities in the EA space as a result?
Yes. I’m in EA to give money/opportunities, not to get money/opportunities.
Edit: I do think some people (in and outside of EA) overvalue quick chats when hiring, and I’m happy that in EA everyone uses extensive work trials instead of those.
I’m glad that this will not affect you in this case, but folks interested in the EA space because it provides an avenue for a more impactful career may disagree. And for a movement that is at least partly about using evidence and reason to create more positive impact, I’d be surprised if people genuinely believed that the operationalization described above is a good reflection of those ideals.
Yeah, I think measuring IQ is a stupid idea, but suppose you were to do it anyway: surely you’d want to measure IQ with an actual test, not just by guessing, right?
The fact that the PELTIV score involved estimating IQ without a proper test, combined with the separate Bostrom controversy revealing that at least some EAs take questions of “race and IQ” very seriously, makes me deeply concerned that racial discrimination may have entered some EA decision-making at some point.
I second the call for more information about this particular issue.
Just to add to this: I’ve had conversations about expanding EA around the world in which it felt like the (negative) response had elitist/racist undertones.