Stan—this is a legitimate and interesting question. I don’t know of good, representative, quantitative data that’s directly relevant.
However, I can share some experiences from teaching EA content that might be illuminating and at least semi-relevant. I’ve taught my ‘Psychology of Effective Altruism’ course (syllabus here) four times at a large American state university where the students show a very broad range of cognitive ability. This is an upper-level undergraduate seminar restricted mostly to juniors and seniors. I’d estimate the IQ range of the students taking the course to be about 100-140, with a mean around 115.
In my experience, the vast majority of the students really struggle with central EA and rationality concepts such as scope-sensitivity, neglectedness, tractability, steelmanning, recognizing and avoiding cognitive biases, and decoupling in general.
I try very hard to find readings and videos that explain all of these concepts as simply and clearly as possible. Many students kinda sorta get some glimpses into what it’s like to see the world through EA eyes. But very few of them can really master EA thinking to a level that would allow them to contribute significantly to the EA mission.
I would estimate that out of the 80 or so students who have taken my EA classes, only about 3-5 of them would really be competitive for EA research jobs or good at doing EA public outreach. Most of those students probably have IQs above about 135. So this seems mostly a matter of raw general intelligence (IQ), partly a matter of personality traits such as Openness and Conscientiousness, and partly a matter of capacity for Aspy-style hyper-rationality and decoupling.
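As a rough sanity check that those numbers hang together: assuming class IQ is approximately normal with the mean of 115 I estimated above, and an SD of 10-12 (my assumption, picked to fit the 100-140 range), a minimal sketch gives:

```python
# Rough sanity check: how many of ~80 students should land above IQ 135
# if class IQ is roughly normal? The mean (115) is from the estimate
# above; the SDs are assumptions chosen to fit the stated 100-140 range.
from statistics import NormalDist

N_STUDENTS = 80
CUTOFF = 135
MEAN = 115

for sd in (10, 12):
    p_above = 1 - NormalDist(mu=MEAN, sigma=sd).cdf(CUTOFF)  # P(IQ > cutoff)
    print(f"SD={sd}: P(IQ > {CUTOFF}) = {p_above:.3f} "
          f"-> ~{N_STUDENTS * p_above:.1f} of {N_STUDENTS} students")
```

That works out to roughly 2-4 students above IQ 135 per 80, which is in the same ballpark as my 3-5 estimate.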
So, my impression from years of teaching EA to a wide distribution of students is that EA concepts are just intrinsically really, really difficult for ordinary human minds to understand, and that only a small percentage of people have the ability to really master them in an EA-useful way. So, cognitive elitism is mostly warranted for EA.
Having said that, I do think that EAs may underestimate how many really bright people are out there in non-elite institutions, jobs, and cities. The really elite universities are incredibly tiny in terms of student numbers. There might be more really smart people at large, high-quality state universities like U. Texas Austin (41,000 undergrads) or U. Michigan (33,000 undergrads) than there are at Harvard (7,000 undergrads) or Columbia (9,000 undergrads). Similar reasoning might apply in other countries. So, it would seem reasonable for EAs to broaden our search for EA-capable talent beyond super-elite institutions, ‘cool’ cities, and tech careers, into other places where very smart people might be found.
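To make that concrete, here is a minimal back-of-the-envelope sketch. The enrollment figures are the ones above; the IQ means and SDs are purely my illustrative assumptions, and the result is sensitive to them:

```python
# Back-of-the-envelope: expected undergrads above an IQ cutoff at a big
# state school vs. a small elite one. Enrollment numbers are from the
# text above; the assumed means and SDs are illustrative guesses only.
from statistics import NormalDist

CUTOFF = 135

# (school, undergrads, assumed mean IQ, assumed SD)
schools = [
    ("U. Texas Austin (state)", 41_000, 115, 12),
    ("Harvard (elite)",          7_000, 128,  8),
]

for name, n, mean, sd in schools:
    p_above = 1 - NormalDist(mu=mean, sigma=sd).cdf(CUTOFF)  # P(IQ > cutoff)
    print(f"{name}: ~{n * p_above:,.0f} undergrads above IQ {CUTOFF}")
```

Under those assumptions, the state school’s sheer size (and wider spread) yields at least as many very smart undergrads as the elite school’s higher mean does, consistent with the point above; different assumed means and SDs could flip the comparison, though.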
Thanks, Geoffrey!

You seem surprisingly confident that you know the “raw general intelligence” of your classes in general, and of the subgroup who would be competitive for EA jobs in particular. Isn’t there a danger that you are conflating “aptitude for EA ideas” with intelligence? Or even that the aspy intelligence associated with EA fluency might be misconstrued as very high general intelligence?
I’m more open to the idea that “EA orthodoxy” is a quality that is very unevenly distributed, and in many jobs would have an outsize impact on effectiveness. Less convinced that general intelligence is one of those things.
Stan—those are legitimate concerns: there might be some circularity in judging general intelligence by how well students understand EA concepts in a classroom context.
I do have a pretty good sense of my university undergrads’ overall intelligence distribution from teaching many other classes on many topics over the last 23 years, and knowing the SAT and ACT distributions of the undergrads.
Within each class, I guess I’m judging overall intelligence mostly from participation in class discussions and online discussion forums, and term paper proposals, revisions, and final drafts.
As I mentioned, it would be nice to have more quantitative, representative data on how well IQ predicts the capacity to understand EA concepts—and on whether certain other traits (e.g. Aspy-style thinking, Openness) add predictive validity over and above IQ.
I agree with your point about broadening beyond elite institutions, and there’s also an interesting argument that a focus on elite institutions could select for undesirable qualities as well as intelligence—e.g. a preoccupation with jumping through well-defined hoops to achieve social status, and a general disregard for “little people”. For example, in 2014 a Yale prof wrote:
Our system of elite education manufactures young people who are smart and talented and driven, yes, but also anxious, timid, and lost, with little intellectual curiosity and a stunted sense of purpose: trapped in a bubble of privilege, heading meekly in the same direction, great at what they’re doing but with no idea why they’re doing it.
...
I taught many wonderful young people during my years in the Ivy League—bright, thoughtful, creative kids whom it was a pleasure to talk with and learn from. But most of them seemed content to color within the lines that their education had marked out for them… Everyone dressed as if they were ready to be interviewed at a moment’s notice.
...
It is true that today’s young people appear to be more socially engaged than kids have been for several decades and that they are more apt to harbor creative or entrepreneurial impulses. But it is also true, at least at the most selective schools, that even if those aspirations make it out of college—a big “if”—they tend to be played out within the same narrow conception of what constitutes a valid life: affluence, credentials, prestige.
...
...what these institutions mean by leadership is nothing more than getting to the top. Making partner at a major law firm or becoming a chief executive, climbing the greasy pole of whatever hierarchy you decide to attach yourself to. I don’t think it occurs to the people in charge of elite colleges that the concept of leadership ought to have a higher meaning, or, really, any meaning.
...
...Kids at less prestigious schools are apt to be more interesting, more curious, more open, and far less entitled and competitive.
...
...Selecting students by GPA or the number of extracurriculars more often benefits the faithful drudge than the original mind.
I suspect that the “elite institution mindset” played an important role in the FTX debacle, and is also a big reason why the EA movement has contributed to AI racing. Throwing money at people with strong resumes often means throwing money at status seekers, and “winning the AI race” (probably more accurate to say “speedrunning human extinction”) could sadly be seen as a way to gain status. As Jack Clark put it:
In AI, like in any field, most of the people who hold power are people who have been very good at winning a bunch of races. It’s hard for these people to not want to race and they privately think they should win the race.
...
Pretty much everyone who works on AI thinks that they’re ‘one of the good people’. Statistically, this is unlikely to be the case.
...
Proactively giving up power is one of the hardest things for people to do. Giving up power and money is even harder. AI orgs are rapidly gathering power and money and it’s not clear they have the right incentives to willfully shed their own power. This sets us up for racing.
PS I should add that, when I taught EA concepts to my undergrads at the Chinese University of Hong Kong—Shenzhen (CUHK-SZ) (c. 2020-2021), which is a much more cognitively selective university than the American state university where I usually teach, the Chinese undergrads had a much easier time understanding the EA ideas, despite having much lower familiarity with other aspects of Euro-American culture, the charity system, the Rationalist subculture, etc.
So I take this as (weak but suggestive) evidence that cognitive ability is a major driver of ability to understand EA principles.
Also, of course, if EA principles were easy for ordinary people to develop and master, they probably would have been developed and mastered much earlier in history.