Aside, which I’m adding after having written the rest of the comment: I think most EAs would agree that intelligence isn’t a moral virtue. Nonetheless, I think there can be a tendency in EA to praise intelligence (and its signals) in a way that borders on or insinuates moralization.
In a more just world, being called “unintelligent” or even “stupid” wouldn’t seem much different than being called “unattractive,” and being called smart or even “brilliant” wouldn’t leave anyone gushing with pride.
Nice post. I started writing a comment that turned into a couple pages, which I hope will become a post along the lines of:
No really, what about ‘dumb’ EAs? Have there been any attempts to really answer this question?
There seems to be a consensus among smart, well-connected EAs that impact is heavy-tailed, so everyone but the top 0.1–10% (of people who call themselves EA!) is treated as a rounding error.
I think this is largely true from the perspective of a hiring committee that can fill only one role, but the standard qualitative interpretation might break down when there is a large space of possible roles.
I know this isn’t at all an original point and I’m sure there are better write-ups elsewhere, but one thing my brain keeps coming back to is that “2*15=30.”
A more appropriate model/question might be: “what is n such that 1.1^n=100^7?”
I didn’t know what the answer was when writing it down, and didn’t have much of a guess. Exponentials are almost never intuitive, so my type-1 brain wouldn’t have been surprised to see “n=19.43” or “n=6.2*10^14” pop up.
The real answer is n≈338.
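For anyone who wants to check that figure, n falls straight out of taking logs of both sides; a quick sketch in Python:

```python
import math

# Solve 1.1**n == 100**7 for n by taking logs of both sides:
# n * ln(1.1) = 7 * ln(100)  =>  n = 7 * ln(100) / ln(1.1)
n = 7 * math.log(100) / math.log(1.1)
print(round(n))  # 338
```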
The implication here (very tentatively, I haven’t thought much about this) is that finding (and putting to work) 338 people who are 1.1xs is just as good and important as finding seven 100xs.
I think (once again, tentatively) that this literal equation might be a not-terrible model of the actual situation.
From the perspective of EA as a whole, the right response to a heavy tailed distribution might be to enthusiastically find and make good use of as many 1.1xs as we can!
I tentatively think the magnitude 1.1 is justifiably as high as it is because:
(1) there is probably a range of IQ (maybe like 110–130?) that is sufficient for a decent-to-good grasp of EA concepts yet not sufficient to push the cutting edge of intellectual work; and
(2) as a result, you can’t simply buy this labor on the market, since the macroeconomy hasn’t yet adjusted to the existence of EA megadonors!
In theory, with an ‘infinite EA money printer’, we’d eventually see education and industry trying to crank out people who can do the types of jobs I’m imagining, from market incentives alone. I don’t think this is going to happen any time soon.
In economics it’s pretty important (and, I’m pretty sure, quite often empirically true) that low- and high-skilled labor can be complements. That is, increasing the marginal product of low-skilled labor causally increases the marginal product of high-skilled labor.
Worth noting, though, that the actual macroeconomy isn’t a great model for EA, because a big chunk of the general labor market is low-skilled, whereas in EA it is not.
A final point: I am virtually certain that most people reading this who haven’t looked at relevant statistics recently are overestimating the abilities of the average person, of any nationality. As just one stat I quickly came across: “according to the U.S. Department of Education, 54% of U.S. adults 16-74 years old—about 130 million people—lack proficiency in literacy, reading below the equivalent of a sixth-grade level.”
I’m mostly saying this to emphasize that there is a real zone of people, like Olivia, who can be described not merely as “smarter than average” and “dumber than average in EA” but as “way smarter than average, capable of doing useful cognitive work, capable of understanding core EA ideas, and also dumber than the average EA.”
I think at least some more work can and should go to figuring out what a person who falls in this category should do.
I’m also saying this to tell (as far as I have read) literally every single person commenting “i’M nOt tHat SmarT I dIdN’T gO tO aN iVy LeAgUe CoLlEgE” that they are wrong, and it’s worth noting that no standard signal of intelligence suffices to get a person accepted to any top-10 US school.
The chart below is of applicants to Stanford from my (public) high school.
Note that a weighted GPA of 5 would indicate that a person took exclusively AP classes during high school, which isn’t possible because of mandatory non-APs like P.E. and health.
Well, looks like the “I started writing a comment but…” comment has itself metastasized 🙃
I don’t understand the relevancy of this question. Can you elaborate a bit? :)
Yeah, that was written way too hastily haha.
The idea is that, currently, CB+hiring seems to think that finding seven (or any small number of) people who can each multiply some idea/project/program’s success by 100 is a big win, because this multiplies the whole thing by 100^7 = 10^14! This is the kind of thing an “impact is heavy-tailed” model naively seems to imply we should do.
But then I’m asking “how many people who can each add 10% to a thing’s value would be as good as finding those seven superstars”?
If the answer were something like 410,000, it would seem like finding the seven is the easier thing to do. But since the answer is 338, I think it would be easier to find and put to use those 338 people.
Hmm, I’m skeptical of this model. It seems like it would be increasingly difficult to achieve a constant 1.1x multiplier as you add more people.
For example, it would be much harder for Apple’s 300th employee to increase their share price by 10% compared to their 5th employee.
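This objection can be made concrete with a toy decay schedule (the 1 + 0.1/k form below is purely illustrative, not something anyone in this thread proposed): if each successive person’s multiplier shrinks with headcount, the product ends up nowhere near the 10^14 target.

```python
# Toy comparison (illustrative assumption: the k-th hire's multiplier
# decays as 1 + 0.1/k instead of staying a constant 1.1).
def total_multiplier(n_people, decay=False):
    product = 1.0
    for k in range(1, n_people + 1):
        product *= (1 + 0.1 / k) if decay else 1.1
    return product

constant = total_multiplier(338)               # ~1e14: matches the 100^7 target
shrinking = total_multiplier(338, decay=True)  # only ~1.9x under the toy decay
```

Under the constant-1.1x assumption 338 people reproduce the heavy tail; under even this mild decay they barely double the baseline, which is the crux of the disagreement.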
Maybe edit your original comment? I think it’s information that is worth explaining more clearly.