Here are my own attempts to answer this:
Qualitatively, I think the appropriate claim, from both my (shallow) understanding of the intelligence ∩ work performance literature and some other literature on related topics, plus personal impressions/anecdotes/intuitions, goes something like this:
Intelligence (general mental ability) is the most general predictive feature for performance that we have, but it’s still not all that predictive in absolute terms.
Quantitatively, my current best estimate is that the correlation between intelligence and impact* among self-identified highly-engaged EAs is ~0.55** (explaining ~30% of variance). My guess is that we do not have substantial data to do better than ~0.7 (~50% of variance explained).
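For reference, "variance explained" is just the square of the correlation coefficient; a quick sketch with the two estimates above (the 0.55 and 0.7 figures are the ones from this comment, not established values):

```python
# Variance explained is the square of the correlation coefficient (r^2).
r_best_estimate = 0.55   # current best guess from the comment above
r_ceiling = 0.7          # rough ceiling on what our data could support

print(f"r = {r_best_estimate} -> variance explained = {r_best_estimate**2:.0%}")
print(f"r = {r_ceiling}  -> variance explained = {r_ceiling**2:.0%}")
```

This is where "~0.55 explains ~30% of variance" and "~0.7 explains ~50%" come from.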
I don’t know whether other EAs agree with me here. My current guess is that numerically sensitive ones probably have numbers that aren’t too far off (maybe slightly lower?), while people who are less numerically/statistically sensitive will initially claim correlations that are higher.
However, this (if true) would likely be a general bias, rather than an intelligence-specific bias. I would further predict that EAs (at least ones who haven’t read this comment) will systematically overestimate the importance of other predictors as well, across a wide range of fields.
I think these numbers may seem pretty low compared to our intuitions for how important smarts are. I don’t know how to reconcile these intuitions exactly, except to again note that there are many other fields where intuitions dramatically overestimate correlations relative to reality.
*Here impact is operationalized loosely as “on a log-scale, what prediction-evaluation setups would say about someone’s past impact five years from now.”
**The precision of these numbers does not imply confidence.
Thanks for attempting to hold yourself to the standards you wish to see in others (although hypocrisy can be warranted sometimes). :)
Just to be clear, is your claim causal? I.e., would you claim that if we magically increased a sufficiently large sample of random HEAs’ IQ by 10%, then we’d see a 3% increase in the group’s five-year impact compared to a control group? (Please take this as a purely hypothetical scenario where you don’t have to worry about the tractability of raising IQ, etc.)
That IQ is the greatest predictor of five-year impact compared to everything else we could plausibly measure psychometrically (e.g., Grit/conscientiousness, openness (one of the Big Five), self-efficacy, courage, and psychological well-being)?
What’s the best resource you have for this claim? I’d love a couple of concrete papers.
That with all of the data we currently have available for EAs we can’t predict more than 50% of the variance in impact?
I apologize for how technical-sounding what I say below is. I think the actual underlying ideas are not particularly difficult, but as a practical matter I don’t want to invest the time to translate them into more normal English right now. If things I say sound confusing, I do apologize; assume by default it’s a communication failure on my end for relatively straightforward concepts.
Yes, I think it is, broadly speaking.
There are some nuances here: at sufficiently large scales, we run into issues where doubling quality-adjusted labor has a less-than-2x effect on total impact, but at smaller scales this shouldn’t be an issue.
To be clear, I think raising actual IQ by 10% will have a larger effect than a 3% increase in impact, because of the way the scales work:
IQ is normed such that the mean is 100 and the standard deviation is 15. This means that a difference of 30 points is much more than a 30% change.
In my original comment, I’m saying that raising someone by 10% of a standard deviation (s.d.) in intelligence will produce, in expectation, a 3%-of-an-s.d. increase in impact.
10 IQ points is ~0.67 s.d., so this means a ~0.22 s.d. increase in impact, on a log scale.
Embarrassingly, I don’t have a strong/consistent intuition for what the log scale of impact actually is (is one s.d. closer to 1.6x? 2x? 5x? 10x?), so I don’t have a coherent view of what this translates to in terms of actual impact.
But for most plausible parameters, I think this will cash out to greater than 3%.
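To make the scale conversion concrete, here is a sketch of the arithmetic, under my own assumption that impact is roughly log-normal and that one s.d. of log-impact corresponds to multiplying impact by some fold factor; the 0.33 slope is inferred from the comment’s own numbers (~0.22 s.d. of impact per ~0.67 s.d. of IQ), and the candidate fold factors are the ones floated above:

```python
# Convert "10 IQ points" into s.d. units, then into s.d. of log-impact.
iq_points = 10
sd_iq = 15
slope = 0.33                             # s.d. of log-impact per s.d. of IQ (inferred)
delta_z_iq = iq_points / sd_iq           # ~0.67 s.d. of intelligence
delta_z_impact = slope * delta_z_iq      # ~0.22 s.d. of log-impact

# If one s.d. of log-impact means multiplying impact by `fold_per_sd`,
# a 0.22 s.d. shift multiplies impact by fold_per_sd ** 0.22.
for fold_per_sd in [1.6, 2, 5, 10]:
    multiplier = fold_per_sd ** delta_z_impact
    print(f"{fold_per_sd}x per s.d. -> impact multiplied by ~{multiplier:.2f}")
```

For every candidate fold factor, the resulting multiplier is well above 1.03, i.e., considerably more than a 3% increase in actual impact.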
Yes, if we are talking about psychologically valid constructs that have a coherent English-language meaning.
A caveat here is that you can imagine that in the future we develop a scale, “impact-ness,” that correlates more strongly with actual future impact.
Either because of (bad) overfitting, or because of (mostly good) attempts to develop a really good psychometric scale, at the cost of the scale no longer being intuitively sound as a construct.
And then after you develop that scale, you can rename “impact-ness” to something like “agentiness” or “moral courage” or “effectiveness mindset” or w/e, but we should be aware that it’s unlikely that a scale that’s developed for prediction shares our common-language intuitions for what that scale represents.
Obviously, I’m not aware of papers on moral impact. The closest I have is things like the work performance literature:
This is the paper I get most intuitions from: https://psycnet.apa.org/record/1998-10661-006?doi=1
I have an intuition that the world of humans is rather unpredictable (and this is borne out in most social-science-y things I’ve read), such that explaining >>50% of predictable individual variation is quite hard. I could be wrong about generalizability, e.g., because EAs are out-of-distribution in predictable ways, or because there’s enough range restriction among self-identified EAs to make the prediction task easier.
(though the latter would be a bit surprising; I think range restriction usually makes predicting variance harder).
To be clear, I’m willing to be corrected on the overall point; I’m sure many people on the Forum read more social science than me.
Thanks for the disclaimer.
It does seem important to understand the underlying scale dynamic. However, it’s still unclear to me how to evaluate this claim, as it depends a lot on the underlying Theory of Impact for the impact scale. E.g., I’d imagine it’d be more or less relevant depending on the role (e.g., it might hold more for a researcher than for a community-builder or coach). Practically, I’d also claim that a strong focus on IQ among existing HEAs is less valuable. I.e., the answer to “how can we best increase the expected impact of HEAs?” is unlikely to involve things directly related to IQ. E.g., anecdotally, I can say that things such as emotional stability (the opposite of neuroticism) and concrete ways of increasing conscientiousness are much more likely to come up (if I restrict the search to validated constructs).
We might already have such a scale with the proto-EA scale. Additionally, I think it’s valuable to look for other proxies for impact (e.g., having done impressive things like starting a non-profit at an early age).
Thanks. That paper does seem to report correlations in the ballpark you’re suggesting, although I haven’t had time to think about to what extent I find it convincing.
I agree. Especially because our model of what’s impactful is likely to change quite substantially across time (5-10 years).