The article claims that an important trait (smarts) is overrated as a precondition to impact, while giving some caveats and mostly specific reasons why smarts is not maximally predictive. But this is not much evidence that a factor is overrated! (Unless you are trying to argue against a correlation of 1, which, as OP noted, nobody actually believes.) The only exception here is the runaway IQ-signaling point, which is indeed an argument (a bias) for why we might be wrong about relative values, rather than just a claim about absolute values. However, OP does not consider biases that may cause us to underrate smarts, making this not very helpful even as a qualitative judgment.
Without a numerical score for where you think the community currently is on how smarts is rated, plus a numerical score for where you think the community should be, it's very hard for me to evaluate the correctness of this claim. In the absence of quantitative scores, I'd have benefited from a rank ordering or some more precise qualitative claims.
More precisely, I’d like to see:
How much you think “smarts” explains absolute variance in impact among EAs.
How much you think “smarts” explains predictable variance in impact among EAs (if smarts explains 10%, but 90% is noise, then smarts is the best and in fact only metric we care about)
How much you think the community currently believes "smarts" explains absolute variance in impact among EAs.
How much you think the community currently believes "smarts" explains predictable variance in impact among EAs.
I’m guilty of the pattern of making a relative claim without mentioning levels myself (see point #4), so it feels hypocritical to point this out. Nonetheless, we can grow stronger as a community if people are willing to be hypocritical for the greater good. :)
Qualitatively, I think the appropriate claim, from both my (shallow) understanding of the intelligence ∩ work-performance literature and some other literature on related topics, plus personal impressions/anecdotes/intuitions, is:
Intelligence (general mental ability) is the most general predictive feature for performance that we have, but it’s still not all that predictive in absolute terms.
Quantitatively, my current best estimate is that the correlation between intelligence and impact* among self-identified highly-engaged EAs is ~0.55** (explaining ~30% of variance). My guess is that we do not have sufficient data to do better than ~0.7 (~50% of variance explained).
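(For readers less used to these conversions: "variance explained" is just the square of the correlation coefficient. A minimal sketch, using the numbers above:)

```python
# Variance explained is the square of the correlation coefficient r.
def variance_explained(r: float) -> float:
    return r ** 2

# The estimates discussed above:
print(variance_explained(0.55))  # ~0.30, i.e. ~30% of variance
print(variance_explained(0.70))  # ~0.49, i.e. ~50% of variance
```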
I don’t know whether other EAs agree with me here. My current guess is that numerically sensitive ones probably have numbers that aren’t too far off (maybe slightly lower?), while people who are less numerically/statistically sensitive will initially claim correlations that are higher.
However, this (if true) would likely be a general bias, rather than an intelligence-specific bias. I would further predict that EAs (at least ones who haven’t read this comment) will systematically overestimate the importance of other predictors as well, across a wide range of fields.
I think these numbers may seem pretty low compared to our intuitions for how important smarts are. I don’t know how to reconcile these intuitions exactly, except to again note that there are many other fields where intuitions dramatically overestimate correlations relative to reality.
*Here impact is operationalized loosely as "on a log-scale, what prediction-evaluation setups would say about someone's past impact five years from now."
**Precision of numbers does not imply confidence.
Thanks for attempting to hold yourself to the standards you wish to see in others (although hypocrisy can be warranted sometimes). :)
Just to be clear, is your claim causal? I.e., would you claim that if we magically increased the IQ of a sufficiently large random sample of HEAs by 10%, then we'd see a 3% increase in the group's five-year impact compared to a control group? (Please take this as a purely hypothetical scenario where you don't have to worry about the tractability of raising IQ, etc.)
That IQ is the greatest predictor of five-year impact compared to everything else we could plausibly measure psychometrically (e.g., grit/conscientiousness, openness (one of the Big Five), self-efficacy, courage, and psychological well-being)?
What’s the best resource you have for this claim? I’d love a couple of concrete papers.
That with all of the data we currently have available for EAs we can’t predict more than 50% of the variance in impact?
I apologize for the technical-sounding nature of what I say below. I think the actual underlying ideas are not particularly difficult, but as a practical matter I don't want to invest the time to translate them into more ordinary English right now. If anything I say sounds confusing, I do apologize; assume by default that it's a communication failure on my end for relatively straightforward concepts.
Yes, I think it is, broadly speaking.
There are some nuances here where at sufficiently large scales, we run into issues where doubling quality-adjusted labor has a lower than 2x effect on total impact, but at smaller scales this shouldn’t be an issue.
To be clear, I think raising actual IQ by 10% will have a larger effect than a 3% increase in impact, because of how the scales work:
IQ is normed such that the mean is 100 and a standard deviation is 15. This means that a difference of 30 points (two standard deviations) is much more than a 30% change in relative standing.
In my original comment, I'm saying that raising someone by 10% of a standard deviation (s.d.) in intelligence will, in expectation, produce an increase of 3% of an s.d. in impact.
10 IQ points is ~0.67 s.d., so this means a ~0.22 s.d. increase in impact, on a log scale.
Embarrassingly, I don't have a strong/consistent intuition for what the log scale of impact actually is (is the base closer to 1.6x? 2x? 5x? 10x?), so I don't have a coherent view of what this translates to in terms of actual impact.
But for most plausible parameters I think this will cash out to greater than 3%.
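(To make the scale conversion above explicit, here is a minimal sketch in Python. The ~0.33 slope is my reading of the 10%-to-3% and 0.67-to-0.22 conversions in this comment, not a figure taken from the literature:)

```python
# IQ is normed to mean 100, standard deviation 15.
IQ_SD = 15.0

def impact_sd_shift(iq_points: float, slope: float = 0.33) -> float:
    """Expected shift in impact (in s.d.'s, on a log scale) for a given
    increase in IQ points, assuming a linear slope of ~0.33 s.d. of impact
    per s.d. of intelligence (an illustrative, assumed parameter)."""
    return (iq_points / IQ_SD) * slope

# 10 IQ points is ~0.67 s.d. of intelligence, which under this slope
# corresponds to ~0.22 s.d. of (log-scale) impact.
print(round(impact_sd_shift(10), 2))  # 0.22
```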
Yes, if we are talking about psychologically valid constructs that have a coherent English-language meaning.
A caveat here is that you can imagine that in the future we develop a scale, "impact-ness," that correlates more strongly with actual future impact.
Either because of (bad) overfitting, or because of (mostly good) attempts to develop a really good psychometric scale, at the cost of being intuitively sound as a construct.
And then after you develop that scale, you can rename "impact-ness" to something like "agentiness" or "moral courage" or "effectiveness mindset" or whatever, but we should be aware that a scale developed for prediction is unlikely to share our common-language intuitions for what it represents.
Obviously, I'm not aware of papers on moral impact. The closest I have is things like the work performance literature. This is the paper I get most of my intuitions from: https://psycnet.apa.org/record/1998-10661-006?doi=1
I have an intuition that the world of humans is rather unpredictable (and this is borne out in most social-science-y things I've read), such that predicting much more than 50% of individual variation is quite hard. I could be wrong about generalizability, e.g., because EAs are out-of-distribution in predictable ways, or because there's enough range restriction among self-identified EAs to make the prediction task easier.
(Though the latter would be a bit surprising; I think range restriction usually makes predicting variance harder.)
To be clear, I'm willing to be corrected on the overall point; I'm sure many people on the Forum read more social science than I do.
It does seem important to understand the underlying scale dynamic. However, it's still unclear to me how to evaluate this claim, as it depends a lot on the underlying Theory of Impact for the impact scale. E.g., I'd imagine it'd be more or less relevant depending on the role (it might hold more true for a researcher than for a community-builder or coach). Practically, I'd also claim that a strong focus on IQ among existing HEAs is less valuable. I.e., the answer to "how can we best increase the expected impact of HEAs?" is unlikely to involve things directly related to IQ. Anecdotally, things such as emotional stability (the opposite of neuroticism) and concrete ways of increasing conscientiousness are much more likely to come up (if I restrict the search to validated constructs).
We might already have such a scale with the proto-EA scale. Additionally, I think it’s valuable to look for other proxies for impact (e.g., having done impressive things like starting a non-profit at an early age).
Thanks. That paper does seem to propose correlations in the ballpark you’re suggesting although I haven’t had the time to think about to what extent I find this convincing.
I agree. Especially because our model of what’s impactful is likely to change quite substantially across time (5-10 years).
Thanks for your comment, Linch. :) It's a fair point that my post was quite vague on some key points, and your comment provides a great invitation for me to try to clarify my claims and views a bit.
The article claims that an important trait (smarts) is overrated as a precondition to impact
I actually wouldn’t say that that’s my core claim, although I do agree with it.
My claim about overemphasis relates more to the level of actions, norms, and practical focus than it relates to predictions about how much variance in impact IQ accounts for. (This is somewhat apropos the distinction between procedural vs. declarative knowledge as well as the intention-behavior gap.)
That is, it’s possible that we’re mostly right about how much variance different factors predict (or at least that we would be right on reflection, cf. your note in the other comment about how our immediate intuitions might be wrong), yet that we’re nonetheless off in terms of how much we focus on developing and selecting for those respective factors in practice (including, and perhaps especially, when it comes to less tangible “focus promoters” such as norms, informal prestige conferral, and daydreams).
So I think IQ is probably somewhat descriptively overrated (more on this below), but I think the degree to which it is overemphasized at the level of norms, actions, and salient decision criteria is considerably stronger. One line of evidence I have for this is how often I see references to smarts, including in internal discussions related to career and hiring decisions, compared to other important traits.
How much do I think these other things are underemphasized, in quantitative terms? It is difficult for me to put a precise number on it, but my sense is that it would be good if most of the other traits and virtues I listed were to receive at least twice as much attention as they currently do, both in terms of how much time people devote to cultivating them in personal development efforts as well as in terms of how often these virtues are emphasized in the broader discourse among aspiring effective altruists. And beyond neglectedness, a reason to focus more on these other traits relative to smarts at the level of what we seek to develop individually and incentivize collectively is that those other traits and virtues likely are more elastic and improvable than is IQ — which isn’t to say that IQ cannot also be improved.
How well does IQ predict “impact”?
Next, regarding the question of how well IQ predicts impact, I think this depends critically on how we define “impact”. This may feel like a trivial point, but please bear with me as I try to explain where I’m coming from. :)
I like that you specified in your other comment that you estimated impact roughly in terms of "what prediction-evaluation setups would say about someone's past impact five years from now." That's a clearly specified point in time.
However, I think it’s likely that impact assessments will diverge substantially depending on the timeframe (cf. our vast uncertainty over time and the “Three Mile Island effect”). This also relates to the virtues I listed in the post.
For example, I think it’s possible (perhaps ~10 percent likely) that the community ends up going in a highly suboptimal direction due to focusing too exclusively on metrics such as “number of publications” or “useful theoretical insights provided” over, say, a five-year period, while neglecting less tangible factors such as interpersonal kindness and social health, which may gradually — in less noticeable ways that might only become apparent over longer timespans — lead to corrosion, burnout, or conflicts. (And the lack of emphasis on such less tangible factors might also be driving people away in the short term, in ways that are probably easy to miss by potential evaluators of impact.)
Likewise, it could be that factors such as “attention to social aspects” explain relatively little individual variation in impact, yet that they are nonetheless critical in terms of the community’s success or failure. (Similar to how individual variation in some traits is less predictive of certain outcomes than is country-level variation. Indeed, individual-level success is not always conducive to collective success — sometimes it’s even detrimental to it; altruistic behaviors that are too babbler bird-esque might be a concrete example of that.)
Finally, I think the point about clarifying fundamental issues, specifically fundamental values, is critical. After all, an impact evaluation that is made relative to some pre-specified set of values (that is held constant) may diverge greatly from an evaluation — even a five-year evaluation — that also factors in moral reflection, and which evaluates impact based on the updated values endorsed on reflection. Such reflection and consequently updated evaluative criteria may even flip the sign of one’s impact.
I’d expect IQ to be significantly better correlated with impact based on the former kind of evaluation (where I might roughly agree with your estimates in the case of a five-year assessment*) vs. the latter evaluation (which in idealized terms one could think of as “an impact evaluation made relative to the values that the person would endorse if they had focused chiefly on value exploration their entire life” — something that more limited value reflection efforts could presumably approximate).
In the latter case, IQ might still come close to being the main predictor, but I suspect that a construct tracking “focus on fundamental values” might do even better among aspiring EAs (not least because changes in fundamental values can change the consequent evaluations a lot). That’s one of the reasons I think it’s worth focusing much more on fundamental values. :)
Like Linch, I do not see how you present any arguments for your main conclusion in the post. You argue that EA overrates IQ but present no arguments that this is the case. Your response also doesn't present any arguments for that conclusion.
As noted above, my main claim is not that “EA overrates IQ” at a purely descriptive level, but rather that other important traits deserve more focus in practice (because those other important traits seem neglected relative to smarts, and also because — at the level of what we seek to develop and incentivize — those other traits seem more elastic and improvable).
I noted in the comment above that:
one line of evidence I have for this is how often I see references to smarts, including in internal discussions related to career and hiring decisions, compared to other important traits.
Without directly quoting anyone, I can, to be more specific, say that I’ve seen relatively senior people in EA imply that certain EA organizations (including CRS, where I work) will be eager to hire applicants if they are extremely smart. That’s the kind of sentiment I feel I’ve seen quite often, and with which I strongly disagree, because being “extremely smart” is far from being sufficient, even if the person in question has altruistic values.
How much you think “smarts” explains absolute variance in impact among EAs.
How much you think “smarts” explains predictable variance in impact among EAs (if smarts explains 10%, but 90% is noise, then smarts is the best and in fact only metric we care about)
How much you think the community currently believes "smarts" explains absolute variance in impact among EAs.
How much you think the community currently believes "smarts" explains predictable variance in impact among EAs.
A very quick response by someone not very numerical and lacking much recent information on the relevant literature related to IQ:
1/2: A lot (say 50%) if you assume we measure impact via something like research publications, and assume the presence of mediators such as individual and independent tasks (i.e., no collaboration), good (mental) health, static agents (e.g., no feedback loops from agents regularly reflecting, self-improving, recalibrating, or changing career paths), motivation, etc. Maybe 10% beyond an IQ of 120 if you assume a variety of impact pathways (e.g., introducing highly competent people/organisations to EA, doing operations work to amplify the impact of intelligent people, and taking personal risks to set up needed projects with high expected value), while not assuming that any of the above mediators (e.g., mental health) are present.
3/4: 50%, but without realising the assumptions that are plugged in and mentioned above. Most of us know smarter people who are not able to work with others, not in good mental health, not strongly EA-aligned, not very motivated to do the work, or not very interested in improving themselves or changing their minds.
As this suggests, I think that EAs tend to assume that intelligence is closer to sufficient for impact than it actually is. Part of this is my expectation that they tend to i) think of simple, single impact/assessment scenarios and ii) assume the presence of the other needed ingredients.
Some tangential thoughts:
Much if not most impact probably comes via collaboration with other smart people. However, some of the smartest people I know could not easily collaborate in a startup-type setting and were therefore, from an entrepreneurial perspective, less valuable than less intelligent but more socially skilled/patient/humble alternatives. In such cases, hiring based on intelligence could produce bad outcomes.
As I see it, many of the highest impacts in EA come from bringing good people into the community rather than from actually doing work that is seen as high-value. This does not seem to load much on intelligence and is instead more about other competencies, such as social skills, access to networks, and networking interest and ability. However, my experience of hiring decisions here suggests that signals of intelligence are overweighted relative to social skills.
The article claims that an important trait (smarts) is overrated as a precondition to impact, while giving some caveats and mostly specific reasons for why smarts is not maximally predictive. But this is not much evidence that a factor is overrated! (Unless you are trying to argue against a correlation factor of 1, which as OP noted, nobody actually believes). The only exception here is the runaway IQ signaling point, which is indeed an argument (bias) for us to be wrong about relevant values rather than just a claim about factors for absolute values. However, OP do not consider biases that may cause us to underrate smarts, making this not very helpful even for a qualitative judgmental take.
Without a numerical score for what you think the current community is at with regard to how smarts is rated, plus a numerical score for what is correct or where you think the community should be, it’s very hard for me to evaluate the correctness of this claim. In the absence of a quantitative score, I’d have benefited from a rank ordering or some more precise qualitative claims.
More precisely, I’d like to see:
How much you think “smarts” explains absolute variance in impact among EAs.
How much you think “smarts” explains predictable variance in impact among EAs (if smarts explains 10%, but 90% is noise, then smarts is the best and in fact only metric we care about)
How much you think the community currently believes “smarts” explain absolute variance in impact among EAs.
How much you think the community currently believes “smarts” explains predictable variance in impact among EAs
I’m guilty of the pattern of making a relative claim without mentioning levels myself (see point #4), so it feels hypocritical to point this out. Nonetheless, we can grow stronger as a community if people are willing to be hypocritical for the greater good. :)
Here are my own attempts to answer this:
Qualitatively, I think the appropriate claim from both my (shallow) understanding of the intelligence ∩ work performance literature and some other literature on related topics, plus personal impressions/anecdotes/intuitions goes
Quantitatively, my current best estimate is that correlation between intelligence and impact* among self-identified highly-engaged EAs is ~0.55** (explains ~30% of variance). My guess is that we do not have substantial data to do better than ~0.7 (~50% of variance explained).
I don’t know whether other EAs agree with me here. My current guess is that numerically sensitive ones probably have numbers that aren’t too far off (maybe slightly lower?), while people who are less numerically/statistically sensitive will initially claim correlations that are higher.
However, this (if true) would likely be a general bias, rather than an intelligence-specific bias. I would further predict that EAs (at least ones who haven’t read this comment) will systematically overestimate the importance of other predictors as well, across a wide range of fields.
I think these numbers may seem pretty low compared to our intuitions for how important smarts are. I don’t know how to reconcile these intuitions exactly, except to again note that there are many other fields where intuitions dramatically overestimate correlations relative to reality.
*Here impact is operationalized loosely as “on a log-scale, what prediction-evaluation setups would say about someone’s past impact five years from now.”
**precision of numbers do not imply confidence.
Thanks for attempting to hold yourself to the standards you wish to see in others (although hypocrisy can be warranted sometimes). :)
Just to be clear, is your claim causal? I.e., would you claim that if we magically increased a sufficiently large sample of random HEA’s IQ by 10%, then we’d see a 3% increase in the groups five-year impact compared to a control group? (please take this as a purely hypothetical scenario where you don’t have to worry about tractability of raising IQ, etc.)
That IQ is the greatest predictor of five-year impact compared to everything else we could plausibly measure psychometrically (e.g., Grit/conscientiousness, openness (one of the BIG5), self-efficacy, courage, and psychological well-being)?
What’s the best resource you have for this claim? I’d love a couple of concrete papers.
That with all of the data we currently have available for EAs we can’t predict more than 50% of the variance in impact?
I apologize for the technical soundingness of what I said below. I think the actual underlying ideas are not particularly difficult, but as a practical manner I don’t want to invest the time to translate them to more normal English right now. If things I say sound confusing, I do apologize. Assume by default it’s a communication failure on my end for relatively straightforward concepts.
Yes, I think it is, broadly speaking.
There are some nuances here where at sufficiently large scales, we run into issues where doubling quality-adjusted labor has a lower than 2x effect on total impact, but at smaller scales this shouldn’t be an issue.
To be clear, I think raising actual IQ by 10% will have a larger effect than increasing impact by 3%, because of the ways the scales work
IQ is normed such that mean is 100 and a standard deviation is 15. This means that a difference of 30 points is much more than a 30% change.
In my original comment, I’m saying that raising someone by 10% of a standard deviation (s.d.) in intelligence will have, in expectation, a 3% increase in s.d.’s of impact.
10 IQ points is ~.67 s.d.s, so this means a .22 s.d. in impact increased, on a log-scale
Embarrassingly I don’t have a strong/consistent intuition for what the log-scale of impact actually is (is it closer to 1.6x? 2x? 5x? 10x?), so I don’t have a coherent view of what this translates to in terms of actual impact
But for most plausible parameters I think this will cash out to greater than 3%.
Yes, if we are talking about psychologically valid constructs that have a coherent English-language meaning.
A caveat here is that you can imagine that in the future we develop a scale “impact-ness” that correlate more strongly with actual future impact.
Either because of (bad) overfitting, or because of (mostly good) attempts to develop a really good psychometric scale, at the cost of being intuitively sound as a construct.
And then after you develop that scale, you can rename “impact-ness” to something like “agentiness” or “moral courage” or “effectiveness mindset” or w/e, but we should be aware that it’s unlikely that a scale that’s developed for prediction shares our common-language intuitions for what that scale represents.
Obviously, I’m not aware of papers on moral impact. The closest I have is things like the work performance literature:
This is the paper I get most intuitions from: https://psycnet.apa.org/record/1998-10661-006?doi=1
I have an intuition that the world of humans is rather unpredictable (and this is borne out in most social science-y things I’ve read) such that getting >>50% on predictable individual variation is quite hard . I could be wrong about generalizability, e.g. because EAs are out-of-distribution in predictable ways, or because there’s enough range-restriction among self-identified EAs that makes the prediction task easier.
(though the latter will be a bit surprising, I think usually range-restriction makes predicting variance harder).
I’m willing to be corrected on the overall point to be clear, I’m sure many people on the Forum read more social science than me.
Thanks for the disclaimer.
It does seem important to understand the underlying scale dynamic. However, it’s still unclear to me how to evaluate this claim as it depends a lot on the underlying Thery of Impact for the impact scale. E.g., I’d imagine that it’d be more or less relevant depending on the role (e.g., it might hold more true for a researcher than a community-builder or coach). Practically, I’d also claim that a strong focus on IQ among existing HEAs are less valuable. I.e., the answer to “how can we best increase the expected impact of HEA?” is unlikely to involve things directly related to IQ. E.g., anecdotally, I can say things such as emotional stability (opposite of neuroticism) and concrete ways of increasing conscientiousness is likely much more likely to come up (if I restrain the search query to validated constructs).
We might already have such a scale with the proto-EA scale. Additionally, I think it’s valuable to look for other proxies for impact (e.g., having done impressive things like starting a non-profit at an early age).
Thanks. That paper does seem to propose correlations in the ballpark you’re suggesting although I haven’t had the time to think about to what extent I find this convincing.
I agree. Especially because our model of what’s impactful is likely to change quite substantially across time (5-10 years).
Thanks for your comment, Linch. :)
It’s a fair point that my post was quite vague on some key points, and your comment provides a great invitation for me to try to clarify my claims and views a bit.
I actually wouldn’t say that that’s my core claim, although I do agree with it.
My claim about overemphasis relates more to the level of actions, norms, and practical focus than it relates to predictions about how much variance in impact IQ accounts for. (This is somewhat apropos the distinction between procedural vs. declarative knowledge as well as the intention-behavior gap.)
That is, it’s possible that we’re mostly right about how much variance different factors predict (or at least that we would be right on reflection, cf. your note in the other comment about how our immediate intuitions might be wrong), yet that we’re nonetheless off in terms of how much we focus on developing and selecting for those respective factors in practice (including, and perhaps especially, when it comes to less tangible “focus promoters” such as norms, informal prestige conferral, and daydreams).
So I think IQ is probably somewhat descriptively overrated (more on this below), but I think the degree to which it is overemphasized at the level of norms, actions, and salient decision criteria is considerably stronger. One line of evidence I have for this is how often I see references to smarts, including in internal discussions related to career and hiring decisions, compared to other important traits.
How much do I think these other things are underemphasized, in quantitative terms? It is difficult for me to put a precise number on it, but my sense is that it would be good if most of the other traits and virtues I listed were to receive at least twice as much attention as they currently do, both in terms of how much time people devote to cultivating them in personal development efforts as well as in terms of how often these virtues are emphasized in the broader discourse among aspiring effective altruists. And beyond neglectedness, a reason to focus more on these other traits relative to smarts at the level of what we seek to develop individually and incentivize collectively is that those other traits and virtues likely are more elastic and improvable than is IQ — which isn’t to say that IQ cannot also be improved.
How well does IQ predict “impact”?
Next, regarding the question of how well IQ predicts impact, I think this depends critically on how we define “impact”. This may feel like a trivial point, but please bear with me as I try to explain where I’m coming from. :)
I like that you specified the following in your other comment, namely that you estimated impact roughly in terms of “what prediction-evaluation setups would say about someone’s past impact five years from now”. That’s a clearly specified point in time.
However, I think it’s likely that impact assessments will diverge substantially depending on the timeframe (cf. our vast uncertainty over time and the “Three Mile Island effect”). This also relates to the virtues I listed in the post.
For example, I think it’s possible (perhaps ~10 percent likely) that the community ends up going in a highly suboptimal direction due to focusing too exclusively on metrics such as “number of publications” or “useful theoretical insights provided” over, say, a five-year period, while neglecting less tangible factors such as interpersonal kindness and social health, which may gradually — in less noticeable ways that might only become apparent over longer timespans — lead to corrosion, burnout, or conflicts. (And the lack of emphasis on such less tangible factors might also be driving people away in the short term, in ways that are probably easy to miss by potential evaluators of impact.)
Likewise, it could be that factors such as “attention to social aspects” explain relatively little individual variation in impact, yet that they are nonetheless critical in terms of the community’s success or failure. (Similar to how individual variation in some traits is less predictive of certain outcomes than is country-level variation. Indeed, individual-level success is not always conducive to collective success — sometimes it’s even detrimental to it; altruistic behaviors that are too babbler bird-esque might be a concrete example of that.)
Finally, I think the point about clarifying fundamental issues, specifically fundamental values, is critical. After all, an impact evaluation that is made relative to some pre-specified set of values (that is held constant) may diverge greatly from an evaluation — even a five-year evaluation — that also factors in moral reflection, and which evaluates impact based on the updated values endorsed on reflection. Such reflection and consequently updated evaluative criteria may even flip the sign of one’s impact.
I’d expect IQ to be significantly better correlated with impact based on the former kind of evaluation (where I might roughly agree with your estimates in the case of a five-year assessment*) vs. the latter evaluation (which in idealized terms one could think of as “an impact evaluation made relative to the values that the person would endorse if they had focused chiefly on value exploration their entire life” — something that more limited value reflection efforts could presumably approximate).
In the latter case, IQ might still come close to being the main predictor, but I suspect that a construct tracking “focus on fundamental values” might do even better among aspiring EAs (not least because changes in fundamental values can change the consequent evaluations a lot). That’s one of the reasons I think it’s worth focusing much more on fundamental values. :)
Like Linch, I do not see how you present any arguments for your main conclusion in the post. You argue that EA overrates IQ but present no arguments that this is the case. Your response also doesn’t present any arguments for that conclusion.
As noted above, my main claim is not that “EA overrates IQ” at a purely descriptive level, but rather that other important traits deserve more focus in practice (because those other important traits seem neglected relative to smarts, and also because — at the level of what we seek to develop and incentivize — those other traits seem more elastic and improvable).
I noted in the comment above that:
Without directly quoting anyone, I can, to be more specific, say that I’ve seen relatively senior people in EA imply that certain EA organizations (including CRS, where I work) will be eager to hire applicants if they are extremely smart. That’s the kind of sentiment I feel I’ve seen quite often, and with which I strongly disagree, because being “extremely smart” is far from being sufficient, even if the person in question has altruistic values.
“my main claim is not that “EA overrates IQ” at a purely descriptive level, but rather that other important traits deserve more focus in practice”
The claim that EA overrates IQ is the same as the claim that other traits deserve more attention.
A very quick response from someone who isn’t very numerical and lacks much recent knowledge of the relevant IQ literature:
1/2: A lot (say 50%) if you assume we measure impact via something like research publications, and assume the presence of mediators such as individual, independent tasks (i.e., no collaboration), good (mental) health, motivation, and static agents (i.e., no feedback loops from agents regularly reflecting, self-improving, recalibrating, and changing career paths). Maybe 10% beyond an IQ of 120 if you instead assume a variety of impact types (e.g., introducing highly competent people/organisations to EA, doing operations work to amplify the impact of intelligent people, and taking personal risks to set up needed projects with high expected value), while not assuming that any of the above mediators (e.g., mental health) are present.
3/4: 50%, but without realising the assumptions that are plugged in and mentioned above. Most of us know very smart people who are unable to work with others, not in good mental health, not strongly EA-aligned, not very motivated to do work, or not very interested in improving themselves or changing their minds on things.
As this suggests, I think EAs tend to assume that intelligence is closer to being sufficient for impact than it actually is. Part of this is my expectation that they tend to i) think of simple, single impact/assessment scenarios and ii) assume the presence of the other needed ingredients.
Some tangential thoughts:
Much, if not most, impact probably comes via collaboration with other smart people. However, some of the smartest people I know could not easily collaborate in a startup-type setting and were therefore, from an entrepreneurial perspective, less valuable than less intelligent but more socially skilled/patient/humble alternatives. In such cases, hiring based on intelligence could produce bad outcomes.
As I see it, many of the highest impacts in EA come from bringing good people into the community rather than from actually doing work that is seen as high value. This does not seem to load much on intelligence and is instead more about other competencies, such as social skills, access to networks, and networking interest and ability. However, my experience of hiring decisions here suggests that signals of intelligence are overweighted relative to social skills.