I apologize for how technical what I said below sounds. I think the actual underlying ideas are not particularly difficult, but as a practical matter I don’t want to invest the time to translate them into more normal English right now. If things I say sound confusing, I do apologize; assume by default it’s a communication failure on my end for relatively straightforward concepts.
Yes, I think it is, broadly speaking.
There are some nuances here: at sufficiently large scales, we run into issues where doubling quality-adjusted labor has a less than 2x effect on total impact, but at smaller scales this shouldn’t be an issue.
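To make that concrete, here is a minimal sketch, assuming (purely for illustration, not a number from this thread) a concave production function where impact scales as labor^0.8:

```python
# Illustrative diminishing returns: the 0.8 exponent is an assumption,
# chosen only to show the shape of the effect, not an estimated value.
for labor in (1, 2, 4):
    print(f"{labor}x labor -> {labor ** 0.8:.2f}x impact")
# 1x labor -> 1.00x impact
# 2x labor -> 1.74x impact  (less than 2x)
# 4x labor -> 3.03x impact  (less than 4x)
```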
To be clear, I think raising actual IQ by 10% will have a larger effect than a 3% increase in impact, because of the way the scales work.
IQ is normed such that the mean is 100 and the standard deviation is 15. This means that a difference of 30 points (2 s.d.s) is much more than a 30% change.
In my original comment, I’m saying that raising someone by 10% of a standard deviation (s.d.) in intelligence will, in expectation, raise them by 3% of an s.d. in impact.
10 IQ points is ~0.67 s.d.s, so this means a ~0.22 s.d. increase in impact, on a log scale.
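To spell out the arithmetic (the slope of ~1/3 s.d. of impact per s.d. of intelligence is my reading of the 10%-to-3% claim above, so treat it as an assumption):

```python
IQ_SD = 15          # IQ is normed to mean 100, s.d. 15
slope = 1 / 3       # assumed: s.d.s of impact gained per s.d. of intelligence

iq_gain_points = 10
iq_gain_sds = iq_gain_points / IQ_SD     # ~0.67 s.d.
impact_gain_sds = iq_gain_sds * slope    # ~0.22 s.d. (on a log scale)

print(f"{iq_gain_sds:.2f} s.d. of IQ -> {impact_gain_sds:.2f} s.d. of impact")
```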
Embarrassingly, I don’t have a strong/consistent intuition for what the log scale of impact actually is (is one s.d. closer to 1.6x? 2x? 5x? 10x?), so I don’t have a coherent view of what this translates to in terms of actual impact.
But for most plausible parameters I think this will cash out to greater than 3%.
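As a rough check (a sketch under the assumption that impact is log-normal, so a 0.22 s.d. shift in log-impact multiplies median impact by m^0.22, where m is the per-s.d. multiplier guessed at above):

```python
# For each guess about how much one s.d. of log-impact multiplies impact,
# translate a 0.22 s.d. shift into a multiplicative change.
shift_sds = 0.22
for m_per_sd in (1.6, 2, 5, 10):
    multiplier = m_per_sd ** shift_sds
    print(f"{m_per_sd}x per s.d. -> {multiplier:.2f}x impact "
          f"(~{(multiplier - 1) * 100:.0f}% increase)")
# 1.6x per s.d. -> 1.11x impact (~11% increase)
# 2x per s.d. -> 1.16x impact (~16% increase)
# 5x per s.d. -> 1.42x impact (~42% increase)
# 10x per s.d. -> 1.66x impact (~66% increase)
```

All of these come out well above 3%.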
Yes, if we are talking about psychologically valid constructs that have a coherent English-language meaning.
A caveat here is that you can imagine that in the future we develop a scale of “impact-ness” that correlates more strongly with actual future impact.
Either because of (bad) overfitting, or because of (mostly good) attempts to develop a really good psychometric scale, at the cost of the scale being intuitively sound as a construct.
And then after you develop that scale, you can rename “impact-ness” to something like “agentiness” or “moral courage” or “effectiveness mindset” or whatever, but we should be aware that a scale developed for prediction is unlikely to share our common-language intuitions for what it represents.
Obviously, I’m not aware of papers on moral impact. The closest I have is things like the work performance literature:
This is the paper I get most intuitions from: https://psycnet.apa.org/record/1998-10661-006?doi=1
I have an intuition that the world of humans is rather unpredictable (and this is borne out in most social science-y things I’ve read), such that getting >>50% of individual variation to be predictable is quite hard. I could be wrong about generalizability, e.g. because EAs are out-of-distribution in predictable ways, or because there’s enough range restriction among self-identified EAs to make the prediction task easier.
(Though the latter would be a bit surprising; I think range restriction usually makes prediction harder, as the toy simulation below illustrates.)
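As an illustration of that point (a toy simulation of my own, assuming a bivariate normal relationship with a true correlation of 0.5 and selection on the top ~16% of the predictor; none of these numbers come from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
predictor = rng.standard_normal(n)
# Build an outcome with a true correlation of 0.5 to the predictor.
outcome = 0.5 * predictor + np.sqrt(1 - 0.5**2) * rng.standard_normal(n)

full_r = np.corrcoef(predictor, outcome)[0, 1]
selected = predictor > 1.0   # range restriction: keep top ~16% on the predictor
restricted_r = np.corrcoef(predictor[selected], outcome[selected])[0, 1]

print(f"full-range r = {full_r:.2f}, range-restricted r = {restricted_r:.2f}")
# full-range r = 0.50, range-restricted r comes out around 0.25
```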
I’m willing to be corrected on the overall point, to be clear; I’m sure many people on the Forum read more social science than I do.
Thanks for the disclaimer.
It does seem important to understand the underlying scale dynamic. However, it’s still unclear to me how to evaluate this claim, as it depends a lot on the underlying Theory of Impact for the impact scale. E.g., I’d imagine it would be more or less relevant depending on the role (e.g., it might hold more true for a researcher than for a community-builder or coach). Practically, I’d also claim that a strong focus on IQ among existing HEAs is less valuable. I.e., the answer to “how can we best increase the expected impact of HEAs?” is unlikely to involve things directly related to IQ. E.g., anecdotally, I can say that things such as emotional stability (the opposite of neuroticism) and concrete ways of increasing conscientiousness are much more likely to come up (if I restrict the search query to validated constructs).
We might already have such a scale with the proto-EA scale. Additionally, I think it’s valuable to look for other proxies for impact (e.g., having done impressive things like starting a non-profit at an early age).
Thanks. That paper does seem to propose correlations in the ballpark you’re suggesting, although I haven’t had time to think about the extent to which I find it convincing.
I agree. Especially because our model of what’s impactful is likely to change quite substantially over time (5-10 years).