Returns to intelligence are very large, because outcomes on most tasks are heavy-tailed, following a log-normal or power-law distribution rather than a normal one. From Gwern on human intelligence:
Turning to human intelligence, the absolute range of human intelligence is very small: differences in reaction times are small, backwards digit spans range from 3–7, brain imaging studies have difficulty spotting neurological differences, the absolute genetic influence on intelligence is on net minimal, and this narrow range may be a general phenomenon about humans (Wechsler 1935); and yet, in human society, how critical are these tiny absolute differences in determining who will become rich or poor, who will become a criminal, who will do cutting-edge scientific research, who will get into the Ivy Leagues, who will be a successful politician, and this holds true as high as IQ can be measured reliably (see TIP / SMPY etc).
Something similar has been noted about human intelligence—while any particular day-to-day decision has little to do with intelligence, the effects of intelligence are consistently beneficial and accumulate over a lifetime, so the random noise starts to cancel out, and intelligence is seen to have strong correlations with long-term outcomes (eg. Gottfredson 1997). More abstractly, many career or intellectual outcomes have been noticed to follow a roughly log-normal distribution; a log-normal distribution can be generated when an outcome is the end result of a ‘leaky pipeline’ (scientific output might be due to motivation times intelligence times creativity…), in which case a small improvement on each variable can yield a large improvement in the output. Such a leaky pipeline might be simply a long sequence of actions, where advantage can build up (eg. if there is a small chance of making a blunder with each action).
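To make the ‘leaky pipeline’ arithmetic concrete, here is a minimal simulation sketch (my illustration, not from Gwern’s text; the choice of 5 factors and the noise level sigma=0.3 are arbitrary assumptions). It shows that the product of several independent positive factors is heavy-tailed, and that a small edge on every factor compounds multiplicatively:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 'leaky pipeline': output is the product of several independent positive
# factors (e.g. motivation * intelligence * creativity * ...). Applying the
# central limit theorem to log(output), the product of many such factors is
# approximately log-normal, hence heavy-tailed.
n_factors = 5        # assumed number of pipeline stages (illustrative)
n_people = 100_000
factors = rng.lognormal(mean=0.0, sigma=0.3, size=(n_people, n_factors))
output = factors.prod(axis=1)

# Heavy tail: the top 1% capture a disproportionate share of total output.
top_1pct = np.sort(output)[-n_people // 100:]
print(f"top 1% share of total output: {top_1pct.sum() / output.sum():.1%}")

# A small edge on every factor compounds multiplicatively.
print(f"gain from +10% on each of {n_factors} factors: "
      f"{1.10 ** n_factors - 1:.0%}")
```

The key arithmetic is the last line: a 10% improvement on each of five multiplied factors yields roughly a 61% improvement in expected output (1.1^5 ≈ 1.61), which is why small per-stage advantages can produce large differences in final outcomes.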
My claim is not about the returns to high IQ in general, but about the returns to high IQ in EA. It’s about the value of intelligence as a criterion we use when reaching out to potential EAs. I think EA places an incredibly high premium on analytical intelligence right now, which is justified given the advantages you point out. But EA is a movement that needs all kinds of people, and in particular it needs capabilities that may be completely orthogonal to analytical intelligence. I gave the example of how intelligence may be much less important than EQ/emotional intelligence for someone who is doing community building or communicating EA to a larger audience.
I think you understand me as staking a claim in this larger debate about human intelligence. I am not.
Thank you, I’ll remove the downvotes. On priors, a lot of discussion on this topic tends to go bad places fast, so I wanted to head that off.