I’m also in psychology research, and I echo your frustrations about a lot of AI research having a very vague, misguided, and outdated notion of what human intelligence is.
Specifically, psychologists use ‘intelligence’ in at least two ways: (1) it can refer (e.g. in cognitive psychology or evolutionary psychology) to universal cognitive abilities shared across humans, but (2) it can also refer (in IQ research and psychometrics) to individual differences in cognitive abilities. Notably, ‘general intelligence’ (aka the g factor, as indexed by IQ scores) is a psychometric concept, not a description of a cognitive ability.
The idea that humans have a ‘general intelligence’ as a distinctive mental faculty is a serious misunderstanding of the last 120 years of intelligence research, and makes it pretty confusing when AI researchers talk about ‘Artificial General Intelligence’.
(I’ve written about these issues in my books ‘The Mating Mind’ and ‘Mating Intelligence’, and in lots of papers available here, under the headings ‘Cognitive evolution’ and ‘Intelligence’.)
Seems like the problem is that the field of AI uses a different definition of intelligence? From Chapter 4 of Human Compatible:
Before we can understand how to create intelligence, it helps to understand what it is. The answer is not to be found in IQ tests or even in Turing tests, but in a simple relationship between what we perceive, what we want, and what we do. Roughly speaking, an entity is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived.
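One way to make that definition precise (my own gloss, closer in spirit to Legg and Hutter’s ‘universal intelligence’ measure than to anything Russell writes out formally here) is to model the agent as a policy $\pi$ mapping percept histories to actions, and grade it by how well its actions are expected to serve its preferences:

$$\text{Intelligence}(\pi) \;\propto\; \mathbb{E}\big[\, U(\text{outcome}) \,\big|\, a_t = \pi(o_{1:t}) \,\big]$$

Here $U$ stands for ‘what it wants’, $o_{1:t}$ for ‘what it has perceived’, and $a_t$ for ‘what it does’. Nothing in this formulation requires human-like cognition, which is why even a bacterium can score above zero.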
To me, this definition seems much broader than the g factor. As an illustrative example, Russell discusses how E. coli exhibits intelligent behavior.
As E. coli floats about in its liquid home (your lower intestine), it alternates between rotating its flagella clockwise, causing it to tumble in place, and counterclockwise, causing the flagella to twine together into a kind of propeller, so the bacterium swims in a straight line. Thus, E. coli does a sort of random walk—swim, tumble, swim, tumble—that allows it to find and consume glucose rather than staying put and dying of starvation. If this were the whole story, we wouldn’t say that E. coli is particularly intelligent, because its actions would not depend in any way on its environment. It wouldn’t be making any decisions, just executing a fixed behavior that evolution has built into its genes. But this isn’t the whole story. When E. coli senses an increasing concentration of glucose, it swims longer and tumbles less, and it does the opposite when it senses a decreasing concentration of glucose. So what it does (swim toward glucose) is likely to achieve what it wants (more glucose, let’s assume) given what it has perceived (an increasing glucose concentration).
Perhaps you were thinking “But evolution built this into its genes too, how does that make it intelligent?” This is a dangerous line of reasoning, because evolution built the basic design of your brain into your genes too, and presumably you wouldn’t wish to deny your own intelligence on that basis. The point is that what evolution has built into E. coli’s genes, as it has into yours, is a mechanism whereby the bacterium’s behavior varies according to what it perceives in its environment. Evolution doesn’t know in advance where the glucose is going to be or where your keys are, so putting the capability to find them into the organism is the next best thing.
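That feedback loop is simple enough to simulate. Below is a toy 1-D run-and-tumble sketch (entirely my own illustration; the concentration field, tumble probabilities, and parameters are made-up assumptions, not anything from the book):

```python
import random

def simulate_chemotaxis(steps=2000, swim_speed=1.0, seed=0):
    """Toy 1-D run-and-tumble chemotaxis; illustrative, not a biophysical model."""
    rng = random.Random(seed)
    x = 100.0                            # start far from the glucose source
    direction = rng.choice([-1, 1])

    def concentration(pos):
        return -abs(pos)                 # glucose peaks at x = 0

    prev_c = concentration(x)
    for _ in range(steps):
        x += direction * swim_speed      # 'swim' phase: move in a straight line
        c = concentration(x)
        # Perception-dependent behavior: tumble rarely when the gradient is
        # improving, often when it is worsening.
        p_tumble = 0.1 if c > prev_c else 0.9
        if rng.random() < p_tumble:
            direction = rng.choice([-1, 1])   # 'tumble': pick a new random heading
        prev_c = c
    return x

print(f"final distance from glucose peak: {abs(simulate_chemotaxis()):.1f}")
```

The only ‘perception’ here is a one-step memory of the previous concentration, yet that asymmetry in p_tumble is what biases the walk toward glucose; set both branches to the same probability and you recover the fixed, environment-independent random walk that Russell says wouldn’t count as intelligent.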
Yes, I think we’re in agreement—the Stuart Russell definition is much closer to my meaning (1) for ‘intelligence’ (i.e. a universal cognitive ability shared across individuals) than to my meaning (2) for ‘intelligence’ (i.e. the psychometric g factor).
The trouble comes mostly when the two are conflated, e.g. when we imagine that ‘superintelligence’ will basically be like an IQ 900 person (whatever that would mean), or when we confuse ‘general intelligence’ as indexed by the g factor with truly ‘domain-general intelligence’ that could help an agent do whatever it wants to achieve, in any domain, given any possible perceptual input.
There’s a lot more to say about this issue; I should write a longer-form post about it soon.