By “general intelligence” I mean “whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems”.
Human brains aren’t perfectly general, and not all narrow AIs/animals are equally narrow. (E.g., AlphaZero is more general than AlphaGo.) But it sure is interesting that humans evolved cognitive abilities that unlock all of these sciences at once, with zero evolutionary fine-tuning of the brain aimed at equipping us for any of those sciences. Evolution just stumbled into a solution to other problems that happened to generalize to billions of wildly novel tasks.
To get more concrete:
AlphaGo is a very impressive reasoner, but its hypothesis space is limited to sequences of Go board states rather than sequences of states of the physical universe. Efficiently reasoning about the physical universe requires solving at least some problems that are different in kind from what AlphaGo solves. (These problems might be solved by the AGI’s programmer, and/or by the algorithm that finds the AGI in program-space; and some may be solved by the AGI itself in the course of refining its thinking.)
E.g., the physical world is too complex to simulate in full detail, unlike a Go board state. An effective general intelligence needs to be able to model the world at many different levels of granularity, and strategically choose which levels are relevant to think about, as well as which specific pieces/aspects/properties of the world at those levels are relevant to think about.
More generally, being a general intelligence requires an enormous amount of laserlike strategicness about which thoughts you do or don’t think: a large portion of your compute needs to be ruthlessly funneled into exactly the tiny subset of questions about the physical world that bear on the question you’re trying to answer or the problem you’re trying to solve. If you fail to be ruthlessly targeted and efficient in “aiming” your cognition at the most useful-to-you things, you can easily spend a lifetime getting sidetracked by minutiae / directing your attention at the wrong considerations / etc.
And given the variety of kinds of problems you need to solve in order to navigate the physical world well / do science / etc., the heuristics you use to funnel your compute to the exact right things need to themselves be very general, rather than all being case-specific. (Whereas we can more readily imagine that many of the heuristics AlphaGo uses to avoid thinking about the wrong aspects of the game state, or thinking about the wrong topics altogether, are Go-specific heuristics.)
GPT-3 is a very impressive reasoner in a different sense (it successfully recognizes many patterns in human language, including a lot of very subtle or conjunctive ones like “when A and B and C and D and E and F and G and H and I are all true, humans often say X”), but it too isn’t doing the “model full physical world-states and trajectories thereof” thing (though an optimal predictor of human text would need to be a general intelligence, and a superhumanly capable one at that).
Some examples of abilities I expect humans to only automate once we’ve built AGI (if ever):
The ability to perform open-heart surgery with a high success rate, in a messy non-standardized ordinary surgical environment.
The ability to match smart human performance in a specific hard science field, across all the scientific work humans do in that field.
In principle, I suspect you could build a narrow system that is good at those tasks while lacking the basic mental machinery required to do par-human reasoning about all the hard sciences. In practice, I very strongly expect humans to find ways to build general reasoners to perform those tasks, before we figure out how to build narrow reasoners that can do them. (For the same basic reason evolution stumbled on general intelligence so early in the history of human tech development.)
(Of course, if your brain has all the basic mental machinery required to do other sciences, that doesn’t mean that you have the knowledge required to actually do well in those sciences. An artificial general intelligence could lack physics ability for the same reason many smart humans can’t solve physics problems.)
When I say “general intelligence is very powerful”, a lot of what I mean is that science is very powerful, and that having all the sciences at once is a lot more powerful than the sum of each science’s impact.
(E.g., because different sciences can synergize, and because you can invent new scientific fields and subfields, and more generally chain one novel insight into dozens of other new insights that critically depended on the first insight.)
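A minimal toy model of that “more than the sum” claim (my own illustrative assumption, not something the essay commits to): suppose each field has a fixed standalone value, and every pair of mastered fields can additionally be combined into one cross-field insight. Then combined value grows roughly quadratically with the number of fields, while the sum of each field’s individual impact grows only linearly.

```python
from math import comb

# Toy illustration (assumed numbers, not figures from the essay): each field has some
# standalone value, and every *pair* of mastered fields unlocks one cross-field
# combination worth some extra value. Total value then grows quadratically in the
# number of fields, while the plain sum of standalone impacts grows only linearly.

STANDALONE = 1.0   # hypothetical value of one field on its own
SYNERGY = 0.5      # hypothetical extra value per pair of fields that can be combined

for n_fields in (1, 5, 20, 50):
    sum_of_parts = n_fields * STANDALONE
    with_synergies = sum_of_parts + comb(n_fields, 2) * SYNERGY
    print(f"{n_fields:3d} fields: sum of parts = {sum_of_parts:6.1f}, "
          f"with pairwise synergies = {with_synergies:8.1f}")
```

The exact constants are arbitrary; the point is only the shape of the curve.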
Another large piece of what I mean is that general intelligence is a very high-impact sort of thing to automate because AGI is likely to blow human intelligence out of the water immediately, or very soon after its invention.
80K gives the (non-representative) example of how AlphaGo and its immediate successors compared to the human ability range on Go:
In the span of a year, AI had advanced from being too weak to win a single [Go] match against the worst human professionals, to being impossible for even the best players in the world to defeat.
I expect “general STEM AI” to blow human science ability out of the water in a similar fashion. Reasons for this include:
Software (unlike human intelligence) scales with more compute.
Current ML uses far more compute to find reasoners than to run reasoners. This is very likely to hold true for AGI as well. (A rough arithmetic sketch at the end of this section illustrates the scale of that gap.)
We probably have more than enough compute already, and are mostly waiting on new ideas for how to get to AGI efficiently, as opposed to waiting on more hardware to throw at old ideas.
Empirically, humans aren’t near a cognitive ceiling, and even narrow AI often suddenly blows past the human reasoning ability range on the task it’s designed for. It would be weird if scientific reasoning were an exception.
See also AlphaGo Zero and the Foom Debate.
Empirically, human brains are full of cognitive biases and inefficiencies. It would be doubly weird if scientific reasoning were an exception, given that human scientific reasoning is visibly a mess, with tons of blind spots, inefficiencies, motivated cognitive processes, and historical examples of scientists and mathematicians taking decades to make technically simple advances.
Empirically, human brains are extremely bad at some of the most basic cognitive processes underlying STEM. E.g., consider that human brains can barely do basic mental math at all.
Human brains underwent no direct optimization for STEM ability in our ancestral environment, beyond things like “can distinguish four objects in my visual field from five objects”. In contrast, human engineers can deliberately optimize AGI systems’ brains for math, engineering, etc. capabilities.
More generally, the sciences (and many other aspects of human life, like written language) are a very recent development. So evolution has had very little time to refine and improve on our reasoning ability in many of the ways that matter.
Human engineers have an enormous variety of tools available to build general intelligence that evolution lacked. This is often noted as a reason for optimism that we can align AGI to our goals, even though evolution failed to align humans to its “goal”. It’s additionally a reason to expect AGI to end up with greater cognitive ability than humans, if engineers actually aim for that.
The hypothesis that AGI will outperform humans has a disjunctive character: there are many different advantages that individually suffice for this, even if AGI doesn’t start off with any other advantages. (E.g., speed, math ability, scalability with hardware, skill at optimizing hardware...)
See also Sources of advantage for digital intelligence.
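As a rough, back-of-the-envelope illustration of the “far more compute to find reasoners than to run them” point above: the 6·N·D (training) and 2·N-per-token (inference) FLOP counts used below are standard approximations for dense neural networks, and the specific parameter and token counts are hypothetical, chosen only to show the rough scale of the ratio.

```python
# Back-of-the-envelope sketch of "finding" vs. "running" a reasoner.
# 6*N*D (training) and 2*N per generated token (inference) are standard rough
# approximations for dense nets; N and D below are hypothetical round numbers.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

def inference_flops(n_params: float, n_tokens: float) -> float:
    """Approximate forward-pass compute: ~2 FLOPs per parameter per generated token."""
    return 2 * n_params * n_tokens

N = 1e11  # hypothetical parameter count
D = 1e12  # hypothetical number of training tokens

train = training_flops(N, D)    # ~6e23 FLOPs to *find* the reasoner
run = inference_flops(N, 1e3)   # ~2e14 FLOPs to *run* it on one 1,000-token answer

print(f"find / run ratio: {train / run:.1e}")  # -> 3.0e+09
```

On these made-up numbers, finding the reasoner costs roughly a billion times more compute than running it on a single long answer, which is the gap that bullet is pointing at.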