[EDIT 5/3/23: My original (fuzzy) definition drew inspiration from this paper by Legg and Hutter. They define an “agent” as “an entity which is interacting with an external environment, problem or situation,” and they define intelligence as a property of some agents.
As they put it: “An agent’s intelligence is related to its ability to succeed in an environment. This implies that the agent has some kind of an objective. Perhaps we could consider an agent intelligent, in an abstract sense, without having any objective. However without any objective whatsoever, the agent’s intelligence would have no observable consequences. Intelligence then, at least the concrete kind that interests us, comes into effect when an agent has an objective to apply its intelligence to. Here we will refer to this as its goal.”
Notably, their notion of “goals” is more general (whatever it means to “succeed”) than other notions of “goal-directedness.”
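For concreteness (this is a rough sketch of the paper’s formalism from memory, so take the details with a grain of salt): their measure of intelligence is a simplicity-weighted sum of how well an agent performs across all computable environments,

$$\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward agent $\pi$ obtains in $\mu$. “Success” is just expected reward, which is part of why their notion of goals is so broad.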
Similarly, the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig defines an agent as “anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.” In Russell’s book, Human Compatible, he further elaborates: “roughly speaking, an entity is intelligent to the extent that what it does is likely to achieve what it wants, given what it has perceived.”
Note that these definitions of “agent” neglect the concept of embedded agency. It is also important to note that the term “agent” has a different meaning in economics.
See this paper for many other possible definitions of intelligence.]
Let’s say an agent is something that takes actions to pursue its goals (e.g. a thermostat, E. coli, humans). Intelligence (in the sense of “general problem-solving ability”; there are many different definitions) is what lets an agent choose effective actions for achieving its goals (specifically the “identify which actions will be effective” part; this is only part of an agent’s overall “ability to achieve its goals,” which some might define as power). Narrow intelligence is when an agent succeeds at one particular task, like chess, using domain-specific skills. General intelligence is when an agent can do a broad range of different tasks with help from domain-general cognitive skills such as logic, planning, pattern recognition, memory, abstraction, learning (figuring out how to do things without knowing how to do them first), and so on. A toy sketch of the degenerate end of this spectrum follows below.
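Here is that sketch (in Python, with made-up names; a real thermostat obviously isn’t implemented this way): an “agent” with one fixed goal and a single hard-coded rule mapping perceptions to actions. More intelligent agents differ in how the act step works (search, planning, learning), not in the basic perceive-then-act loop.

```python
# Toy illustration of the "agent" framing: a thermostat as a minimal agent.
# All names and numbers are made up for illustration.

class Thermostat:
    """A very narrow agent: one goal, one sensor reading, two possible actions."""

    def __init__(self, goal_temp_c: float = 20.0):
        self.goal_temp_c = goal_temp_c  # the agent's (only) goal

    def act(self, sensed_temp_c: float) -> str:
        # The "intelligence" here is just this hard-coded rule mapping a
        # perception to an action that serves the goal; no learning, no planning.
        return "heat_on" if sensed_temp_c < self.goal_temp_c else "heat_off"


agent = Thermostat(goal_temp_c=20.0)
print(agent.act(17.5))  # heat_on
print(agent.act(22.0))  # heat_off
```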
When using the term “intelligence,” we also care about responding to changes in the environment (e.g. a chess AI will win even if the human tries many different strategies). Agents with “general intelligence” should succeed even in radically unfamiliar environments (e.g. I can still find food if I travel to a foreign country I’ve never visited before; I can learn calculus even though nothing like calculus appeared over the course of our evolutionary history); they should be good at adapting to new circumstances.
Artificial general intelligence (AGI) is general intelligence at around the human level. A short and vague criterion is “a system that can do any cognitive task as well as a human or better,” though maybe you only care about economically relevant cognitive tasks. Note that it’s unlikely for a system to achieve exactly human level on all tasks; an AGI will probably be way better than humans at quickly multiplying large numbers (calculators are already superhuman).
However, this definition is fuzzy and imprecise, and the features I’ve described are not perfectly compatible. But this doesn’t seem to be a huge problem: Richard Ngo points out that many important concepts started out this way (e.g. “energy” in 17th-century physics; “fitness” in early-19th-century biology; “computation” in early-20th-century mathematics). Even “numbers” weren’t formalized until the construction of the real numbers and Zermelo–Fraenkel set theory during the 1800s and early 1900s.