On Artificial General Intelligence: Asking the Right Questions
Recently, the Future Fund posted a challenge asking for essays on three propositions about the timing and impact of Artificial General Intelligence (AGI). While the challenge may generate some interesting discussion about the technology of artificial intelligence and scenario-building about its impact on the world, it is fundamentally focused on the wrong questions.
Two of the propositions focus on when AGI will arrive. This makes it seem as though AGI is a natural event, like an asteroid strike or an earthquake. But AGI is something we will create, if we create it at all. And while the 20th century was dominated by a view of science and technology that asked only whether something was technically possible, the 21st century is increasingly recognizing that the crucial question is less whether we can do this than whether we should. Human beings will have to choose to pursue and develop AGI. One question is: why should we?
AGI might be helpful for thinking about complex human problems, but it is doubtful that it would be better than task-specific AI. Task-specific AI has already proven successful at useful, difficult jobs (such as screening tissue samples for cancer and predicting protein-folding structures). Part of what has enabled these successes is the task specificity itself, which allows for clear success/fail criteria for training and for ongoing evaluation.
More general applications of AI (such as automated driving) have not yet proven so successful. General navigation of the world involves far more complex judgments, including ethical judgments, that AI does not handle well.
Now, AI might get better in these real-world contexts, or it might not. We might learn that the complicated judgments needed to drive a car can only be assisted by AI (lane-keeping assistance, parking assistance, and the like), not taken over from human drivers generally. But it is important to note that even if AI could fully drive a car in all the contexts and challenging situations in which humans can drive, that would still be far removed from AGI.
Successful AI driving could be evaluated by the increased safety of driving (the great hope) and by the social acceptability of the AI (a successful crossing of the uncanny valley). But such success is still task specific. How would we know we had successful AGI if and when we created it? It would be nothing like human intelligence, which is shaped not only by information processing but by embodiment and by the emotions central to human existence. Empathy, love, grief, fear, anger, and our other emotions don’t just shape what we know; they shape how we value and how we act on what we know. Both physical and emotional pain set the crucial stakes for our learning. AI cannot be fully imbued with these attributes. Even if we could make failure in some sense physically painful for an AI, emotional pain, the far more potent teacher, is out of reach. We are not even clear on how it works in humans, so we cannot model it properly.
So AGI cannot be like human intelligence. Its generality cannot be tied to the potency of human stakes, including the facts of human mortality and human vulnerability. With task-specific applications, we set the stakes for the AI ourselves, and so we can train such limited-application AI. The stakes for AGI remain irreducibly murky or out of reach.
This raises the question: why, then, pursue AGI? Recall that we don’t have to; no scientist has to. Task-specific AI is useful. What would an AGI utterly different from our own intelligence do for us?
Such reflection clarifies what we should be asking at this point. Why should we pursue AGI? Just because we can is not an adequate answer.
And if we do pursue it, how might such a pursuit be done responsibly? What would count as success for an AGI (recognizing that it will not be like us)? And should we ever allow it control over real things in the real world (cars, packages, airplanes, and so on)? An AGI that controls nothing in the world but serves as an unusual interlocutor for humans might be interesting, but it is not even clear it would be helpful, since it would not be aligned with what is important to us. Why have this kind of thing around?
The development of AGI is up to us as intelligent human actors. We make these choices, and we should make them by asking the right questions.
The Future Fund should be grappling with these questions, and with how to assist scientists in making choices about what to pursue and how to pursue it. Societal responsibility is now an endemic feature of scientific research. Grappling well with that responsibility is what is worth incentivizing, not changing minds about the probabilities of future events.