I agree with Miles that EA often over-emphasizes AGI timelines, and that this has less utility than generally assumed. I’d just add two points: one about the historical context of machine learning and AI research, and one about the relative risks of domain-specific versus ‘general’ AI.
My historical perspective comes from having worked on machine learning since the late 1980s. My first academic publication in 1989 developed a method of using genetic algorithms to design neural network architectures, and has been cited about 1,100 times since then. There was a lot of excitement in the late 80s about the new back-propagation algorithm for supervised learning in multi-layer neural networks. We expected that it would yield huge breakthroughs in many domains of AI in the next decade, the 1990s. We also vaguely expected that AGI would be developed within a couple of decades after that—probably by 2020. Back-propagation led to lots of cool work, but practical progress was slow, and we eventually lapsed into the ‘AI winter’ of the 1990s, until deep learning methods were developed in the 2005-2010 era.
In the last decade, based on deep learning plus fast computers plus huge training datasets, we’ve seen awesome progress in many domain-specific applications of AI, from face recognition to chatbots to visual arts. But have we really made much progress in understanding how to get from domain-specific AI to true AGI of the sort that would impose sudden and unprecedented existential risks on humanity? Have we even learned enough to seriously update our AGI timelines compared to what we expected in the late 1980s? I don’t think so. AGI still seems about 15-30 years away—just as it always has since the 1950s.
Even worse, I don’t think the cognitive sciences have really made much serious progress on understanding what an AGI cognitive architecture would even look like—or how it would plausibly lead to existential risks. (I’ll write more about this in due course.)
My bigger concern is that a fixation on AGI timelines in relation to X risk can distract attention from domain-specific progress in AI that could impose much more immediate, plausible, and concrete global catastrophic risks on humanity.
I’d like to see AI timelines for developing cheap, reliable autonomous drone swarms capable of assassinating heads of state and provoking major military conflicts. Or AI timelines for developing automated financial technologies capable of hacking major asset markets or crypto protocols with severe enough consequences that they impose high risks of systemic liquidation cascades in the global financial system, resulting in mass economic suffering. Or AI timelines for developing good enough automated deepfake video technologies that citizens can’t trust any video news sources, and military units can’t trust any orders from their own commanders-in-chief.
There are so many ways that near-term, domain-specific AI could seriously mess up our lives, and I think they deserve more attention. An over-emphasis on fine-tuning our AGI timelines seems to have distracted quite a few talented EAs from addressing those issues.
(Of course, a cynical take would be that under-researching the near-term global catastrophic risks of domain-specific AI will increase the probability that those risks get realized in the next 10-20 years, and they will cause such social, economic, and technological disruption that AGI research is delayed by many decades. Which, I guess, could be construed as one clever but counter-intuitive way to reduce AGI X risk.)