Another, VERY different stance on AGI:
AGI will be possible very soon (5-10 yrs): neural-to-symbolic work, out-of-distribution generalization, learning from fewer and fewer examples, equivariance, Mixtures of Experts, and Hinton's GLOM are all fragments of a general intelligence.
AGI alignment is likely impossible. We would be pulling a Boltzmann brain out of a hat, instead of relying upon hundreds of millions of years of common sense and socialization. I'd be more likely to trust an alien, because aliens had to evolve and maintain a civilization first.
Yet, I posit that narrow AI will likely provide comparable or superior performance, given the same quantity of compute, for almost all tasks. (While the human brain has 100 trillion synapses, and our neurons are quite complex, contrast that with the most recent language AI, which can perform nearly as well as us with only 7 billion parameters: a "brain" roughly 14,000 times smaller! It does seem that narrow AI is a better use of resources.)
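A quick back-of-the-envelope check of that ratio (using the rough synapse and parameter counts cited above; both are order-of-magnitude figures, not precise measurements):

```python
# Rough scale comparison, using the approximate counts from the text above.
brain_synapses = 100e12  # ~100 trillion synapses in a human brain
model_params = 7e9       # ~7 billion parameters in the language model

ratio = brain_synapses / model_params
print(f"The model is ~{ratio:,.0f}x smaller than the brain")  # ~14,286x
```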
Because narrow AI will be generally sufficient, we would NOT be at a disadvantage if we "play it safe" by NOT pursuing AGI. If "playing it safe is safe", that's a game humanity might win. :)
I understand that full compliance with an AGI ban is unlikely, yet I see our best chances, strategically, in pursuing narrow AI until that domain is tapped out, while vehemently fighting attempts at AGI. Only after narrow AI sees diminishing returns will we have a clear understanding of "what else is left to do that makes AGI so important?" My bet is that AGI will never supply a large enough margin over narrow AI to be worth the risks.
Side-bar prediction: we are already in a lumpy, decades-long "FOOMishness" in which narrow AIs are finding improvements for us, accelerating those same narrow AIs. Algorithms cannot become infinitely more powerful, however, so we are likely to see diminishing returns in coming years. That will make the job of the first AGI very difficult, because each next leap of intelligence will take vastly more resources and time than the last (especially considering the decades of legions of human brains it took to get us this far...).
Interesting, thanks for sharing!
I imagine many others here (myself included) will be skeptical of these parts:
1. Narrow AI will be just as good, particularly for similar development costs. (To me it seems dramatically more work-intensive to make enough narrow AIs.)
2. The idea that really powerful narrow AIs won't speed up AGI and the like.
3. Your timelines (very soon) might be sooner than most others' here, though it's not clear exactly what "possible" means. (I'm sure some here would put some probability mass on 5-10 yrs out, just a different amount.)
I think that very carefully selected narrow AIs could be really great (see Ought, for example), but I am not sure how far and how broadly I'd recommend building narrow AIs.
Oh, I am well aware that most folks are skeptical of "narrow AI is good enough" and "5-10 yrs to AGI"! :) I am not bothered by sounding wrong to the majority. [For example, when I wrote in Feb. 2020 (back when the stock market had only begun to dip and Italy hadn't yet locked down) that the coronavirus would cause a supply-chain disruption at the moment we sought to recover and re-open, which would add a few percent to prices and a prolonged lag to the system, everyone else thought we would see a "sharp V recovery in a couple months". I usually sound crazy at first.]
…and I meant "possible" in the sense that building an AGI would be "within the budget of a large institution". Whether they take that gamble or not is where I focus the "playing it safe might be safe" strategy. If narrow AI is good enough, then we aren't left flat-footed by avoiding AGI. Promoting narrow AI applications, as a result, diminishes the allure of implementing AGI.
Additionally, I should clarify that I think narrow AI is already starting to "FOOM" a little, in the sense that it is feeding itself gains with less and less of our own creative input: a self-accelerating self-improvement, though narrow AI still keeps humans in the loop.
These self-discovered improvements will accelerate AGI as well. Numerous processes, from chip layout to the materials science of fabrication, and even the discovery of superior algorithms to run on quantum computers, will all see multipliers that feed back into the whole program of AGI development: a sort of "distributed FOOM".
Algorithms for intelligence themselves, however, probably have only a 100x or so improvement left, and those gains are likely to be lumpy. Additionally, narrow AI is likely to make enough of those discoveries soon that the work left over for AGI is much more difficult, preventing the pot from FOOMing over completely.
[[And, a side-note: we are only now approaching the six-year anniversary of AlphaGo defeating Lee Sedol, demonstrating with move 37 that it could be creative and insightful about a highly intuitive strategy game. This last year, AlphaFold predicted the structure of nearly every human protein, a task we assumed would take a generation. Cerebras wafer-scale chips will be able to handle 120 trillion parameters this next year, which is "brain-scale". I see this progress as a sign that narrow AI will likely do well enough, meaning we are safe if we stick to narrow-only, AND that AGI will be achievable before 2032, so we should try to stop it with urgency.]]
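(That "brain-scale" label is a loose parameter-count analogy, not a capability claim; a sketch of the comparison, using the same rough counts as above:)

```python
# Loose analogy only: parameters and synapses are not equivalent units.
cerebras_max_params = 120e12  # ~120 trillion parameters (announced supported model size)
brain_synapses = 100e12       # ~100 trillion synapses

print(f"{cerebras_max_params / brain_synapses:.1f}x the brain's synapse count")  # 1.2x
```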
[[Addendum: narrow AI now needs only ten examples from a limited training set in order to generalize outside that distribution… so designing numerous narrow AIs will likely be easy and automated, too, and they will proliferate and diversify the same way arthropods have. Even the language model Codex can write functioning code for an AI system, so AutoML in general makes narrow AI feasible. I expect most AI should be as dumb as possible without failing often. And never let paperclip-machines learn about missiles!]]