My argument against AGI

This is the third post in my attempt to convince the Future Fund Worldview Prize judges that “all of this AI stuff is a misguided sideshow”. My first post was an extensive argument that unfortunately confused many people.

(The probability that Artificial General Intelligence will be developed)

My second post was much more straightforward, but ended up focusing mostly on revealing the reaction that some “AI luminaries” have shown to my argument.
(Don’t expect AGI anytime soon)

Now, having answered many excellent questions that exposed where my argument caused confusion, I believe I am in a position to give a clear and brief summary of the argument in point form.

To set the scene, the Future Fund is interested in predicting when we will have AI systems that can match human-level cognition: “This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs.” This is a pretty tall order. It means systems with advanced planning and decision-making capabilities. But this is not the first time people have predicted such machines. In my first article I reference a 1960 paper which states that the US Air Force predicted such a machine by 1980. That prediction was based on the same “look how much progress we have made, so AGI can’t be too far away” argument we see today. There must be a new argument or belief if today’s AGI predictions are to bear more fruit than they did in 1960. My argument identifies this new belief, and then shows why the belief is wrong.

Part 1

  1. Most of the prevailing cognitive theories involve classical symbol processing systems (with a combinatorial syntax and semantics, like formal logic). For example, theories of reasoning and planning involve logic-like processes, and natural language is widely thought to involve phrase structure grammars, of the same kind that define the syntax of a programming language such as Python.

  2. Good old-fashioned AI was (largely) based on the same assumption, that classical symbol systems are necessary for AI.

  3. Good old-fashioned AI failed, showing the limitations of classical symbol systems.

  4. Deep Learning (DL) is an alternative form of computation that does not involve classical symbol systems, and its amazing success shows that human intelligence is not based on classical symbol systems. In fact, Geoff Hinton, in his Turing Award speech, proclaimed that “the success of machine translation is the last nail in the coffin of symbolic AI”.

  5. DL will be much more successful than symbolic AI because it is based on a better model of cognition: the brain. That is, the brain is a neural network, so clearly neural networks are going to be better models.

  6. But hang on. DL is now very good at producing syntactically correct Python programs. By the logic of point 4, we should conclude that Python does not involve classical symbol systems, because a non-symbolic DL model can write Python. That conclusion is patently false: Python’s syntax is defined by a classical phrase structure grammar (a short sketch of this follows at the end of this part). The argument becomes a reductio ad absurdum. One of its steps must be wrong, and the obvious candidate is point 4, which gives us point 7.

  7. The success of DL in performing some human task tells us nothing about the underlying human competence needed for the task. For example, natural language might well be produced by a generative grammar even though statistical methods currently outperform methods based on parsing.

  8. Point 7 defeats point 5. There is no scientific reason to believe DL will be much more successful than symbolic AI was in attaining some kind of general intelligence.
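To make the Python point in point 6 concrete, here is a minimal sketch. It is only an illustration (the short program being parsed is my own made-up example), using nothing beyond Python’s standard-library ast module: the parser recovers a discrete, combinatorial syntax tree, exactly the kind of classical symbol structure that point 4 claims DL has made irrelevant.

```python
import ast

# A Python program is a production of a phrase structure grammar.
# The standard-library parser recovers its classical symbolic structure:
# a tree of discrete, typed nodes combined by explicit syntactic rules.
source = "def add(x, y):\n    return x + y\n"

tree = ast.parse(source)

# Dump the syntax tree (the indent= argument requires Python 3.9+):
# a combinatorial symbol structure with compositional syntax.
print(ast.dump(tree, indent=4))

# The node types are the grammar's categories (Module, FunctionDef, ...).
print(sorted({type(node).__name__ for node in ast.walk(tree)}))
```

Whatever a DL model is doing internally when it emits a string that this parser accepts, the language itself is still defined by a classical grammar, which is all that point 7 needs.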

Part 2

  1. In fact, some of my work is already done for me, as many of the top experts concede that DL alone is not enough for “AGI”. They propose supplementing DL with a symbolic system, so that it can do planning, high-level reasoning, abductive reasoning, and so on.

  2. The symbolic system should be non-classical, because of Part 1, points 2 and 3. That is, we need something better than classical systems, because good old-fashioned AI failed as a result of its assumptions about symbol systems.

  3. DL-symbol systems (whatever those are) will be much better because DL has already shown that classical symbol systems are not the right way to model cognitive abilities.

  4. But Part 1 point 7 defeats Part 2 point 3. We don’t know that DL-symbol systems (whatever those are) will be much better than classical AI because DL has not shown anything about the nature of human cognition.

  5. We have no good reason, only faith and marketing, to believe that we will achieve AGI by pursuing the DL-based route. The fact that DL can write Python shows that it is good at mimicking symbolic systems when large numbers of example productions are available, as they are for natural language and Python. But it struggles in tasks like planning, where such examples are not available.

  6. We should instead focus our attention on human-machine symbiosis, which explicitly designs systems that supplement rather than replace human intelligence.

Crossposted to LessWrong