Don’t expect AGI anytime soon

This is a brief follow-up to my previous post, The probability that Artificial General Intelligence will be developed by 2043 is Zero, which was probably too long for many people to read. In this post I show how some of the top people in AI reacted to my argument when I made it, briefly, on Twitter.
First, Yann LeCun himself, responding when I reacted to the Browning and LeCun paper that I discuss in my previous post:

As you can see, LeCun’s response is to call the argument “ridiculous”. The reason, I believe, is that LeCun can’t win. At least he understands the argument, which really is a proof that his position is wrong, because either option he takes to defend it will fail. So instead of trying to defend his position, he calls the argument “ridiculous”.

In another discussion, with Christopher Manning, an influential NLP researcher at Stanford, I debate the plausibility of deep learning (DL) systems as models of language. As opposed to LeCun, he actually takes my argument seriously, but he drops out of the exchange when I show that his position is not winnable. That is, the fact that “Language Models” learn Python proves that they are not models of language. (The link to the tweets is https://twitter.com/rogerkmoore/status/1530809220744073216?s=20&t=iT9-8JuylpTGgjPiOoyv2A)

The fact is, Python changes everything, because we know that Python works as a classical symbolic system. We don’t know how natural language or human cognition works; many of us suspect they involve classical symbolic processes, and neural network proponents deny this. But they cannot deny that Python is a classical symbolic language, so they must somehow deal with the fact that their models can mimic these symbolic processes. And they have no way to prove that the same models are not mimicking human symbolic processes in exactly the same way. My claim is that in both cases the mimicking will take you a long way, but not all the way: DL can learn the mappings where the symbolic system produces lots of examples, as language and Python do, but where the symbol system is used for planning, creativity, and the like, DL struggles to learn. I think in ten years everyone will realize this, and AI will look pretty silly (again).
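To make the contrast concrete, here is a minimal sketch of my own (not code from the Twitter threads) of what “classical symbolic system” means in the Python case: execution is explicit, discrete rule application over symbols, with no statistics anywhere. The names and structure below are purely illustrative.

```python
# A toy classical symbolic process: evaluation is pure rule application
# over discrete symbols. No training data, no probabilities.
# (Illustrative sketch only; names are my own invention.)

def evaluate(expr):
    """Recursively evaluate a nested tuple like ('+', 2, ('*', 3, 4))."""
    if isinstance(expr, (int, float)):
        return expr                      # a literal evaluates to itself
    op, left, right = expr               # destructure the symbolic expression
    rules = {
        '+': lambda a, b: a + b,
        '*': lambda a, b: a * b,
    }
    # Apply the explicit rule attached to the operator symbol.
    return rules[op](evaluate(left), evaluate(right))

print(evaluate(('+', 2, ('*', 3, 4))))  # -> 14, the same answer every time
```

A language model that produces the same output arrives at it by an entirely different route: a mapping learned from many observed examples. That difference, between executing symbolic rules and mimicking their outputs, is exactly what the argument turns on.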

In the meantime, we will continue to make progress in many technological areas. Automation will keep improving, and we will have programs that can generate video sequences for amazing video productions. Noam Chomsky likens these technological artefacts to bulldozers: if you want to build bulldozers, fine. Nothing wrong with that. We will have amazing bulldozers. But not “intelligent” ones.

Crossposted to LessWrong