It’s more like these deep learning systems are mimicking Python very well. There’s no actual symbolic reasoning. You believe this... right?
Zooming out and untangling this, I think the following is a bit closer to the issue?
Deep Learning (DL) is an alternative form of computation that does not involve classical symbol systems, and its amazing success shows that human intelligence is not based on classical symbolic systems. In fact, Geoff Hinton in his Turing Award Speech proclaimed that “the success of machine translation is the last nail in the coffin of symbolic AI”.
Why is this right?
There’s no reason to think that any particular computational performance is connected to human intelligence. Why do you believe this? A smartphone is amazingly better than humans at a lot of tasks, but that doesn’t seem to mean anything obvious about the nature of human intelligence.
Zooming out more here, it reads like there’s some sort of beef/framework/grand theory/assertion related to symbolic logic, human intelligence, and AGI that you are strongly engaged in. It reads like you got really into this theory and built up your own argument, but it’s unclear why the claims of this underlying theory are true (or even what they are).
The resulting argument has a lot of nested claims and red herrings (the Python thing) and it’s hard to untangle.
I don’t think the question of whether intelligence is pattern recognition, or symbolic logic, is the essence of people’s concerns about AGI. Do you agree or not?
I’m not sure this statement is correct or meaningful (in the context of your argument), because learning Python syntactically isn’t what’s hard; expressing logic in Python is. And I don’t know what this expression of logic means in your theory. I don’t think you addressed it, and I can’t really fill in where it fits in your theory.
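To make the syntax-vs-logic distinction concrete, here is a minimal illustration (the function names and the median example are mine, purely hypothetical): both functions below are flawless Python *syntax*, and a system that had only learned Python at the syntactic level could emit either one, but only the first expresses the intended logic.

```python
def median_correct(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)          # the logically essential step
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


def median_wrong(values):
    """Syntactically identical in shape, but the logic is wrong:
    it skips the sort, so it returns the middle of the raw input."""
    n = len(values)
    mid = n // 2
    if n % 2 == 1:
        return values[mid]            # only correct if input was already sorted
    return (values[mid - 1] + values[mid]) / 2
```

No Python parser can tell these two apart in quality; only reasoning about what the code is supposed to compute can.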
Charles, you are right, there is a deep theoretical “beef” behind these issues, but it is not my beef. The debate between “connectionist” neural network theories and symbol-based theories raged through the 1980s and 1990s. These were really good scientific debates based on empirical results. Connectionism faded away because it did not prove adequate to explain a lot of challenges. Geoff Hinton was a big part of that debate.
When compute power and data availability grew so fantastically in the 2010s, DL started to have the practical success you see today. Hinton re-emerged victorious and has been wildly attacking believers in symbolic systems ever since. In fact, there is a video of him deriding the EU for being tricked into continued funding of symbolic AI research!
I prefer to stay with scientific argumentation and claim that the fact that DL can produce Python defeats Hinton’s claim (not mine) that DL machine translation proves that language is not a symbolic process.
I literally spent about 30 minutes reading your post to try to figure out what is going on. I don’t think what I wrote above is relevant, or the issue, anymore.
Basically, I think what you did was write a narration to yourself, with things that are individually basically true, but that no one claims are important. You also slip in claims like “human cognition must resemble AGI for AGI to happen” without making a tight argument.
You then point this resulting reasoning at your final point: “We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL based AI route.”.
Also, it’s really hard to follow; there are things in this argument that read like a triple negative.
Honestly, both my decision to read this and my subsequent performance in untangling it make me think I’m pretty dumb.
For example, you say that “DL is much more successful than symbolic AI because it’s closer to the human brain”, and you say this is “defeated” later. Ok. That seems fine.
Later you “defeat” the claim that:
“DL-symbol systems (whatever those are) will be much better because DL has already shown that classical symbol systems are not the right way to model cognitive abilities.”
You say this means:
We don’t know that DL-symbol systems (whatever those are) will be much better than classical AI because DL has not shown anything about the nature of human cognition.
But no one is talking about the nature of human cognition being related to AI?
This is your final point before claiming that AGI can’t come from DL or “symbol-DL”.
Charles, thanks for spending so much time trying to understand my argument. I hope my previous answer helps. Also I added a paragraph to clarify my stance before I give my points.
Also, you say that I “slip in claims like ‘human cognition must resemble AGI for AGI to happen’”. I don’t think I said that. If I did, I must correct it.