This observation might have been made in one of the papers under discussion, but:
Gary Marcus argues that, in order to make further progress, AI needs to combine symbolic and DL solutions into hybrid systems. As a rebuttal to Marcus, Jacob Browning and Yann LeCun argue that there is no need for such hybrids because symbolic representations can “emerge” from neural networks: “the neural network approach has traditionally held that we don’t need to hand-craft symbolic reasoning but can instead learn it: Training a machine on examples of symbols engaging in the right kinds of reasoning will allow it to be learned as a matter of abstract pattern completion.”
I would say that “human intelligence” has been substantially boosted by the invention of programmable computers and formal mathematical systems. Both inventions situate the rules for symbol manipulation mostly outside the human brain. Thus it seems you could say that humans + computers form a “hybrid system”, and that humans are much more responsible for the non-symbolic parts of the system than for the symbolic ones.
Regarding the post: joint probabilities over sequences of characters are perfectly capable of encoding mappings from strings specifying grammars to classifiers that assess whether a certain sequence obeys the grammar or not. Are you saying that DL-based language models can’t do this, even in principle? This seems wrong to me.
Thanks David. Indeed, I completely agree that humans use external symbolic systems to enhance their ability to think. Writing is a clear example. Shopping lists too.
And to answer your last question: indeed, I am saying exactly that DL-based language models CAN do this, i.e., they can classify grammatical strings. But in doing so they act as a tool that can perhaps simplify the task. The correct way to check the grammar of a Python string is to look up the BNF, but you can also take shortcuts, especially with simple strings.
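For concreteness, here is a minimal sketch of the “look up the BNF” route (the example strings are my own): CPython’s standard ast module exposes the reference parser, so checking a string against Python’s actual grammar reduces to asking the parser.

```python
import ast

def is_valid_python(source: str) -> bool:
    """Check a string against Python's actual grammar by invoking the parser."""
    try:
        ast.parse(source)  # raises SyntaxError on ungrammatical input
        return True
    except SyntaxError:
        return False

print(is_valid_python("x = [1, 2, 3]"))  # True
print(is_valid_python("x = [1, 2, 3"))   # False: unclosed bracket
```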
What I’m saying is that a joint probability can encode “how to check a Python string against the relevant grammar”. Learning such a joint probability (a procedure which may not involve actually seeing any Python strings) seems difficult, but seeming difficulty isn’t nearly enough to convince me that it’s impossible.
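To make the “joint probability as classifier” idea concrete, here is a toy sketch of my own (the training corpus, the add-one smoothing, and the threshold are all assumptions chosen purely for illustration): a character-bigram model defines a joint probability over strings, and thresholding that probability yields a grammaticality classifier.

```python
import math
from collections import defaultdict

# Toy "grammatical" corpus -- an assumption purely for illustration;
# a real model would be trained on vastly more data.
corpus = ["x = 1", "y = 2", "x = y", "y = x + 1"]

alphabet = sorted(set("".join(corpus)))
counts = defaultdict(lambda: defaultdict(int))
for s in corpus:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

def log_prob(s: str) -> float:
    """Joint log-probability of s under an add-one-smoothed character bigram model."""
    lp = 0.0
    for a, b in zip(s, s[1:]):
        num = counts[a][b] + 1
        den = sum(counts[a].values()) + len(alphabet)
        lp += math.log(num / den)
    return lp

def looks_grammatical(s: str, threshold: float = -2.0) -> bool:
    """Threshold the average per-bigram log-probability to get a classifier.
    The threshold value is an arbitrary choice for this sketch."""
    if len(s) < 2:
        return False
    return log_prob(s) / (len(s) - 1) > threshold

print(looks_grammatical("x = y"))  # True with this toy corpus and threshold
print(looks_grammatical("===="))   # False: its bigrams never occur in the corpus
```

A DL language model simply replaces the bigram table with a far more expressive estimator of the same kind of joint probability.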
Right, but I see that as a problem only if you also claim that Python doesn’t have a relevant grammar. Of course everyone knows it has a grammar, so no one claims this. But people DO claim that natural language does not have a grammar, and that is what I have a problem with. If they said natural language has a grammar and “neural networks can check a natural language string against the relevant grammar”, I would have no problem. But then these people would not be in a position to claim that they have discovered something new about language, just as we are not in a position to claim that we have discovered anything new about Python.