I think I may have said something to confuse the issue. Artificial neural networks certainly ARE capable of representing classical symbolic computations. In fact, the first neural networks (e.g. the perceptron) did just that. They typically did so with local representations, where individual nodes assume the role of representing a given variable. But these networks were not very good at other tasks, such as generalisation.
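To make the localist idea concrete, here is a minimal sketch (my own illustration, not from the discussion) of a single perceptron unit computing a symbolic rule, with each input node dedicated to exactly one variable:

```python
import numpy as np

# Localist encoding: node 0 stands for variable A, node 1 for variable B.
# A single perceptron unit with hand-set weights computes the symbolic
# rule "A AND B" -- no learning needed.
def perceptron_and(a: int, b: int) -> int:
    x = np.array([a, b], dtype=float)  # one node per variable
    w = np.array([1.0, 1.0])
    threshold = 1.5
    return int(x @ w > threshold)

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} -> {perceptron_and(a, b)}")  # AND truth table
```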
More advanced distributed networks emerged, with DL being the newest incarnation. These have representations that make it very difficult (if not impossible) to dedicate nodes to variables. This does not worry the architects, because they specifically believe that the non-localised representation is what makes these networks so powerful (see Bengio, LeCun, and Hinton’s Turing Award article).
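Here is a small numerical sketch (again my own illustration, with made-up 8-dimensional vectors) of why a distributed code resists pointing at “the node for X”, and why that smearing is often seen as a feature rather than a bug:

```python
import numpy as np

rng = np.random.default_rng(0)

# Distributed code: a concept is a pattern over ALL eight units,
# so no single unit can be read off as "the variable".
cat = rng.normal(size=8)

# Knock out one unit: the pattern is barely disturbed.
damaged = cat.copy()
damaged[3] = 0.0
cos = damaged @ cat / (np.linalg.norm(damaged) * np.linalg.norm(cat))
print(f"similarity after one-unit knockout: {cos:.3f}")  # still near 1.0

# Localist (one-hot) code by contrast: unit 2 "is" the cat ...
cat_local = np.zeros(8)
cat_local[2] = 1.0
# ... so knocking out that one unit destroys the representation entirely.
cat_local[2] = 0.0
print("localist code after knockout:", cat_local)  # nothing left
```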
Turning to real neurons, the fact is that we really don’t know all that much about how they represent knowledge. We know where firing tends to occur in response to given stimuli, we know how neurons are connected, and we know that they have some hierarchical representations. So I can’t give you a biological explanation of how neural ensembles can represent variables. All I can do is give you arguments that humans DO perform symbolic manipulation on variables, so somehow their brains must be able to encode this.
If you can eventually make an artificial network do this, then fine; I will support those efforts. But we are nowhere near that, and the main actors are not even pushing in that direction.
That last comment seems very far from the original post, which claimed:
We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL based AI route.
If we don’t have a biological account of how BNNs represent and perform symbolic manipulation, what reason do we have to believe that ANNs can’t?
Without an ability to point to the difference, this isn’t anything close to a reductio; it’s just saying “yeah I don’t buy it dude, I don’t reckon AI will be that good”.
Sorry, I think you are misunderstanding the reductio argument. That argument simply undermines the claim that natural language is not based on a generative phrase-structure grammar, i.e. the claim that non-symbolic DL is the “proper” model of language. They are, after all, called “language models”. I claim they are not models of language, and therefore there is no reason to discard symbolic models … which is where the need for symbol manipulation comes from. Hence a very different sort of architecture from current DL.
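For readers unfamiliar with the term, here is a toy generative phrase-structure grammar (my illustrative example; the rules and words are made up) showing the kind of symbol manipulation at issue. Non-terminal symbols like S, NP, and VP act as variables that the rewrite rules manipulate, regardless of which words eventually fill them:

```python
import random

# Toy phrase-structure grammar: each non-terminal (S, NP, VP, N, V)
# is a variable that the rewrite rules manipulate symbolically.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"], ["saw"]],
}

def generate(symbol):
    if symbol not in GRAMMAR:            # terminal: an actual word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

random.seed(1)
print(" ".join(generate("S")))  # e.g. "the dog chased the cat"
```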
And of course we can point to differences between artificial and biological networks. I didn’t, because there are too many! One of the big ones is back propagation: THE major reason we have ANNs in the first place, and completely implausible biologically. There is no back propagation in the brain.
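To illustrate the implausibility claim, here is a minimal two-layer network trained by backpropagation (a standard textbook sketch, my own code). The point to notice is the backward pass: the error signal travels back through the exact transpose of the forward weights, a requirement (often called the weight-transport problem) for which no biological mechanism is known:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(1, 4))   # hidden -> output weights
x = rng.normal(size=(3, 1))
target = np.array([[1.0]])

for step in range(100):
    # Forward pass
    h = np.tanh(W1 @ x)
    y = W2 @ h

    # Backward pass (squared-error loss)
    dy = y - target
    dh = (W2.T @ dy) * (1 - h**2)  # <-- W2.T: the error is routed back
                                   #     through the SAME weights used on
                                   #     the way forward, exactly mirrored.

    # Gradient-descent updates
    W2 -= 0.1 * dy @ h.T
    W1 -= 0.1 * dh @ x.T

y = W2 @ np.tanh(W1 @ x)
print("final loss:", float(0.5 * (y - target) ** 2))
```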