Yes, humans made Python because we have the ability for symbolic thought.
And I am not saying that non-classical systems can’t create something symbolic. In fact this is the crux of my argument: Symbolic-Neuro symbolic architectures (see my first post) DO create symbol strings. It is the process by which they create the strings that is in question.
If you agree that bundles of biological neurons can have the capacity for symbolic thought, and that non-classical systems can create something symbolic, I don’t understand why you think anything you’ve said shows that DL cannot scale to AGI, even granting your unstated assumption that symbolic thought is necessary for AGI.
(I think that last assumption is false, but don’t think it’s a crux here so I’m keen to grant it for now, and only discuss once we’ve cleared up the other thing)
Biological neurons have very different properties from artificial networks, in very many ways, and these differences are well documented. I would never deny that ensembles of biological neurons have the capacity for symbol manipulation.
I also believe that non-classical systems can learn mappings between symbols, because this is in fact what they do. Language models map from word tokens to word tokens.
What they don’t do, as the inventors of DL insist, is learn classical symbol manipulation with rules defined over symbols.
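To make that contrast concrete, here is a minimal Python sketch (purely illustrative, not any particular model): a toy bigram "language model" that learns a mapping from observed tokens to observed tokens, next to a classical rule stated over variables, which applies even to symbols it has never seen.

```python
from collections import Counter, defaultdict

# A toy "language model": a bigram table learned from a tiny corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation observed after `token`."""
    return counts[token].most_common(1)[0][0] if counts[token] else None

# A classical rule defined over variables X and Y: it applies to ANY
# symbols, including ones never seen in any training data.
def swap_rule(x, y):
    """Rule: f(X, Y) -> (Y, X)."""
    return (y, x)

print(predict_next("the"))           # only works for tokens seen in the corpus
print(predict_next("blicket"))       # None: no learned mapping for a novel token
print(swap_rule("blicket", "dax"))   # ('dax', 'blicket'): the rule binds novel symbols
```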
Could you mechanistically explain how any of the ‘very many ways’ in which biological neurons are different means that the capacity for symbol manipulation is unique to them?
They’re obviously very different, but what I don’t think you’ve done is show that the differences are responsible for the impossibility of symbolic manipulation in artificial neural networks.
I think I may have said something to confuse the issue. Artificial neural networks certainly ARE capable of representing classical symbolic computations. In fact the first neural networks (e.g. the perceptron) did just that, typically with local representations in which individual nodes assume the role of representing a given variable. But these networks were not very good at other tasks, like generalisation.
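As a rough sketch of what such a localist network looks like (the weights and threshold here are illustrative, not taken from any particular paper): one input node is dedicated to each variable, and a single threshold unit computes the symbolic function AND.

```python
def perceptron_and(a: int, b: int) -> int:
    # Localist representation: one input node dedicated to each variable.
    weights = {"A": 1.0, "B": 1.0}
    threshold = 1.5
    activation = weights["A"] * a + weights["B"] * b
    return 1 if activation > threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron_and(a, b))   # reproduces the AND truth table
```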
More advanced distributed networks emerged, with DL being the newest incarnation. These have representations which make it very difficult (if not impossible) to dedicate nodes to variables. This does not worry the architects, because they specifically believe that the non-localised representation is what makes these networks so powerful (see Bengio, LeCun and Hinton’s article for their Turing award).
Turning to real neurons, the fact is that we really don’t know all that much about how they represent knowledge. We know where they tend to fire in response to given stimuli, we know how they are connected, and we know that they have some hierarchical representations. So I can’t give you a biological explanation of how neural ensembles can represent variables. All I can do is give you arguments that humans DO perform symbolic manipulation on variables, so somehow their brain has to be able to encode this.
If you can make an artificial network somehow do this eventually then fine. I will support those efforts. But we are nowhere near that, and the main actors are not even pushing in that direction.
That last comment seems very far from the original post, which claimed:
We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL based AI route.
If we don’t have a biological explanation of how BNNs can represent and perform symbolic manipulation, why do we have reason to believe that we know ANNs can’t?
Without an ability to point to the difference, this isn’t anything close to a reductio; it’s just saying “yeah I don’t buy it dude, I don’t reckon AI will be that good”.
Sorry, I think you are misunderstanding the reductio argument. That argument simply undermines the claim that natural language is not based on a generative phrase structure grammar; that is, the claim that non-symbolic DL is the “proper” model of language. They are, after all, called “language models”. I claim they are not models of language, and therefore there is no reason to discard symbolic models … which is where the need for symbol manipulation comes from. Hence a very different sort of architecture than current DL.
And of course we can point to differences between artificial and biological networks. I didn’t because there are too many! One of the big ones is back propagation: THE major reason we have ANNs in the first place, and completely implausible biologically. There is no back propagation in the brain.
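For readers unfamiliar with the point, here is a minimal numpy sketch of a standard backpropagation step (the network sizes and data are arbitrary): the backward pass reuses the transpose of the forward weights, the so-called weight transport problem, which is one of the main reasons it is considered biologically implausible.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)             # input vector
W1 = rng.normal(size=(3, 4))       # forward weights, layer 1
W2 = rng.normal(size=(2, 3))       # forward weights, layer 2

# Forward pass
h = np.maximum(0.0, W1 @ x)        # ReLU hidden layer
y = W2 @ h                         # output

# Backward pass for a squared-error loss against a zero target
grad_y = y - np.zeros(2)
# Note W2.T below: the error signal travels backwards through the very same
# weights used in the forward pass (the "weight transport" problem).
grad_h = (W2.T @ grad_y) * (h > 0)
grad_W2 = np.outer(grad_y, h)
grad_W1 = np.outer(grad_h, x)

# Gradient-descent update
lr = 0.1
W2 -= lr * grad_W2
W1 -= lr * grad_W1
```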