Well, I don’t know how serial RL algorithms are, but even highly parallel animals can be interpreted as doing some sort of RL—“operant conditioning” is the term from psychology.
I agree that brain emulation is unlikely to happen. The analogy with the brain does not mean we have to emulate it very closely. Artificial neural networks are already highly successful without a close correspondence to actual neural networks.
Inference stage—aren’t we obviously at both the inference and training stage at the same time, unlike current ML models? We can clearly learn things every day, and we only use our very parallel wetware. The way we got brains, through natural selection, is indeed a different matter, but I would not necessarily label this the training stage. Clearly some information is hardwired from the evolutionary process, but this is only a small fraction of what a human brain does in fact learn.
And okay, so NC≠P has not been proven, but it is clearly well-supported by the available evidence.
Certainly agree that we are learning right now (I hope :)).
“this is only a small fraction of what a human brain does in fact learn”
Disagree here. The description size of my brain (in the CS analogy, the size of the circuit) seems much, much larger than the total amount of information I have ever learned or ever will learn (one argument: I have fewer bits of knowledge than Wikipedia; describing my brain in the size of Wikipedia would be a huge advance in neuroscience). Even worse, the description size of the circuit doesn’t (unless P=NP) provide any nontrivial bound on the amount of computation we need to invest to find it.
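A rough back-of-envelope sketch of that gap (all figures are order-of-magnitude assumptions — a common synapse-count estimate and an assumed few bits per synapse, against a rough size for English Wikipedia's text — not measured values):

```python
# Back-of-envelope: brain "circuit" description size vs. Wikipedia text.
# All numbers below are rough order-of-magnitude assumptions.
synapses = 1e14              # ~100 trillion synapses (common estimate)
bits_per_synapse = 4         # assume a few bits to specify each connection/weight
brain_bits = synapses * bits_per_synapse

wikipedia_bytes = 25e9       # English Wikipedia text, ~25 GB uncompressed (rough)
wikipedia_bits = wikipedia_bytes * 8

print(f"brain circuit: ~{brain_bits:.0e} bits")
print(f"Wikipedia text: ~{wikipedia_bits:.0e} bits")
print(f"ratio: ~{brain_bits / wikipedia_bits:.0f}x")
```

Under these assumptions the circuit description comes out roughly three orders of magnitude larger than Wikipedia's text, which is the point: the learned knowledge is a small fraction of the structure.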
Surely the information transferred from natural selection to the brain must be a fraction of the information in the genome. Which is much less:
https://en.m.wikipedia.org/wiki/Human_genome#Information_content
The organism, including the brain, seems to be roughly a decompressed genome. And the environment can actually provide a lot of information through the senses. We can’t memorize Wikipedia, but that may be because we are not optimized for storing plain text efficiently. We can still recall quite a bit of visual and auditory information.
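For scale, the raw information content of the genome can be sketched like this (the figures are rough; the linked Wikipedia article gives similar numbers, and the actual compressed size is smaller still):

```python
# Back-of-envelope: raw information content of the human genome.
# Rough figures; the genome is highly repetitive, so it compresses well below this.
base_pairs = 3.1e9           # ~3.1 billion base pairs in the haploid genome
bits_per_base = 2            # 4 possible bases -> 2 bits each
genome_bits = base_pairs * bits_per_base
genome_megabytes = genome_bits / 8 / 1e6

print(f"genome: ~{genome_bits:.1e} bits = ~{genome_megabytes:.0f} MB")
```

So whatever natural selection hardwired into the brain fits, at most, in well under a gigabyte — tiny next to what the senses deliver over a lifetime.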