This (pop science) article provides two interesting critiques of the analogy between the human brain and neural nets.
“Neural nets are typically trained by “supervised learning”. This is very different from how humans typically learn. Most human learning is “unsupervised”, which means we’re not explicitly told what the “right” response is for a given stimulus. We have to work this out ourselves.”
“Another difference is the sheer scale of data used to train AI. The GPT-3 model was trained on 400 billion words, mostly taken from the internet. At a rate of 150 words per minute, it would take a human nearly 4,000 years to read this much text.”
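As a quick sanity check on the quoted figure, here is the arithmetic as a minimal sketch, assuming the quote's inputs (400 billion words at 150 words per minute) and, additionally, nonstop reading with no breaks. It lands in the same ballpark, on the order of a few thousand years:

```python
# Back-of-the-envelope check of the quoted reading-time figure.
# Assumptions: 400 billion words and 150 words per minute (from the quote),
# plus nonstop reading with no breaks (my assumption).
words = 400e9
words_per_minute = 150

minutes = words / words_per_minute
years = minutes / (60 * 24 * 365.25)

print(f"{years:,.0f} years of nonstop reading")
# -> roughly 5,000 years; the article's "nearly 4,000 years" is the same
#    order of magnitude under slightly different assumptions.
```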
I’m not sure what the direct implication for timelines is here. You might be able to argue that these disanalogies mean neural nets will require less compute than the brain. But it’s an interesting point of disanalogy, and a useful corrective to any misconception that neural networks are “just like the brain”.