If today’s AI research is predominantly led by people with tinkering and engineering backgrounds, does that mean disciplines like theoretical neuroscience have less to say about AI than we currently think, or can more theoretical fields still inform its development? For example, I know that neural networks are only loosely based on the brain and the idea of neural plasticity, but there may be reason to think that making AI even more brain-like could bring it closer to human-like intelligence (https://www.nature.com/articles/d41586-019-02212-4). If mathematical theory about the brain can inform the development of cutting-edge AI algorithms, particularly unsupervised learning algorithms, wouldn’t that contradict the notion that modern AI is purely the purview of engineering? As the article states, one consequence of the guesswork involved in choosing AI techniques and their underlying methods is that the inner workings of deep neural networks are often opaque. Wouldn’t it then fall to more theoretical disciplines to decipher what is really going on under the hood?