Thanks for this, Thomas! See my answer to titotal addressing the algorithm efficiency question in general. Note that if we followed the hand-wavy “evolutionary transfer learning” argument, that would weaken the existence proof for the sample-efficiency of the human brain. The brain isn’t a general-purpose tabula rasa. But I do agree with you that we’ll probably find a better algorithm that doesn’t scale this badly with data and can extract knowledge more efficiently.
However, I’d argue that, as before, even if we find a much more efficient algorithm, we are ultimately limited by the growth of knowledge and the predictability of our world. Epoch estimates that we’ll run out of high-quality text data next year, which I would argue is the most knowledge-dense data we have. Even with more efficient algorithms, once AI has learnt all this text, it’ll have to start generating new knowledge itself, which is much more cumbersome than “just” absorbing existing knowledge.