Thanks for this, Thomas! See my answer to titotal addressing the algorithm efficiency question in general. Note that if we followed the hand-wavy "evolutionary transfer learning" argument, that would weaken the existence proof for the sample-efficiency of the human brain: the brain isn't a general-purpose tabula rasa. But I do agree with you that we'll probably find a better algorithm that doesn't scale this badly with data and can extract knowledge more efficiently.
However, I'd argue that, as before, even if we find a much more efficient algorithm, we are ultimately limited by the growth of knowledge and the predictability of our world. Epoch estimates that we'll run out of high-quality text data next year, which I would argue is the most knowledge-dense data we have. Even with more efficient algorithms, once AI has learnt all this text, it'll have to start generating new knowledge itself, which is much more cumbersome than "just" absorbing existing knowledge.