Hey Steve, thanks for those thoughts! I think I'm not more qualified than the Wikipedia community to argue for or against Moore's law, which is why I just quoted them. So I can't give more thoughts on that, unfortunately.
But even if Moore's law were to continue forever, I think the data argument would kick in. If we have infinite compute but limited information to learn from, that still gives us a limited model. Applying infinite compute to the MNIST dataset will give you a model that isn't much better than the latest Kaggle competitor on that dataset.
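As a quick toy illustration of that plateau (my own sketch, not anything from the original discussion): here scikit-learn's small digits dataset stands in for MNIST, and widening the network stands in for throwing more compute at the problem. Test accuracy saturates almost immediately, because the bottleneck is the data, not the capacity.

```python
# Toy sketch: more model capacity on a fixed, small dataset gives
# rapidly diminishing returns. The 8x8 digits set is a stand-in for MNIST.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for width in [4, 16, 64, 256, 1024]:  # hidden width as a crude proxy for "more compute"
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=2000,
                        random_state=0).fit(X_train, y_train)
    print(f"hidden width {width:5d}: test accuracy {clf.score(X_test, y_test):.3f}")
```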
So then we end up again at the more hand-wavy arguments for limits to the growth of knowledge and the predictability of our world in general. Would be curious where I'm losing you there.
Thanks for this, Thomas! See my answer to titotal addressing the algorithmic efficiency question in general. Note that if we follow the hand-wavy "evolutionary transfer learning" argument, that would weaken the existence proof for the sample-efficiency of the human brain: the brain isn't a general-purpose tabula rasa. But I do agree with you that we'll probably find a better algorithm that doesn't scale this badly with data and can extract knowledge more efficiently.
However, I'd argue that, as before, even if we find a much more efficient algorithm, we are ultimately limited by the growth of knowledge and the predictability of our world. Epoch estimates that we'll run out of high-quality text data next year, which I'd argue is the most knowledge-dense data we have. Even with more efficient algorithms, once AI has learnt all this text, it'll have to start generating new knowledge itself, which is much more cumbersome than "just" absorbing existing knowledge.
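To make the timing intuition concrete, here's a back-of-envelope sketch with placeholder numbers (these are illustrative assumptions of mine, not Epoch's actual figures): if the stock of high-quality text is roughly fixed while training-data demand grows exponentially, the stock is exhausted within a handful of doublings.

```python
# Back-of-envelope sketch with made-up parameters (NOT Epoch's estimates):
stock_tokens = 10e12     # assumed total stock of high-quality text, in tokens
tokens_used = 1e12       # assumed tokens consumed by current frontier training runs
growth_per_year = 2.0    # assumed yearly growth factor in training-data demand

years = 0
while tokens_used < stock_tokens:
    tokens_used *= growth_per_year
    years += 1
print(f"stock exhausted after ~{years} years under these assumptions")
```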