Hey Steve, thanks for those thoughts! I don't think I'm more qualified than the Wikipedia community to argue for or against Moore's law; that's why I just quoted them. So unfortunately I can't offer more thoughts on that.
But even if Moore's law were to continue forever, I think the data argument would kick in. If we have infinite compute but limited information to learn from, we still end up with a limited model. Applying infinite compute to the MNIST dataset will give you a model that won't be much better than the latest Kaggle competitor on that dataset.
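To make that concrete, here is a minimal sketch of the diminishing-returns intuition, using scikit-learn's small digits dataset as a stand-in for MNIST (the specific model sizes and epoch counts are arbitrary choices for illustration, not anything from the discussion above):

```python
# Illustrative sketch: scaling "compute" (bigger models, more training)
# on a small, fixed dataset saturates test accuracy rather than
# improving it without bound.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # small MNIST-like digit dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Increase hidden-layer width and training iterations by 10x each step.
for hidden, epochs in [(16, 20), (160, 200), (1600, 2000)]:
    clf = MLPClassifier(
        hidden_layer_sizes=(hidden,), max_iter=epochs, random_state=0
    ).fit(X_train, y_train)
    print(f"width={hidden:5d} iters={epochs:5d} "
          f"test acc={clf.score(X_test, y_test):.3f}")

# Accuracy plateaus in the high 0.9s after the first jump: the data,
# not the compute, becomes the binding constraint.
```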
So then we end up again at the more hand-wavy arguments for limits to the growth of knowledge and the predictability of our world in general. Would be curious where I'm losing you there.
Thanks for this, Thomas! See my answer to titotal addressing the algorithmic efficiency question in general. Note that if we followed the hand-wavy "evolutionary transfer learning" argument, that would weaken the existence proof for the sample efficiency of the human brain: the brain isn't a general-purpose tabula rasa. But I do agree with you that we'll probably find a better algorithm that doesn't scale this badly with data and can extract knowledge more efficiently.
However, I'd argue that, as before, even if we find a much more efficient algorithm, we are ultimately limited by the growth of knowledge and the predictability of our world. Epoch estimates that we'll run out of high-quality text data next year, which I would argue is the most knowledge-dense data we have. Even with more efficient algorithms, once AI has learnt all this text, it'll have to start generating new knowledge itself, which is much more cumbersome than "just" absorbing existing knowledge.