However, time and again, we've found that deep learning systems improve more through scaling of either the data or the model.
Jaime Sevilla from Epoch mentioned here that scaling of compute and algorithmic improvements are each responsible for roughly half of the progress:
"Roughly, historically it has turned out that the two main drivers of progress have been the scaling of compute and these algorithmic improvements. And I will say that they are like 50/50."
Jaime also mentions that data has not been a bottleneck.