Makes sense that this would be a big factor in deciding what to do with our time, and in AI timelines. And we're surprised too by how AI can outperform expectations, as in the sources you cited.
We'd still characterize creating synthetic data as a wide-open problem, rather than one where we have high confidence that naive approaches using current LMs will just work. Rather than parsing individual sources, here's a general intuition. We wouldn't expect making a dataset bigger by repeating the same example over and over to work. We generate synthetic data by building 'models' of the original data generators: humans. If we knew exactly what made human data 'good,' we could optimize for it directly and simplify massively (this runs into the well-defined-eval problem again; we can, of course, craft datasets to beat benchmarks).
An analogy (a disputed one, to be fair) is Ted Chiang's 'lossy compression.' For every case of synthetic data working, there are also cases where it fails, like the Shumailov et al. paper we cited. And if we knew exactly what made human data 'good,' we'd argue you wouldn't see labs continuing to ramp up hiring of contractors specifically to generate high-quality data in expert domains like programming.
A fun exercise: take a very small open-source dataset, train your own very small LM, and have it augment (double!) its own dataset. Try different prompts, and plot n-gram distributions against the original data. Can you get behavior out of the next generation that looks like magic compared to the previous one, or does improvement plateau? You may have nitpicks with this experiment, but we don't think it's that different from what's happening at large scale. A minimal sketch of the loop is below.
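In case it's useful, here's roughly what that loop looks like in miniature, in pure Python. Big caveats: a character-level bigram model stands in for the 'very small LM,' and a crude trigram-overlap number stands in for a proper plot of n-gram distributions. All the names here (SEED_TEXT, train_bigram_lm, and so on) are made up for illustration, not from any library.

```python
# Sketch of the self-augmentation exercise: train a tiny LM on a seed
# dataset, generate synthetic text to double the dataset, retrain, repeat,
# and watch whether the n-gram statistics ever escape the original data.
import random
from collections import Counter, defaultdict

SEED_TEXT = (
    "the cat sat on the mat. the dog sat on the log. "
    "the cat saw the dog. the dog saw the cat."
)

def train_bigram_lm(text):
    """Count next-character frequencies; this is our 'very small LM'."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def sample(model, prompt, length, rng):
    """Generate text by sampling the learned bigram transitions from a prompt."""
    out = list(prompt)
    for _ in range(length):
        dist = model.get(out[-1])
        if not dist:
            break
        chars, weights = zip(*dist.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

def ngram_counts(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def overlap(p, q):
    """Fraction of q's n-gram mass already present in p (crude novelty check)."""
    shared = sum(min(p[g], q[g]) for g in q)
    return shared / max(1, sum(q.values()))

rng = random.Random(0)
data = SEED_TEXT
for gen in range(3):  # three rounds of doubling the dataset with model output
    model = train_bigram_lm(data)
    synthetic = sample(model, prompt="the ", length=len(data), rng=rng)
    print(f"gen {gen}: overlap with original trigrams = "
          f"{overlap(ngram_counts(SEED_TEXT), ngram_counts(synthetic)):.2f}, "
          f"unique trigrams = {len(ngram_counts(synthetic))}")
    data = data + synthetic  # retrain on human + synthetic next round
```

By construction, a bigram sampler can only emit character pairs it saw in training, so the 'new' data recombines the original rather than extending it; whether much bigger models escape that in a meaningful way is exactly the open question.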