Yes, what you are scaling matters just as much as the fact that you are scaling, which is why developers are now scaling RL post-training, and pretraining on higher-quality synthetic data pipelines. If the point is just that training on average internet text yields diminishing returns in many real-world use cases, that seems defensible; it certainly doesn’t seem to be the main recipe any company is using to push the frontier right now. But people often seem to mistake this for something stronger, like “all training now faces insurmountable barriers to continued real-world gains,” or “scaling laws are slowing down across the board,” or “scaling didn’t produce significant gains on meaningful tasks, so it’s done.” I mentioned SWE-Bench because it suggests significant real-world utility improvements rather than trivial decreases in prediction loss. I also don’t think it’s clear that there is such an absolute separation here: to model the data, you have to model the world in some sense. If you keep feeding multimodal LLM agents the right data in the right way, they keep improving on real-world tasks.