That’s a good summary and pretty in line with my own thoughts on the overall upshots. I’d say that absent new scaling approaches, the strong tailwind to AI progress from compute increases will soon weaken substantially. But it wouldn’t disappear completely: there may be new scaling approaches, and progress via AI research remains. Overall, I’d say it lengthens timelines somewhat, makes raw compute/finances less of an overwhelming advantage, and may require different approaches to compute governance.
Strong agree that, absent new approaches, the tailwind isn’t enough. That said, it’s not clear that pretraining scaling has run out of room, and current approaches using synthetic data and RL training to improve one-shot performance seem to have significant headroom left.
I also don’t know how much room is left before we hit genius-level AGI or beyond; at that point, even if we hit a wall, more scaling isn’t required, as the timeline basically ends.