The Scaling Series Discussion Thread: with Toby Ord
We're trying something a bit new this week. Over the last year, Toby Ord has been writing about the implications of the fact that improvements in AI require exponentially more compute. Only one of these posts so far has been put on the EA Forum.
This week we've put the entire series on the Forum and made this thread for you to discuss your reactions to the posts. Toby Ord will check in once a day to respond to your comments[1].
Feel free to also comment directly on the individual posts that make up this sequence, but you can treat this as a central discussion space for both general takes and more specific questions.
If you haven't read the series yet…
Read it here, or choose a post to start with:
Are the Costs of AI Agents Also Rising Exponentially?
Agents can do longer and longer tasks, but their dollar cost to do these tasks may be growing even faster.
How Well Does RL Scale?
I show that RL-training for LLMs scales much worse than inference or pre-training.
Evidence that Recent AI Gains are Mostly from Inference-Scaling
I show how most of the recent AI gains in reasoning come from spending much more compute every time the model is run.
The Extreme Inefficiency of RL for Frontier Models
The new RL scaling paradigm for AI reduces the amount of information a model could learn per hour of training by a factor of 1,000 to 1,000,000. What follows?
Is There a Half-Life for the Success Rates of AI Agents?
The declining success rates of AI agents on longer-duration tasks can be explained by a simple mathematical model: a constant rate of failing during each minute a human would take to do the task.
Inference Scaling Reshapes AI Governance
The shift towards inference scaling may mean the end of an era for AI governance. I explore the many consequences.
Inference Scaling and the Log-x Chart
The new trend towards scaling up inference compute in AI has come hand-in-hand with an unusual new type of chart that can be highly misleading.
The Scaling Paradox
The scaling up of frontier AI models has been a huge success. But the scaling laws that inspired it actually show extremely poor returns to scale. What's going on?
[1] He's not committing to respond to every comment.
Super cool! Great to see others digging into the costs of agent performance. I agree that more people should be looking into this.
I'm particularly interested in predicting the growth of costs for agentic AI safety evaluations, so I was wondering if you had any takes on this given the recent series. Here are a few more specific questions along those lines for you, Toby:
- Given the cost trends you've identified, do you expect the costs of running agents to take up an increasing share of the total costs of AI safety evaluations (including researcher costs)?
- Which dynamics do you think will drive how the costs of AI safety evaluations change over the next few years?
- Any thoughts on under what conditions it would be better to elicit the maximum capabilities of models using a few very expensive safety evaluations, or better to prioritise a larger quantity of evaluations that get close to plateau performance (i.e. hitting that sweet spot where their hourly cost / performance is lowest, or alternatively their saturation point)? Presumably a mix is best, but how do we determine what a good mix looks like? What might you recommend to an AI lab's Safety/Preparedness team? I'm thinking about how this might inform evaluation requirements for AI labs.
Many thanks for the excellent series! You have a knack for finding elegant and intuitive ways to explain the trends from the data. Despite knowing this data well, I feel like I learn something new with every post. Looking forward to the next thing.
Thanks, Paolo,
I was only able to get weak evidence of a noisy trend from the limited METR data, so it is hard to draw many conclusions from that. Moreover, METR's aim of measuring the exponentially growing length of useful work tasks is potentially more exposed to an exponential rise in compute costs than more safety-related tasks are. But overall, I'd think that the amount of useful compute you can spend on safety evaluations is probably growing faster year-on-year than one can sustainably grow the number of staff.
I'm not sure how the dynamics will shake out for safety evals over a few years. E.g. a lot of recent capability gain has come from RL, which I think isn't sustainable, and I also think the growth in ability via inference compute will limit both the ability to serve the model and people's ability to afford it, so I suspect we'll see some return to eking what they can out of more pretraining. I.e. the reasoning era saw labs shift to finding capabilities in new areas with comparatively low costs of scaling, but once they reach the optimal mix, we'll see a mixture of all three going forward. So the future might look a bit less like 2025 and more like a mix of that and 2022–24.
Unfortunately I don't have much insight on the question of ideal mixes of safety evals!
I'm excited about this series!
I would be curious what your take is on this blog post from OpenAI, particularly the two graphs it includes.
While their argument is not very precise, I understand them to be saying something like, "Sure, it's true that the costs of both inference and training are increasing exponentially. However, the value delivered by these improvements is also increasing exponentially. So the economics check out."
A naive interpretation of e.g. the METR graph would disagree: humans are modeled as having a constant hourly wage, so being able to do a task which is 2x as long is precisely 2x as valuable (and therefore can't offset a >2x increase in compute costs). But this seems like an implausible simplification.
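For concreteness, here is a toy numerical version of that naive reading. All the numbers are made up purely for illustration; this is not METR's or OpenAI's model, just a sketch of the "constant hourly wage" logic:

```python
import math

# Toy illustration of the "constant hourly wage" reading sketched above.
# All numbers are made up for illustration; this is not METR's or OpenAI's model.

HUMAN_WAGE = 100.0  # assumed value of one human-hour of work, in dollars

def task_value(task_hours: float) -> float:
    """Value of completing a task = human hours it replaces x hourly wage (linear)."""
    return task_hours * HUMAN_WAGE

def compute_cost(task_hours: float, base_cost: float = 1.0, growth_per_doubling: float = 3.0) -> float:
    """Assumed compute cost: triples every time the task length doubles (super-linear)."""
    return base_cost * growth_per_doubling ** math.log2(task_hours)

for hours in [1, 2, 4, 8, 16]:
    value, cost = task_value(hours), compute_cost(hours)
    print(f"{hours:>2}h task: value ${value:7.0f}, compute cost ${cost:7.2f}, value/cost {value / cost:6.1f}")
```

With these made-up numbers, value merely doubles with each doubling of task length while cost triples, so the value-to-cost ratio falls by a third per doubling, which is the sense in which linear value can't offset a >2x cost increase.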
Do we have any evidence on how the value of models changes with their capabilities?
That is quite a surprising graph: the annual tripling and the correlation between compute and revenue are much closer to perfect than I think anyone would have expected. Indeed, they are so perfect that I'm a bit skeptical about what is going on.
One thing to note is that it isn't clear what the compute graph is of (e.g. is it inference + training compute, but not R&D?). Another is that one chart uses year-end figures while the one on the right uses year totals, but given they are exponentials with the same doubling time, just in different units, that isn't a big deal.
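To spell out why that mismatch washes out, here is a quick derivation (not from the OpenAI post): the total of an exponentially growing quantity over a year is a constant multiple of its year-end value, so the two series share the same doubling time. If a quantity grows as $f(t) = f_0 \cdot 2^{t/T}$ with doubling time $T$ (in years), then the total over the year ending at $t$ is
$$\int_{t-1}^{t} f_0 \cdot 2^{s/T}\,\mathrm{d}s \;=\; \frac{T}{\ln 2}\left(1 - 2^{-1/T}\right) f_0 \cdot 2^{t/T},$$
i.e. the year-end value $f(t)$ scaled by a constant factor. On a log chart the two series are parallel, so they show the same doubling (or tripling) time; only the units and the vertical offset differ.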
There are a number of things I disagree with in the post. The main one relevant to this graph is the implication that the graph on the left causes the graph on the right. That would be genuinely surprising. We've seen that the slope on the famous scaling-law graphs is about −0.05 for compute, so you need to double compute 20 times to get log-loss to halve (a quick version of this arithmetic is sketched after the list below). Whereas this story of 3x compute leading to 3x the revenue implies that the exponent for a putative scaling law of compute vs revenue is extremely close to 1.0, and that it remains flukishly close to that magic number despite the transition from pretraining scaling to RL + inference scaling. I could believe a power-law exponent of 1.0 for some things that are quite mathematical or physical, but not for the extremely messy relationship of compute to total revenue, which depends on details of:
- the changing relationship between compute and intelligence,
- the utility of more intelligence to people,
- the market dynamics between competitors,
- running out of new customers and having to shift to more revenue per customer,
- the change from a big upfront cost (training compute) to mostly per-customer charges (inference compute).
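Here is the back-of-the-envelope arithmetic behind that −0.05 figure, as a minimal sketch (the exponent is just the rough value quoted above, not a fitted number):

```python
import math

# Rough illustration of the weak returns to compute under a power-law
# scaling law  L(C) = L0 * C**(-alpha), taking alpha ~ 0.05 as quoted above.
alpha = 0.05

# To halve the loss we need  k**(-alpha) = 1/2,  i.e.  k = 2**(1/alpha).
compute_multiplier = 2 ** (1 / alpha)      # 2**20, roughly a million-fold
doublings = math.log2(compute_multiplier)  # = 1/alpha = 20 doublings

print(f"Compute multiplier to halve the loss: {compute_multiplier:,.0f}")
print(f"Compute doublings needed: {doublings:.0f}")

# By contrast, the revenue story implies roughly revenue ∝ compute**1.0
# (3x compute -> 3x revenue): an exponent about 20 times larger.
```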
More likely is something like reverse causation: the growth in revenue is driving the amount of compute they can afford. Or it could be that the prices they need to charge increase with the amount of investment they received in order to buy compute, so they are charging the minimum they can in order to allow revenue growth to match investment growth.
Overall, I'd say that I believe these are real numbers, but I don't believe the implied model. For example, I don't believe this trend will continue in the long run, and I don't think that if they had been able to 10x compute in one of those years the revenue would also have jumped by 10x (unless that is because they effectively choose how much revenue to take, trading market growth for revenue in order to make this graph work to convince investors).
A few points to clarify my overarching view:
All kinds of compute scaling are quite inefficient on most standard metrics. There are steady gains, but they are coming from exponentially increasing inputs. These can't continue forever, so all these kinds of gains from compute scaling are naturally time-limited. The exponential growth in inputs may also be masking fundamental deficiencies in the learning algorithms.
By "compute scaling" I'm generally referring to the strategy of adding more GPUs to get more practically useful capabilities. I think this is running out of steam for pretraining and will soon start running out of steam for RL and inference scaling. This is possible even if the official "scaling laws" of pretraining continue to hold (I'm generally neutral on whether they will).
It is possible that there will always be a new paradigm of compute scaling to take over when old ones run out of steam. If so, then like Moore's Law, the long-term upward trend might be made out of a series of stacked S-curves. I'm mainly pointing out the limits of the current scaling paradigms, not denying the possibility of future ones.
I don't think that companies are likely to completely stop any of these forms of compute scaling. The maths tends to recommend balancing the shares of compute that go to all of them in proportion to how much they improve capabilities per doubling of compute; e.g. perhaps a 3:1:2 ratio between pretraining, RL, and inference (though I expect the 3 to decline due to running out of high-quality text for pretraining). A toy version of this allocation rule is sketched after these points.
But given that we don't know if there will always be more paradigms delivered on time to save scaling, the limits on the current approaches should increase our credence that the practical process of scaling will provide less of a tailwind to AI progress going forward. Overall, my view is something like: the strength of this tailwind, which has driven much of AI progress since 2020, will halve. (So it would still be important, but no longer the main determinant of who is in front, or of the pace of progress.)
As well as implications for the pace of progress, changes in what determines progress have implications for the strategy and governance of AI. For example, AI researchers will become comparatively more important than in recent years, and if inference scaling becomes the main form of scaling, that has big implications for compute governance and for the business model of AI companies.
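As the toy version promised above, here is a minimal sketch of that proportional-allocation heuristic. The per-doubling gains are invented numbers chosen only to reproduce the illustrative 3:1:2 ratio; they are not estimates from the series, and the real optimisation would be far messier:

```python
# Toy sketch of the "balance compute shares in proportion to capability gain per
# doubling" heuristic mentioned above. The gain numbers are invented purely for
# illustration; they are not estimates from the series.

gain_per_doubling = {
    "pretraining": 3.0,  # assumed capability gain per doubling of pretraining compute
    "RL":          1.0,  # assumed gain per doubling of RL compute
    "inference":   2.0,  # assumed gain per doubling of inference compute
}

total_gain = sum(gain_per_doubling.values())
shares = {kind: gain / total_gain for kind, gain in gain_per_doubling.items()}

for kind, share in shares.items():
    print(f"{kind:>11}: {share:.0%} of the compute budget")
# With these made-up numbers the split is 50% / 17% / 33%, i.e. the 3:1:2 ratio.
```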