It’s definitely good to think about whether a pause is a good idea. Together with Joep from PauseAI, I wrote down my thoughts on the topic here.
Since then, I have been thinking a bit about the pause and comparing it to a more frequently mentioned option, namely applying model evaluations (evals) to see how dangerous a model is after training.
I think the difference between the supposedly more reasonable approach of evals and the supposedly more radical approach of a pause is actually smaller than it seems. Evals aim to detect dangerous capabilities. What will need to happen when those evals find that, indeed, a model has developed such capabilities? Then we’ll need to implement a pause. Evals or a pause is mostly a choice about timing, not a fundamentally different approach.
With evals, however, we’ll move precisely to the brink, look straight into the abyss, and then we plan to halt at the last possible moment. Unfortunately, though, we’re in thick mist and we can’t see the abyss (this is true even when we apply evals, since we don’t know which capabilities will prove existentially dangerous, and since an existential event may already occur before running the evals).
And even if we knew where to halt: we'd need to make sure that the leading labs actually succeed in pausing themselves (there may be thousands of people working there), that the models aren't leaked, that we implement the policy that's needed, that we sign international agreements, and that we gain support from the general public. This is all difficult work that will realistically take time.
Pausing isn't as simple as pressing a button; it's a social process. No one knows how long that process of getting everyone on the same page will take, but it could be quite a while. Is it wise to start that process at the last possible moment, namely when the evals turn red? I don't think so. The sooner we start, the higher our chance of survival.
Also, there's a separate point that I think is not yet sufficiently addressed: we don't know how to implement a pause lasting beyond a few years. If hardware and algorithms improve, frontier models could democratize. While I believe this problem can be solved by international (peaceful) regulation, I also think this will be hard, and we will need good plans in advance (hardware or data regulation proposals) for how to do it. We currently don't have these, so I think working on them should be a much higher priority.
My gut reaction is that the eval path is strongly inferior because it relies on far more conjunction: people need to still care about evals by the time models get dangerous, the eval regime needs to still be in place by then, and the evaluations need to work at all. Compared to that, a pause seems like a more straightforward good thing, even if it doesn't solve the problem.
I agree that an immediate pause, or at least a slowdown (a "moving bright line" of a training compute cap), is better/safer than a strategy that says "continue until evals find something dangerous, then hit the brakes everywhere."
I also have some reservations about evals, in the sense that I think they can easily make things worse if they're implemented poorly (see my note here).
That said, evals could complement the pause strategy. For instance:
(1) The threshold for evals to trigger further slowing could be low. If the evals have to unearth even just rudimentary deception attempts, rather than something already fairly dangerous, it may not be too late when they trigger.
(2) Evals could be used in combination with a pause (or slowdown) to greenlight new research. For instance, maybe select labs are allowed to go over the training compute cap if they fulfill a set of strict safety and safety-culture requirements, use the training budget increase for alignment experiments, and have evals set up to show that previous models of the same kind are behaving well in all relevant respects.
So, my point is we shouldn’t look at this as “evals as an idea are inherently in tension with pausing ASAP.”
There's an important difference between pausing and evals: evals get you loads of additional information. We can look at the eval results, discuss them, and determine in what ways a model might have misuse potential (and thus try to mitigate it) or whether the model is simply undeployable. If we're still unsure, we can gather more data and further refine our ability to perform and interpret evals.
If we (i.e. the ML community) repeatedly do this we build up a better picture of where our current capabilities lie, how evals relate to real-world impact and so on. I think this makes evals much better, and the effect will compound over time. Evals also produce concrete data that can convince skeptics (such as me—I am currently pretty skeptical of much regulation but can easily imagine eval results that would convince me). To stick with your analogy, each time we do evals we thin out the fog a bit, with the intention of clearing it before we reach the edge, as well as improving our ability to stop.
How does doing evals improve your ability to stop? What concrete actions will you take when an eval shows a dangerous result? Do none of them overlap with pausing?
Evals showing dangerous capabilities (such as how to build a nuclear weapon) can be used to convince lawmakers that this stuff is real and imminent.
Of course, you don't need that if lawmakers already agree with you – in that case, it's strictly better not to tinker with anything dangerous.
But assuming that many lawmakers will remain skeptical, one function of evals could be “drawing out an AI warning shot, making it happen in a contained and controlled environment where there’s no damage.”
Of course, we wouldn’t want evals teams to come up with AI capability improvements, so evals shouldn’t become dangerous AI gain-of-function research. Still, it’s a spectrum because even just clever prompting or small tricks can sometimes unearth hidden capabilities that the model had to begin with, and that’s the sort of thing that evals should warn us about.