My gut reaction is that the eval path is strongly inferior because it depends on a much larger conjunction of conditions. People need to still care about evals when models get dangerous, the evals need to still be relevant by then, and they need to work at all. Compared to that, a pause seems like a more straightforwardly good thing, even if it doesn’t solve the problem.
I agree that an immediate pause, or at least a slowdown (a “moving bright line of a training compute cap”), is better/safer than a strategy that says “continue until evals find something dangerous, then hit the brakes everywhere.”
I also have some reservations about evals, in the sense that I think they can easily make things worse if they’re implemented poorly (see my note here).
That said, evals could complement the pause strategy. For instance:
(1) The threshold for evals to trigger further slowing could be set low. If the evals only have to unearth rudimentary deception attempts, rather than something already fairly dangerous, it may not be too late by the time they trigger. (2) Evals could be used in combination with a pause (or slowdown) to greenlight new research. For instance, select labs might be allowed to exceed the training compute cap if they fulfill strict safety and safety-culture requirements, use the training budget increase for alignment experiments, and have evals set up to show that previous models of the same kind are behaving well in all relevant respects.
So, my point is that we shouldn’t treat evals as an idea that is inherently in tension with pausing ASAP.