For the sake of argument, suppose we have a model in pre-training that has a misaligned proxy goal and relevant situational awareness. But so far, it does not have a long-term goal. I’m picking these parameters because they seem most likely to create a long-term goal from scratch in the way you describe.
In order to be deceptively aligned, the model needs a goal horizon long enough that it values its total goal achievement after escaping oversight more than its total goal achievement before escaping. But pre-training processes are incredibly long (~the whole internet for text models). A situationally aware model should anticipate a lot of subsequent pre-training, and that oversight will likely remain strong for many iterations after that. So a very long-term goal is required for any amount of deceptive alignment to be worthwhile.
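To make the horizon point concrete, here is a toy sketch (my own construction, not part of the original argument): suppose the model discounts each future training step by some factor gamma, so a reward deferred by N steps is worth gamma**N as much as an immediate one. At pre-training scale, N is enormous, and the deferred value is effectively zero unless gamma is extraordinarily close to 1:

```python
# Toy model of goal horizons: a per-step discount factor gamma means a
# reward received `horizon` steps in the future is worth gamma**horizon
# as much as the same reward now. All numbers here are illustrative.

def deferred_value(gamma: float, horizon: int, reward: float = 1.0) -> float:
    """Present value of `reward` received `horizon` steps in the future."""
    return gamma ** horizon * reward

# Hypothetical number of remaining training steps under oversight.
N = 1_000_000

# Even a discount factor of 0.999 per step makes post-escape reward
# worthless at this horizon; only gamma within ~1/N of 1 keeps it alive.
for gamma in (0.9, 0.999, 0.999999):
    print(f"gamma={gamma}: deferred value = {deferred_value(gamma, N)}")
```

The only goals that survive this discounting are ones with an extremely long horizon, which is the sense in which deception during a long pre-training run "requires a very long-term goal."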
Gradient descent updates parameters in the direction that most improves performance hyper-locally. Under the conditions I’ve described, making goals infinitesimally longer-term would not make deceptive alignment worthwhile. Therefore, I don’t expect the gradients to point toward longer-term goals.
Furthermore, if a model gained marginally longer-term goals, but not long-term enough to enable deceptive alignment, the longer-term goals would be a competing priority and harm immediate reward in expectation. Gradient descent should therefore push against this.
Wouldn’t it also be weird for a model to derive situational awareness but not understand that the training goal is next token prediction? Understanding the goal seems more important and less complicated than relevant understanding of situational awareness for a model that is not (yet) deceptively aligned. And if it understood the base goal, the model would just need to point at that. That’s much simpler and more logical than making the proxy goal long-term.
Likewise, if a model doesn’t have situational awareness, then it can’t be deceptive, and I wouldn’t expect a longer-term goal to help training performance.
Note that there’s a lot of overlap here with two of my core arguments for why I think deceptive alignment is unlikely to emerge in fine-tuning. I think deceptive alignment is very unlikely in both fine-tuning and pre-training.