How would the model develop situational awareness in pre-training when:
Unlike in fine-tuning, the vast majority of internet text in pre-training contains no information that would help the model figure out that it is an ML model, so the model can't infer its situation from the prompt on the vast majority of pre-training inputs.
Because predicting the next token of internet text is all that determines reward, why would situational awareness help with reward unless the model were already deceptively aligned?
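To illustrate the mechanism behind this point, here is a minimal PyTorch sketch (my own toy illustration, with hypothetical names like `awareness_weights`, not anything from the original argument): a parameter group that never enters the loss computation receives no gradient at all, so gradient descent exerts no pressure for it to develop.

```python
import torch

# Toy setup (assumed for illustration): two parameter groups, only one
# of which affects the next-token-prediction-style loss.
torch.manual_seed(0)

prediction_weights = torch.randn(4, requires_grad=True)  # used by the loss
awareness_weights = torch.randn(4, requires_grad=True)   # never used by the loss

x = torch.randn(4)
target = torch.tensor(1.0)

# The loss depends only on prediction_weights; awareness_weights never
# enters the computation graph for this objective.
loss = (prediction_weights @ x - target) ** 2
loss.backward()

print(prediction_weights.grad)  # nonzero: gradient flows here
print(awareness_weights.grad)   # None: no gradient signal at all
```

The same logic would apply at scale: circuits that do nothing to reduce next-token loss receive no reinforcement from pre-training.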
Situational awareness only produces deceptive alignment if the model already has long-term goals, and long-term goals only produce deceptive alignment if the model is already situationally aware. Gradient descent follows partial derivatives, so assuming that long-term goals and situational awareness are represented by different parameters:
If the model doesn’t already have long enough goal horizons for deceptive alignment, then marginally more situational awareness doesn’t increase deceptive alignment.
If the model doesn’t already have the kind of situational awareness necessary for deceptive alignment, then a marginally longer-term goal doesn’t increase deceptive alignment.
Therefore, the partial derivatives shouldn’t point toward either property unless the model already has one or the other.
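One way to make this argument concrete is a toy formalization (my own illustration, not a model from the original post) in which deceptive alignment requires both traits multiplicatively:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy model (an assumption for illustration): let $s \in [0,1]$ be
% situational awareness and $g \in [0,1]$ be goal horizon, and suppose
% deceptive alignment requires both, e.g. multiplicatively:
\[
  D(s, g) = s \cdot g
\]
% Each partial derivative scales with the \emph{other} trait:
\[
  \frac{\partial D}{\partial s} = g,
  \qquad
  \frac{\partial D}{\partial g} = s
\]
% At $(s, g) \approx (0, 0)$ both partials vanish: nudging either trait
% alone leaves $D$ unchanged, so gradient descent gets no signal to grow
% one trait until the other is already present.
\end{document}
```

Under this (admittedly simplified) functional form, the gradient toward deceptive alignment is zero unless one of the two prerequisites already exists, which is exactly the conclusion above.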