How would the model develop situational awareness in pre-training when:
Unlike in fine-tuning, the vast majority of internet text does not contain information that would help the model figure out that it is an ML model, so on most pre-training inputs it can’t infer that context from the prompt.
Because predicting the next token of internet text is all that determines reward, why would situational awareness help with reward unless the model were already deceptively aligned?
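(For concreteness, the pre-training signal I have in mind is just the standard next-token log-loss; the notation below is mine, not something from the discussion above:)

$$\mathcal{L}(\theta) = -\sum_{t}\log p_\theta(x_t \mid x_{<t}), \qquad \theta \leftarrow \theta - \eta\,\nabla_\theta \mathcal{L}(\theta).$$

Any internal machinery, situational awareness included, only gets reinforced to the extent that it actually lowers this loss on the pre-training distribution.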
Situational awareness only produces deceptive alignment if the model already has long-term goals, and vice versa. Gradient descent is based on partial derivatives, so assuming that long-term goals and situational awareness are represented by different parameters:
If the model doesn’t already have long enough goal horizons for deceptive alignment, then marginally more situational awareness doesn’t increase deceptive alignment.
If the model doesn’t already have the kind of situational awareness necessary for deceptive alignment, then a marginally longer-term goal doesn’t increase deceptive alignment.
Therefore, the partial derivatives shouldn’t point toward either property unless the model already has the other one (a toy sketch below illustrates this).
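To make the partial-derivative point concrete, here’s a toy sketch (my own simplification, not anything the argument above commits to): suppose the payoff from deceptive alignment factors as the product of a “situational awareness” parameter s and a “goal horizon” parameter g. Then each partial derivative is proportional to the other parameter, so gradient descent pushes on neither while the other is still zero.

```python
def deceptive_advantage(s: float, g: float) -> float:
    """Toy model: the payoff requires both ingredients, so it factors as s * g."""
    return s * g


def numerical_gradient(f, s: float, g: float, eps: float = 1e-6):
    """Central-difference estimates of (d/ds, d/dg)."""
    d_s = (f(s + eps, g) - f(s - eps, g)) / (2 * eps)
    d_g = (f(s, g + eps) - f(s, g - eps)) / (2 * eps)
    return d_s, d_g


# Neither ingredient present: both partials vanish, so there is no
# gradient toward building either one for deceptive-alignment reasons.
print(numerical_gradient(deceptive_advantage, 0.0, 0.0))  # (~0.0, ~0.0)

# Long-term goals already present: now more situational awareness pays off.
print(numerical_gradient(deceptive_advantage, 0.0, 1.0))  # (~1.0, ~0.0)
```

Real payoffs wouldn’t literally factor like this, of course, but the “each partial derivative vanishes while the other ingredient is absent” structure is the crux of the argument.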
Thanks for thoughtfully engaging with this topic! I’ve spent a lot of time exploring arguments that alignment is hard, and am also unconvinced. I’m particularly skeptical about deceptive alignment, which is closely related to your point b. I’m clearly not the right person to explain why people think the problem is hard, but I think it’s good to share alternative perspectives.
If you’re interested in more skeptical arguments, there’s a forum tag and a LessWrong tag. I particularly like Quintin Pope’s posts on the topic.