I just skimmed the section headers and a small amount of the content, but I’m extremely skeptical. E.g., the “counting argument” seems incredibly dubious to me because you can just as easily argue that text-to-image generators will internally create images of llamas in their early layers, which they then delete, before creating the actual asked-for image in the later layers. There are many possible llama images, but “just one” network that straightforwardly implements the training objective, after all.
The issue is that this isn’t the correct way to do counting arguments on NN configurations. While there are indeed exponentially many possible llama images that an NN might create internally, there is an even larger (also exponential) number of NNs whose first layers are simply random and which then go on to do the actual thing in the later layers. Thus, the “inner llamaizers” are actually rarer in NN configuration space than the straightforward NN.
The key issue is that each additional computation you speculate an NN might be doing acts as an additional constraint on the possible parameters, since the NN has to internally contain circuits that implement that computation. The constraint that the circuits actually have to do “something” reduces the number of possible configurations for those parameters by far more than the multiplicity of possible “somethings” the circuits might be doing increases it.
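To make the comparison concrete, here is a toy counting sketch (the discrete-parameter setup, the numbers, and the variable names are all illustrative assumptions of mine, not anything from the report): as long as the number of distinct “somethings” the extra circuits could be doing is smaller than the number of parameter settings those circuits pin down, the extra stage loses volume overall.

```python
# Toy counting sketch (illustrative assumptions, not a claim about real NNs):
# treat a "network" as N discrete parameters, each taking one of v values.
# The direct solution constrains k_task parameters; an "inner llamaizer"
# constrains those same k_task parameters *plus* k_llama more, which must
# implement one of M possible llama-generating circuits.

N, v = 50, 10             # total parameters, values per parameter
k_task, k_llama = 10, 10  # parameters pinned down by the task / by llama generation
M = 10**6                 # number of distinct llama circuits (at most v**k_llama)

direct_count = v ** (N - k_task)                   # configs for the direct solution
llamaizer_count = M * v ** (N - k_task - k_llama)  # configs with an extra llama stage

print(direct_count / llamaizer_count)  # 10**4: the direct solution is 10,000x more common
```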
So in the case of deceptive alignment counting arguments, they seem to be speculating that the NN’s cognition looks something like:
[have some internal goal x] [backchain from wanting x to the stuff needed to get x (doing well at training)] [figure out how to do well at training] [actually do well at training]
and in comparison, the “honest” / direct solution looks like:
[figure out how to do well at training] [actually do well at training]
and then, because there are so many different possibilities for “x”, they say there are more solutions that look like the deceptive cognition. My contention is that the steps “[have some internal goal x] [backchain from wanting x to the stuff needed to get x (doing well at training)]” in the deceptive cognition are actually unnecessary, and because implementing those steps requires circuits that instantiate those computations, the requirement that the deceptive model perform them actually *constrains* the set of parameter configurations that implement the deceptive cognition, which reduces the volume of deceptive models in parameter space.
One obvious counterpoint I expect is the claim that the “[have some internal goal x] [backchain from wanting x to the stuff needed to get x (doing well at training)]” steps actually do contribute to the later steps, maybe because they’re a short way to compress a motivational pointer to “wanting” to do well on the training objective.
I don’t think this is how NN simplicity biases work. Under the “cognitive executions impose constraints on parameter settings” perspective, you don’t actually save any complexity by supposing that the model has some motive for figuring stuff out internally, because the circuits required to implement the “figure stuff out internally” computations themselves count as additional complexity. In contrast, if you have a view of simplicity that’s closer to program description length, then you’re not counting runtime execution against program complexity, and so a program that has short length in code but long runtime can count as simple.
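To illustrate the two notions of simplicity (a toy example of my own, not anything from the discussion above): under a description-length prior the brute-force function below is the “simpler” one, but under the “parameters as complexity” view the runtime computation itself has to be implemented in circuits, so it counts as extra complexity rather than coming for free.

```python
# Toy contrast between "code length as complexity" and "parameters as complexity"
# (my own illustrative example).

def nth_prime_search(n: int) -> int:
    """Short description, lots of runtime computation (cheap under a code-length prior)."""
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return primes[-1]

# Precomputed table: longer description, near-zero runtime computation.
# This is the shape of solution the "parameters as complexity" view favors
# whenever storing the structure is cheaper than the circuits needed to compute it.
FIRST_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def nth_prime_lookup(n: int) -> int:
    return FIRST_PRIMES[n - 1]

assert nth_prime_search(5) == nth_prime_lookup(5) == 11
```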
@jkcarlsmith does seem more sympathetic to the “parameters as complexity” view than to the “code length as complexity” view. However, I go further and think that the “parameters as complexity” view actively points against deceptive alignment.
I also think NNs have an even stronger bias for short paths than you might expect from just the “parameters as complexity” view. Consider a problem that can either be solved with a single circuit of depth n, or by two parallel circuits of depth n/2 (so both solutions must set the values of an equal number of parameters, but one solution is more parallel than the other). I claim there are far more parameter configurations that implement the parallel solution than parameter configurations that implement the serial solution.
This is because the parallel solution has an entire permutation group that’s not available to the serial solution: the two parallel circuits can be moved to different relative depths with respect to each other, whereas all the parts of the serial solution must have fixed relative depths. Thus, the two parallel circuits represent less of a constraint on the possible configurations of the NN, and so there are far more NNs that implement the parallel solution.
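A toy way to see the placement point (illustrative numbers, and an obviously simplified picture of “placing circuits in layers”, both my own assumptions):

```python
# Toy placement count: L layers in the network, a serial circuit of depth n
# must occupy n consecutive layers, while two independent parallel circuits
# of depth n/2 can each start at any depth.

L, n = 32, 16

serial_placements = L - n + 1              # 17 ways to slot the depth-16 chain
half = n // 2
parallel_placements = (L - half + 1) ** 2  # 625 ways: each depth-8 circuit placed independently

print(serial_placements, parallel_placements)
# The parallel solution admits far more placements (and on this toy picture,
# far more parameter configurations), because the two sub-circuits' relative
# depths are not pinned down.
```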
As a consequence, I expect there are significant “short depth” biases in the NN simplicity prior, consistent with empirical results such as “Residual Networks Behave Like Ensembles of Relatively Shallow Networks”: https://arxiv.org/abs/1605.06431
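For intuition on how the linked result fits this picture, here is a small sketch (my own framing of the paper’s path-counting observation, with an illustrative block count): in a network of n residual blocks, a forward path either passes through or skips each block, so path lengths follow a binomial distribution concentrated well below the full depth.

```python
from math import comb

# Path-length distribution in a toy residual network (illustrative block count).
n = 54  # number of residual blocks

total_paths = 2 ** n
mean_length = sum(k * comb(n, k) for k in range(n + 1)) / total_paths
print(mean_length)  # 27.0, i.e. n/2: the typical path is half the nominal depth

# Fraction of paths that pass through more than 40 of the 54 blocks:
deep_share = sum(comb(n, k) for k in range(41, n + 1)) / total_paths
print(deep_share)   # a tiny fraction: very deep paths are rare
```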
Finally, I’m extremely skeptical of claims that NNs contain a ‘ghost of generalized instrumental reasoning’, able to perform very competent long term hidden scheming and deduce lots of world knowledge “in-context”. I think current empirical results point strongly against that being plausible.
For example, the “reversal curse” results (training on “A is B” doesn’t lead to models learning “B is A”). If the ghost can’t even infer from “A is B” to “B is A”, then I think stuff like inferring from “I have a goal x”, to “here is the specific task I must perform in order to maximize my reward” is pretty much out of the question. Thus, stories about how SGD might use arbitrary goals as a way to efficiently compress an (effective) desire for the NN to silently infer lots of very specific details about the training process seem incredibly implausible to me.
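(For concreteness, the kind of setup behind that result looks roughly like the sketch below; the facts and prompt format are made-up stand-ins of mine, not the paper’s actual data.)

```python
# Rough sketch of a reversal-curse probe (made-up facts and prompt format):
# finetune a model only on forward-direction statements, then test whether
# it can answer the reversed direction.

facts = [
    ("Mara Ellison", "the author of 'The Glass Meridian'"),
    ("Tobias Renn", "the composer of 'Nocturne for a Drowned City'"),
]

# Forward direction ("A is B"): used for finetuning.
train_texts = [f"{name} is {description}." for name, description in facts]

# Reversed direction ("Who is B?"): held out for evaluation.
test_prompts = [f"Who is {description}?" for _, description in facts]

print(train_texts[0])
print(test_prompts[0])
# The empirical finding: models finetuned on the forward statements answer
# "Who is Mara Ellison?" correctly but are near chance on the reversed prompts.
```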
I expect objections of the form “I expect future training processes not to suffer from the reversal curse, and it’s those future training processes I’m worried about.”
Obviously people will come up with training processes that don’t suffer from the reversal curse. However, the gap between how simple the reversal inference is and how capable current NNs are externally is still evidence about the relative power of the ‘instrumental ghost’ in the model compared to the external capabilities of the model. If a similar ratio continues to hold for externally superintelligent AIs, that casts enormous doubt on e.g. deceptive alignment scenarios where the model is internally and instrumentally deriving huge amounts of user-goal-related knowledge so that it can pursue its arbitrary mesa-objectives later down the line. I’m using the reversal curse to make a more general update about the types of internal cognition that are easy to learn and how they contribute to external capabilities.
Some other Tweets I wrote as part of the discussion:
The key points of my Tweet are basically “the better way to think about counting arguments is to compare constraints on parameter configurations”, and “corrected counting arguments introduce an implicit bias towards short, parallel solutions”, where both “counting the constrained parameters”, and “counting the permutations of those parameters” point in that direction.
I think shallow-depth priors are pretty universal. E.g., they also make sense from the perspective of “any given step of reasoning could fail, so best to make as few sequential steps as possible, since each step is rolling the dice”, as well as the perspective of “we want to explore as many hypotheses as possible with as little compute as possible, so best to have lots of cheap hypotheses”.
I’m not concerned about training for goal achievement contributing to deceptive alignment, because such training processes ultimately come down to optimizing the model to imitate some mapping from “goal given by the training process” → “externally visible action sequence”. Feedback is always upweighting cognitive patterns that produce certain externally visible action patterns (usually over short time horizons).
In contrast, it seems very hard to me to accidentally provide sufficient feedback to specify long-term goals that don’t distinguish themselves from short-term ones over short time horizons, given the common understanding in RL that credit assignment difficulties actively work against the formation of long-term goals. It seems more likely to me that we’ll instill long-term goals into AIs by “scaffolding” them via feedback over shorter time horizons. E.g., train GPT-N to generate text like “the company’s stock must go up” (short time horizon feedback), as well as text that represents GPT-N competently responding to a variety of situations and discussions about how to achieve long-term goals (more short time horizon feedback), and then put GPT-N in a continuous loop of sampling from a combination of the behavioral patterns thereby constructed, in such a way that the overall effect is competent long-term planning.
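A concrete sketch of the kind of scaffolding I have in mind (purely illustrative; generate() below is a hypothetical placeholder for however one samples from GPT-N, not a real API):

```python
# Illustrative scaffolding loop (hypothetical placeholder API, not a real one).
# Every individual model call is a short-horizon completion of the kind the
# model was trained on; the long-horizon behavior lives in the outer loop.

def generate(prompt: str) -> str:
    """Hypothetical placeholder for sampling a completion from GPT-N."""
    raise NotImplementedError

def long_horizon_loop(goal: str, max_steps: int = 100) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        # Each call only asks: given the goal and the log so far, what's the next step?
        prompt = f"Goal: {goal}\nLog so far:\n" + "\n".join(history) + "\nNext step:"
        step = generate(prompt)
        history.append(step)
        if "DONE" in step:  # illustrative stopping convention
            break
    return history
```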
The point is: long term goals are sufficiently hard to form deliberately that I don’t think they’ll form accidentally.
...I think the llama analogy is exactly correct. It’s specifically designed to avoid triggering mechanistically ungrounded intuitions about “goals” and “tryingness”, which I think inappropriately upweight the compellingness of a conclusion that’s frankly ridiculous on the arguments themselves. Mechanistically, generating the intermediate llamas is just as causally upstream of generating the asked-for images as “having an inner goal” is causally upstream of the deceptive model doing well on the training objective. Calling one type of causal influence “trying” and the other not is an arbitrary distinction.
My point about the “instrumental ghost” wasn’t that NNs wouldn’t learn instrumental / flexible reasoning. It was that such capabilities were much more likely to derive from being straightforwardly trained to learn such capabilities, and then to be employed in a manner consistent with the target function of the training process. What I’m arguing *against* is the perspective that NNs will “accidentally” acquire such capabilities internally as a convergent result of their inductive biases, and direct them to purposes/along directions very different from what’s represented in the training data. That’s the sort of stuff I was talking about when I mentioned the “ghost”.
What I’m saying is there’s a difference between a model that can do flexible instrumental reasoning because it’s faithfully modeling a data distribution with examples of flexible instrumental reasoning, versus a model that acquired hidden flexible instrumental reasoning because NN inductive biases say the convergent best way to do well on tasks is to acquire hidden flexible instrumental reasoning and apply it to the task, even when the task itself doesn’t have any examples of such.
I’m seeing your main argument here as a version of what I call, in section 4.4, a “speed argument against schemers”—e.g., basically, that SGD will punish the extra reasoning that schemers need to perform.
(I’m generally happy to talk about this reasoning as a complexity penalty, and/or about the params it requires, and/or about circuit-depth—what matters is the overall “preference” that SGD ends up with. And thinking of this consideration as a different kind of counting argument *against* schemers seems like it might well be a productive frame. I do also think that questions about whether models will be bottlenecked on serial computation, and/or whether “shallower” computations will be selected for, are pretty relevant here, and the report includes a rough calculation in this respect in section 4.4.2 (see also summary here).)
Indeed, I think that maybe the strongest single argument against scheming is a combination of
(1) “Because of the extra reasoning schemers perform, SGD would prefer non-schemers over schemers in a comparison re: final properties of the models” and
(2) “The type of path-dependence/slack at stake in training is such that SGD will get the model that it prefers overall.”
My sense is that I’m less confident than you in both (1) and (2), but I think they’re both plausible (the report, in particular, argues in favor of (1)), and that the combination is a key source of hope. I’m excited to see further work fleshing out the case for both (including e.g. the sorts of arguments for (2) that I took you and Nora to be gesturing at on twitter—the report doesn’t spend a ton of time on assessing how much path-dependence to expect, and of what kind).
Re: your discussion of the “ghost of instrumental reasoning,” “deducing lots of world knowledge ‘in-context,’” and “the perspective that NNs will ‘accidentally’ acquire such capabilities internally as a convergent result of their inductive biases”—especially given that you only skimmed the report’s section headings and a small amount of the content, I have some sense, here, that you’re responding to other arguments you’ve seen about deceptive alignment, rather than to specific claims made in the report (I don’t, for example, make any claims about world knowledge being derived “in-context,” or about models “accidentally” acquiring flexible instrumental reasoning). Is your basic thought something like: sure, the models will develop flexible instrumental reasoning that could in principle be used in service of arbitrary goals, but they will only in fact use it in service of the specified goal, because that’s the thing training pressures them to do? If so, my feeling is something like: ok, but a lot of the question here is whether using the instrumental reasoning in service of some other goal (one that backchains into getting-reward) will be suitably compatible with/incentivized by training pressures as well. And I don’t see e.g. the reversal curse as strong evidence on this front.
Re: “mechanistically ungrounded intuitions about ‘goals’ and ‘tryingness’”—as I discuss in section 0.1, the report is explicitly setting aside disputes about whether the relevant models will be well-understood as goal-directed (my own take on that is in section 2.2.1 of my report on power-seeking AI here). The question in this report is whether, conditional on goal-directedness, we should expect scheming. That said, I do think that what I call the “messiness” of the relevant goal-directedness might be relevant to our overall assessment of the arguments for scheming in various ways, and that scheming might require an unusually high standard of goal-directedness in some sense. I discuss this in section 2.2.3, on “‘Clean’ vs. ‘messy’ goal-directedness,” and in various other places in the report.
Re: “long term goals are sufficiently hard to form deliberately that I don’t think they’ll form accidentally”—the report explicitly discusses cases where we intentionally train models to have long-term goals (both via long episodes, and via short episodes aimed at inducing long-horizon optimization). I think scheming is more likely in those cases. See section 2.2.4, “What if you intentionally train the model to have long-term goals?” That said, I’d be interested to see arguments that credit assignment difficulties actively count against the development of beyond-episode goals (whether in models trained on short episodes or long episodes) for models that are otherwise goal-directed. And I do think that, if we could be confident that models trained on short episodes won’t learn beyond-episode goals accidentally (even irrespective of mundane adversarial training—e.g., that models rewarded for getting gold coins on the episode would not learn a goal that generalizes to caring about gold coins in general, even prior to efforts to punish it for sacrificing gold-coins-on-the-episode for gold-coins-later), that would be a significant source of comfort (I discuss some possible experimental directions in this respect in section 6.2).