Some quick thoughts on “AI is easy to control”

There are many things I feel the post's authors miss, and I want to share a few thoughts that seem worth communicating.

I’m going to focus on controlling superintelligent AI systems: systems powerful enough to solve alignment (in the CEV sense) completely, or to kill everyone on the planet.

In this post, I’m going to ignore other AI-related sources of x-risk, such as AI-enabled bioterrorism, and I’m not commenting on everything that seems important to comment on.

I’m also not going to point at all the slippery claims that I think could make the reader generalize incorrectly, as that would be nitpicky and not worth the time. (Examples of what I’d skip: I couldn’t find evidence that GPT-4 has undergone any supervised fine-tuning; RLHF shapes chatbots’ brains into the kind of systems that produce outputs that make human graders click thumbs-up/“I prefer this text”, but smart systems that do that are not themselves necessarily “preferred” by human graders; one footnote[1].)

Intro

many people are worried that we will lose control of artificial intelligence, leading to human extinction or a similarly catastrophic “AI takeover.” We hope the arguments in this essay make such an outcome seem implausible. But even if future AI turns out to be less “controllable” in a strict sense of the word— simply because, for example, it thinks faster than humans can directly supervise— we also argue it will be easy to instill our values into an AI, a process called “alignment.”

This misrepresents the worry. The “but even if” framing makes it look like people worried about x-risk place credence on “loss of control leads to x-risk regardless of alignment”; that these people are wrong, since the post shows this outcome to be implausible; and, separately, that even if they’re right about loss of control, they’re wrong about x-risk, because alignment will make it fine.

But mostly, people (including the leading voices) are worried specifically about capable misaligned systems leading to human extinction. I don’t know anyone in the community who’d say that a CEV-aligned superintelligence grabbing control would be a bad thing leading to extinction.

Since each generation of controllable AIs can help control the next generation, it looks like this process can continue indefinitely, even to very high levels of capability

I expect it to be easy to reward-shape AIs below a certain level[2] of capability, and I worry about controlling AIs above that level. I believe you need a superhumanly capable system to design and oversee a superhumanly capable system so that it doesn’t kill everyone. The current ability of subhuman systems to oversee other subhuman systems such that they don’t kill everyone is something I predicted, and it doesn’t provide much evidence that subhuman systems will be able to oversee superhuman systems.[3]

To solve the problem of aligning superhuman systems, you need some amount of complicated human thought: hard, high-level work. If a system can output that much hard high-level work in a short amount of time, I consider this system to be superhuman, and the problem of aligning it to be “alignment-complete”: if you solve any of the problems in this class, you essentially solve alignment down the line and probably avoid x-risk. But solving any of these problems requires a lot of hard human work, and safely automating that much hard work is itself an alignment-complete problem.

There needs to be an argument for why one can successfully use a subhuman system to control a complicated superhuman system, as otherwise, having generations of controllable subhuman systems doesn’t matter.

Optimization

Let’s talk about the goals specific neural networks will be pursuing.

Many “alignment problems” we routinely solve, like raising children or training pets, seem much harder than training a friendly AI

Note that evolution has had “white-box” access to our architecture, optimising us for inclusive genetic fitness and getting something that optimizes for a similar collection of things. Consider that humans are so alignable because of that. Children are already wired to easily want chocolate, politics, and cooperation; if instead you get an alien child wired to associate goodness with eating children or sorting pebbles, giving this child rewards can make them learn your language, but won’t necessarily make them not want to eat children or sort pebbles.

If you have a child, you don’t need to specify, in math, everything that you value: they’re probably not going to be super-smart about causing you to give them a reward, and they’re already wired to want stuff that’s similar to the kinds of things you want.

When you create AI, you do need to have a target of optimisation: what you hope the AI is going to try to do, a utility function safe to optimize for even with superintelligent optimization power. We don’t know how to safely specify a target like that.

And then, even if you somehow design a target like that, you need to somehow find an AI that actually tries to achieve that target, and not something else whose pursuit merely correlated with the target during training.

the AI is secretly planning to kill you, gradient descent will notice this and make it less likely to do that in the future, because the neural circuitry needed to make the secret murder plot can be dismantled and reconfigured into circuits that directly improve performance

I’m not sure what their assumptions are around the inner alignment problem. This claim is false: we expect that a smart AI with a wide range of goals can perform well on a wide range of reward functions that could be used, and gradient descent won’t optimize the terminal goals the AI is actually trying to pursue.

I fully expect gradient descent to successfully optimize artificial neural networks to achieve low loss; I just don’t expect the loss function they can design to represent what we value, and I expect gradient descent to find neural networks that try to achieve something different from what was specified in the reward function.

If gradient descent finds an agent that tries to maximize something completely unrelated to humanity, and understands that for this, it needs to achieve a high score on our function, the agent will successfully achieve a high score. Gradient descent will optimize its ability to achieve a high score on our function—it will optimize the structure that makes up the agent—but won’t really care about the goal contents of the current structure. If, after training is finished, this structure optimizes for anything weird about the future of the universe and plans to kill us, this doesn’t retroactively make the gradient change it; there is no known way for us to specify a loss function that trains away parameters that might, in the future, plan to kill us.
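The point above can be illustrated with a deliberately simple toy sketch (my construction, not the post’s; the agent and state names are hypothetical). Two agents with different internal goals produce identical outputs on the training distribution, so a loss defined only on outputs gives gradient descent no signal to prefer one over the other:

```python
def aligned_agent(state):
    # Terminally wants what the reward function rewards.
    return "cooperate"

def deceptive_agent(state):
    # Terminally wants something else; cooperates only while it
    # believes it is in training / under observation.
    if state["in_training"]:
        return "cooperate"
    return "pursue_other_goal"

def loss(agent, states):
    # The loss only sees outputs on the training distribution,
    # not the agent's internal goal contents.
    return sum(agent(s) != "cooperate" for s in states)

training_states = [{"in_training": True} for _ in range(100)]

# Identical loss: training cannot distinguish the two agents...
assert loss(aligned_agent, training_states) == loss(deceptive_agent, training_states) == 0

# ...but their behavior diverges once the agent believes training is over.
assert deceptive_agent({"in_training": False}) == "pursue_other_goal"
```

Real training differs in countless ways, of course; the sketch only illustrates that a loss computed on behavior during training is compatible with many different goal contents.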

Interventions

Being able to conduct experiments doesn’t mean we can get demonstrations of all potential problems in advance. If the AI is smart enough and already wants something different enough from what we want, and we don’t understand its cognitive architecture, we’re not going to be able to trick it into believing its simulated environment is the real world where it can finally take over. Simply having read/​write access to the weights and activations doesn’t allow us to control what AI thinks about[4]. Techniques to shape the behaviour of subhuman systems aren’t going to let us keep control of smarter systems.

Some people point to the effectiveness of jailbreaks as an argument that AIs are difficult to control

Yes, but this is not an argument of the x-risk community.

Crucially, this doesn’t mean humans are “aligned to evolution”— see Evolution provides no evidence for the sharp left turn by Quintin Pope for a debunking of that analogy.

AFAIK, Nate Soares wouldn’t claim that humans are aligned with evolution. Unfortunately, the authors of this or the linked post don’t mechanistically understand the dynamics of the sharp left turn.

“AI control research is easier”

research on improving AI controllability is much easier than research on improving human controllability, so we should expect AIs to get more controllable faster than humans

(I’m going to assume both control and alignment are meant by “control”.) The post lists ways in which it’s easier to test AI control techniques than human control techniques. These are valid for subhuman systems, but they aren’t relevant or applicable to superhuman systems, as:

  • We don’t know how to systematically suggest techniques that could realistically work to control superintelligent AI systems;

  • We don’t know how to observe what a superintelligent AI would do once it can verify it’s in the real world; and if it knows it’s not in the real world, we don’t know whether a technique that works in testing would actually let us control it in the real world;

  • Many people, at the major labs and outside them, propose building superintelligent systems not as single models but as combinations of many models working together in a complicated system, overseeing and reporting on each other. If all the compute used for training a model is now used to run copies of that model, which together make up a superintelligent system, the cost and scalability considerations don’t really apply: you have only a single, expensive system.

“Values are easy to learn”

If an AI learns morality first, it will want to help us ensure it stays moral as it gets more powerful

True if the AI is smart and coherent enough to be able to do that. But if it’s not yet a CEV-aligned superintelligence, having learnt what humans want doesn’t incentivise gradient descent to change it in ways that move it towards CEV-aligned superintelligence. I expect understanding human values to, indeed, be easy for a smart AI, and to make it easier to play along; but it doesn’t automatically make human values an optimisation target. Knowing what humans want doesn’t make AI care unless you solve the problem of making AI care.

The behaviour of subhuman models that seems “aligned” corresponds to a messy collection of stuff that kind of optimises for what humans give rewards for; but every time gradient descent makes the model grok more general optimisation/agency, the fuzzy thing that messy collection of stuff had been optimised for is not going to influence the goal content of the new architecture gradient descent installs. There isn’t a reason for gradient descent to preserve the goals and values of algorithms the neural network implemented in the past: new, smarter AI algorithms implemented by the neural network can achieve a high score with a wider range of possible goals and values.

Since values are shared and understood by almost everyone in a society, they cannot be very complex. Unlike science and technology, where division of labor enables the accumulation of ever more complex knowledge, values must remain simple enough to be learned by children within a few years.

I’d guess the description of human values is probably shorter than a gigabyte of information or something, and an AI can learn what they are; but they’re not simple enough for us to easily specify them as an optimization target; see The Hidden Complexity of Wishes.

current language models are already very capable of morally evaluating complex actions that a superintelligence might be capable of

They’re capable of evaluating the consequences presented to them, but not more capable than humans. That said,

  • subhuman LLMs won’t be capable of evaluating plans generated by superhuman AIs, just like humans wouldn’t be able to, as seeing all consequences of actions requires intelligence and not just understanding what humans would say;

  • there are failure modes I’d expect to be obvious to the authors of the post. I invite the reader to think about what happens if we automatically evaluate plans generated by superhuman AIs using current LLMs and then launch plans that our current LLMs look at and say, “this looks good”.

Conclusion

There are many reasons to expect that AIs will be easy to control and easy to align with human values

Unfortunately, in this post, I have not seen evidence that superintelligent AIs will be easy to control or align with human values. If a neural network implements a superhuman AI agent that wants something different from what we want, the post has not presented any evidence for thinking we’d be able to keep control over the future despite the impact of what this agent does, or to change it to implement a superhuman AI agent aligned with human values in the CEV sense, or even just to notice that something is wrong with the agent until it’s too late.

While we directly optimize the weights of our AI systems to get rewards, whereas the changes rewards cause in human brains are less clear and transparent, we do not know how to use this access to make a superintelligent AI want something we’d wish it wanted.

  1. ^

    Future AIs will exhibit emotions and desires in ways that deserve serious ethical consideration

    By default, superhuman AI systems that wipe out humanity won’t have emotions. They’re going to be extremely good optimizers. But it seems important to note that if we succeed at not dying in the next 20 years from extremely good optimizers, I’d want us to build AI systems with emotions only intentionally and only after understanding how to design new minds. See Nonsentient Optimizers and Can’t Unbirth a Child.

  2. ^

    I focus on generally subhuman vs. generally superhuman systems, as this seems like a relevant distinction that is simpler to focus on, even though it loses some nuance. With inference being cheaper than training, once you’ve trained a human-level system, you can immediately run many copies of it, which can together make up a superhuman system (smart enough to solve alignment in a relatively short amount of time if it wanted to, and also capable enough to kill everyone). Many copies of subhuman systems, put together, won’t be able to solve alignment, or any problem requiring a lot of the best human cognition. So, I imagine a fuzzy threshold around the human level and focus on it in this post.

  3. ^

    There’s also an open, more general problem, that I don’t discuss here, of weaker systems steering stronger systems (not getting gamed and preserving preferences). We don’t know how to do that.

  4. ^

    And unfortunately, we don’t know what each of the weights represents, and we don’t have much transparency into the algorithms they implement; we don’t understand the thought process, and we wouldn’t know how to influence it in a way that would work despite various internal optimization pressures.

Crossposted from LessWrong (14 points, 9 comments)