Great question! I think the core of the answer comes down to the fact that the real danger of AI systems comes not from tools but from agents. There are strong incentives to build more agenty AIs: agenty AIs are more useful and powerful than tools, it’s likely to be relatively easy to build agents once you can build powerful tools, and tools may naturally slide into becoming agents at a certain level of capability. If you’re a human directing a tool, it’s pretty easy to point the tool’s optimization power in beneficial ways. Once you have a system with its own goals that it’s maximizing, you have much bigger problems.
Consequentialists seek power more effectively than other systems, so when you run a large enough program search over a diverse training task with a reinforcement signal attached, they will tend to become dominant. Internally targetable, maximization-flavored search is an extremely broadly useful mechanism which will be stumbled on and upweighted by gradient descent. See Rohin Shah’s AI risk from Program Search threat model for more details. The system which emerges from recursive self-improvement is likely to be a maximizer of some kind. And maximizing AI is dangerous (and hard to avoid!), as explored in this Rob Miles video.
To tie this back to your question: weak and narrow AIs can be safely used as tools, because we can keep a human in the outer loop directing the optimization power. Once you have a system much smarter than you, the thing it ends up pointed at maximizing is no longer corrigible by default, and you can’t course-correct if you misspecified the kind of facelikeness you were asking for. Specifying open-ended goals for a sovereign maximizer to pursue in the real world which don’t kill everyone is an unsolved problem.