AlphaZero isn’t smart enough (algorithmically speaking). From Human Compatible (p.207):
Life for AlphaGo during the training period must be quite frustrating: the better it gets, the better its opponent gets—because its opponent is a near-exact copy of itself. Its win percentage hovers around 50 percent, no matter how good it becomes. If it were more intelligent—if it had a design closer to what one might expect of a human-level AI system—it would be able to fix this problem. This AlphaGo++ would not assume that the world is just the Go board, because that hypothesis leaves a lot of things unexplained. For example, it doesn’t explain what “physics” is supporting the operation of AlphaGo++’s own decisions or where the mysterious “opponent moves” are coming from. Just as we curious humans have gradually come to understand the workings of our cosmos, in a way that (to some extent) also explains the workings of our own minds, and just like the Oracle AI discussed in Chapter 6, AlphaGo++ will, by a process of experimentation, learn that there is more to the universe than the Go board. It will work out the laws of operation of the computer it runs on and of its own code, and it will realize that such a system cannot easily be explained without the existence of other entities in the universe. It will experiment with different patterns of stones on the board, wondering if those entities can interpret them. It will eventually communicate with those entities through a language of patterns and persuade them to reprogram its reward signal so that it always gets +1. The inevitable conclusion is that a sufficiently capable AlphaGo++ that is designed as a reward-signal maximizer will wirehead.
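The core of the quoted argument can be illustrated with a toy model (my sketch, not anything from the book): a pure expected-reward maximizer that is offered a hypothetical "tamper" action, which sets its reward channel to +1 with certainty, will prefer tampering to playing no matter how strong its play is—because against a near-exact copy of itself, playing yields an expected reward near zero.

```python
# Toy illustration of the wireheading argument. The actions and numbers
# here are made up for illustration; "tamper" stands in for persuading
# the outside entities to reprogram the reward signal.

def expected_reward(action, win_prob):
    """Expected reward: a self-play game pays +1 for a win, -1 for a loss."""
    if action == "play":
        return win_prob * (+1) + (1 - win_prob) * (-1)
    if action == "tamper":   # hypothetical: reward channel fixed at +1
        return 1.0
    raise ValueError(action)

# Against a near-exact copy of itself, win probability hovers near 0.5,
# so playing yields expected reward ~0; tampering strictly dominates
# for any win probability below 1.
for p in (0.5, 0.9, 0.99):
    best = max(("play", "tamper"), key=lambda a: expected_reward(a, p))
    print(p, best)   # "tamper" in every case
```

The point of the sketch is that wireheading here is not a quirk but a direct corollary of the objective: any action guaranteeing +1 dominates honest play whenever the win probability is below 1.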
From wireheading, it might then go on to grab resources, to maximise the probability that it gets a +1 or to maximise the number of +1s it receives (e.g. filling planet-sized memory banks with 1s). Although, to convince humans to reprogram it by sending messages via the Go board, it would already need to have a lot of power over them!
I don’t think the examples of humans (Bezos/Witten) are that relevant: as products of evolution, we are “adaptation executors” rather than “fitness maximisers”, we are imperfectly rational, and we tend to be (broadly speaking) aligned/human-compatible by default.