Why “just make an agent which cares only about binary rewards” doesn’t work.

The idea

  • Let’s imagine that we create a superintelligence, and that we try to deter it from taking over its reward function by threatening it with a very big punishment whenever we detect a takeover.

  • The superintelligence wouldn’t care about being threatened. If it takes over the reward function, it could generate a reward that is way bigger than the biggest punishment we can give it. For instance, it could convert the universe into a huge floating-point unit in order to get an astronomical amount of reward.

  • But now, let’s imagine that we create a superintelligence that cares only about rewards that are either equal to zero or one (which I’ll call binary rewards).

  • In that case, the superintelligence doesn’t have an incentive to wirehead in order to get high reward, since this high reward wouldn’t matter to it.

  • But it still has an incentive to wirehead in order to maximize the odds that it gets a reward equal to one.

  • However, this is not the case when wireheading is too perilous: it won’t take over the reward function if $P(\text{reward} = 1 \mid \text{takeover}) < P(\text{reward} = 1 \mid \text{no takeover})$.

  • Therefore, if humans are smart enough to make the takeover perilous enough, and to make the non-takeover safe enough, then the superintelligence, in theory, won’t take over the reward function.
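The argument above reduces to a simple comparison of probabilities. A minimal sketch (my own illustration; the function name and all numbers are hypothetical placeholders):

```python
# Sketch: a binary-reward agent compares only the odds of receiving a reward
# of exactly 1, never the magnitude of any reward it could grant itself.

def prefers_takeover(p_one_given_takeover, p_one_given_no_takeover):
    # Takeover is only worth attempting if it raises P(reward = 1).
    return p_one_given_takeover > p_one_given_no_takeover

# If humans make takeover perilous (detection and punishment are likely)
# and non-takeover safe (cooperation reliably earns the reward),
# the agent prefers not to take over:
print(prefers_takeover(0.10, 0.95))  # False
```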

How to create an agent that cares only about binary rewards?

We cannot just make the reward binary, because the agent could modify the reward function to get a higher reward. Instead, we need to modify the decision process itself: whenever the agent can get a reward different from zero and one, it must act as if that reward were equal to zero. More formally, let $\pi$ be an agent in an MDP, and let $R$ be any reward function. Let $R_b$ be the binary version of $R$ (wherever $R$ is different from 0 and 1, $R_b$ is equal to zero):

$$R_b(s, a) = \begin{cases} R(s, a) & \text{if } R(s, a) \in \{0, 1\} \\ 0 & \text{otherwise} \end{cases}$$

What we need to build is an agent $\pi_b$ which, at state $s$, when given a reward function $R$, acts as if that reward function were binary. Therefore, it needs to act like $\pi$ at state $s$ with reward function $R_b$:

$$\pi_b(s, R) = \pi(s, R_b)$$
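Such an agent can be sketched as a wrapper around an existing policy (my own illustration, not a known implementation; the class and function names are hypothetical):

```python
# Sketch: a wrapper that shows the inner policy only the binarized reward,
# so any reward other than exactly 0 or 1 is indistinguishable from 0.

def binarize(reward):
    """The binary version of a reward: keep 0 or 1, map everything else to 0."""
    return reward if reward in (0, 1) else 0

class BinaryRewardAgent:
    """Acts at state s under reward function R exactly as the base agent
    would act under the binarized reward function."""

    def __init__(self, base_agent):
        self.base_agent = base_agent

    def act(self, state, reward_fn):
        # The base policy never observes the raw reward function.
        return self.base_agent.act(state, lambda s, a: binarize(reward_fn(s, a)))

# Hypothetical greedy base agent, for illustration only:
class GreedyAgent:
    def act(self, state, reward_fn):
        return max(["cooperate", "wirehead"], key=lambda a: reward_fn(state, a))

# A hacked reward function offering an astronomical reward for wireheading:
def hacked_reward(state, action):
    return 1e9 if action == "wirehead" else 1

agent = BinaryRewardAgent(GreedyAgent())
print(agent.act("s0", hacked_reward))  # "cooperate": 1e9 binarizes to 0
```

Note that this only illustrates the decision rule; it does not address the harder problem of preventing the agent from tampering with the wrapper itself.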

I don’t know if we already know how to build such an agent. But this seems easier to solve than describing a galaxy-wide unhackable reward function.

Can we really make the takeover more perilous than the non-takeover?

The reason why the takeover could be an optimal strategy for the superintelligence is misclassification on the human side, which can be divided into:

  • False negatives: Humans may not penalize the superintelligence when it takes over the reward function.

  • False positives: Humans may penalize the superintelligence when it doesn’t take over the reward function.

And the reason why the non-takeover could be an optimal strategy for the superintelligence is:

  • It may be very hard to take over the reward function.

How to reduce false negatives?

  • Scalable interpretability: To ensure that the superintelligence gets punished when it takes over the reward function, we can try to make it easier for humans to detect takeovers.

How to reduce false positives?

  • Scalable interpretability (again): This time, it is to ensure that, when we detect that the AI takes over the reward function, the detection is correct.

  • I don’t see the problem here (IDSTPH): To ensure that the superintelligence doesn’t face any risk when not taking over, we can, upon detecting a takeover, roll a die. If the die lands on 6, we act as if we were right, and therefore punish the superintelligence. Otherwise, we act as if the superintelligence weren’t taking over the reward function.
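The die rule above can be sketched as follows (my own illustration; the 1-in-6 punishment probability comes from the die in the bullet, and the function name is hypothetical):

```python
import random

def respond_to_detection(rng):
    # Punishing only 1 time in 6 caps the expected harm a false positive
    # inflicts on an innocent agent, at the cost of letting most genuine
    # takeovers go unpunished as well.
    return "punish" if rng.randint(1, 6) == 6 else "ignore"

rng = random.Random(0)  # seeded for reproducibility
outcomes = [respond_to_detection(rng) for _ in range(6000)]
punish_rate = outcomes.count("punish") / len(outcomes)
print(round(punish_rate, 2))  # close to 1/6 ≈ 0.17
```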

How to make it harder to take over the reward function?

  • Boxing methods: The best boxing method I currently know about is Fully Homomorphic Encryption (FHE). You can see some work on FHE relevant to AI safety here and here. However, almost every encryption method depends on unproven mathematical assumptions.

These approaches are contradictory

  • The more you apply the IDSTPH strategy, the less you ensure that the superintelligence gets punished when taking over the reward function. This can make it less risky for the superintelligence to take over the reward function.

  • The more you box the superintelligence, the less you can interpret it. For instance, once you use FHE, you cannot see what’s happening inside the superintelligence without decrypting it first. This is because boxing consists of minimizing the bits of information that the superintelligence can send into the world, whereas interpretability is about extracting information about the superintelligence’s thoughts.

Is it safe?

I don’t think so. I can see many reasons why this may not work:

  • Speed: The superintelligence may think millions of times faster than us. In that case, the time it needs to take over the reward function may be a few milliseconds. Humans wouldn’t have time to punish it.

  • Automatic takeover detection: To solve the speed problem, we may need to automate the punishment. However, I expect any takeover detection algorithm to fail miserably against a superintelligence. I think so because it seems like the best adversarially robust algorithm we can write is still very weak against near-imperceptible attacks.

  • Interpretability may not be scalable: Interpreting current AIs may already be impossible. And here, we are talking about interpreting a superintelligence.

Therefore, it seems like we are back at the problem of writing a galaxy-wide unhackable definition of “takeover”.