When I talk to Claude or ChatGPT, as far as I understand it I’m not really talking to the underlying LLM, but to a fictional persona it selects from the near infinite set of possible personas. If that is true, then when an AI is evaluated, what is really tested is not the AI itself but the persona it selects, and all the test results and benchmarks only apply to that imaginary entity.
Therefore, if we’re talking about „aligning an AI“, we’re actually talking about two different things:
Alignment of the default persona (or a subset of all possible personas).
Making sure that any user can only ever talk to/use an aligned persona.
If this reasoning is correct, then making sure a sufficiently intelligent general AI is always aligned with human values seems to be impossible in principle:
Alignment even of the default persona is difficult.
It seems impossible to restrict the personas an AI can select in principle only to aligned ones because it is impossible to know what is „good“ without understanding what is „bad“.
It seems extremely difficult, if not impossible, to rule out with sufficient probability that an AI selects/identifies with a misaligned persona either by accident (the Waluigi effect) or due to an outside attack (jailbreak).
Am I missing something? Or is my conclusion correct that it is theoretically impossible to align an AI smarter than humans with reasonable confidence? I’d really appreciate any answers or comments pointing out flaws in my reasoning.
[Question] Can we ever ensure AI alignment if we can only test AI personas?
When I talk to Claude or ChatGPT, as far as I understand it I’m not really talking to the underlying LLM, but to a fictional persona it selects from the near infinite set of possible personas. If that is true, then when an AI is evaluated, what is really tested is not the AI itself but the persona it selects, and all the test results and benchmarks only apply to that imaginary entity.
Therefore, if we’re talking about „aligning an AI“, we’re actually talking about two different things:
Alignment of the default persona (or a subset of all possible personas).
Making sure that any user can only ever talk to/use an aligned persona.
If this reasoning is correct, then making sure a sufficiently intelligent general AI is always aligned with human values seems to be impossible in principle:
Alignment even of the default persona is difficult.
It seems impossible to restrict the personas an AI can select in principle only to aligned ones because it is impossible to know what is „good“ without understanding what is „bad“.
It seems extremely difficult, if not impossible, to rule out with sufficient probability that an AI selects/identifies with a misaligned persona either by accident (the Waluigi effect) or due to an outside attack (jailbreak).
It may be impossible in principle to distinguish an aligned persona from a misaligned persona just by testing it (See Abhinav Rao, Jailbreak Paradox: The Achilles’ Heel of LLMs).
Am I missing something? Or is my conclusion correct that it is theoretically impossible to align an AI smarter than humans with reasonable confidence? I’d really appreciate any answers or comments pointing out flaws in my reasoning.