I have never been satisfied by the “AI infers that it is simulated and changes its behavior” argument, because the root issue always seems to be that some information has leaked into the simulation. The problem shifts from “How do we prevent an AI from escaping a box?” to “How do we prevent information from entering the box?” The components of this problem are:
What information is communicated via the nature of the box itself?
What information is built into the AI itself?
What information is otherwise entering the box?
These questions seem relatively approachable compared to other avenues of AI safety research.