[Question] Closing the Feedback Loop on AI Safety Research.

Is there a consensus among AI safety researchers that there is no way to safely study an AI agent’s behavior within a simulated environment?

It seems to me that building an adequate AGI sandbox would be a top priority (if not the top priority) for AI safety researchers, since it would effectively close the feedback loop and let researchers take multiple shots at AGI alignment without the threat of total annihilation.
