John Wentworth has a post on Godzilla strategies where he claims that using an AGI to solve the alignment problem is like asking Godzilla to make a larger Godzilla behave. How will you ensure you don’t overshoot the intelligence of the agent you’re using to solve alignment and fall into the “Godzilla trap”?
(Leike responds to this here if anyone is interested)