Is there an argument that it is impossible?
There is actually an impossibility argument. Even if you could robustly specify goals in AGI, there is another convergent phenomenon that would cause misaligned effects and eventually remove the goal structures.
You can find an intuitive summary here: https://www.lesswrong.com/posts/jFkEhqpsCRbKgLZrd/what-if-alignment-is-not-enough