Sounds like AI will never align with “our” optimization targets, since we as humans can’t seem to agree on what those targets should be.
Exactly this. Combine that with the lack of any objective morality, and you have a massive problem for AI alignment.