We’re in an analogous situation with AI. AI is too complex for us to fully understand what it does (by design), and this is also true of mundane, human-programmed software (ask any software engineer who has worked on something more than 1k lines long whether their program ever did anything unexpected, and I can promise you the answer is “yes”). Thus, although we in theory have control over what goes on inside AI, in practice we have far less than it seems at first — so much so that we often have better models of how humans decide to do things than we do of how AI does.