It seems like the key aspect to carefully operationalize here is “AI control of resources and power.”
Yep, plus something like: “these AIs in control either weren’t intended to be successors, or were intended to be successors but are importantly misaligned (e.g. the group that appointed them would think ex post that it would have been much better if these AIs had been ‘better aligned’, or if that group could have retained control)”.
It’s unfortunate that the actual operationalization has to be so complex.