I like this decomposition!
I think ‘Situational Awareness’ can quite sensibly be divided further into ‘Observation’ and ‘Understanding’.
The classic control loop of ‘observe’, ‘understand’, ‘decide’, ‘act’[1] is consistent with this discussion: ‘observe’ + ‘understand’ here are combined as ‘situational awareness’, and you’re pulling out ‘goals’ and ‘planning capacity’ as separable aspects of ‘decide’.
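As a concrete sketch of that factoring (entirely hypothetical; all the interface names here—`Sensor`, `WorldModel`, `Goals`, `Planner`, `Actuator`, `control_loop`—are invented for illustration, not anything from the post):

```python
from typing import Protocol

class Sensor(Protocol):
    def observe(self, env) -> dict: ...        # 'feedback'

class WorldModel(Protocol):
    def understand(self, observation: dict) -> dict: ...  # 'process model'/'inference'

class Goals(Protocol):
    def score(self, state: dict) -> float: ...

class Planner(Protocol):
    def decide(self, state: dict, goals: Goals) -> str: ...  # 'control algorithm'

class Actuator(Protocol):
    def act(self, env, action: str) -> None: ...  # 'actuate'/'affect'

def control_loop(env, sensor: Sensor, model: WorldModel,
                 goals: Goals, planner: Planner, actuator: Actuator,
                 steps: int = 10) -> None:
    for _ in range(steps):
        obs = sensor.observe(env)               # observe    } together:
        state = model.understand(obs)           # understand } 'situational awareness'
        action = planner.decide(state, goals)   # 'decide' = goals + planning capacity
        actuator.act(env, action)               # act
```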
Are there difficulties with this factoring?
Certain kinds of situational awareness are more or less fit for certain goals. Further, the important ‘really agenty’ move of making plans to improve situational awareness means that ‘situational awareness’ is quite coupled to ‘goals’ and to ‘implementation capacity’ in many advanced systems. That doesn’t mean those parts need to reside in the same subsystem, but it does mean we should expect arbitrary mix-and-match to work less well than co-adapted components; hard to say how much less (I think this is borne out by observations of bureaucracies and some AI applications to date).
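A toy illustration of the failure mode I have in mind (again hypothetical; `NavSensor` and `ThermostatPlanner` are invented names): a sensor co-adapted to one goal discards exactly the state a swapped-in planner for a different goal needs.

```python
class NavSensor:
    def observe(self, env: dict) -> dict:
        # Co-adapted to a navigation goal: surfaces position, drops everything else.
        return {"position": env["position"]}

class ThermostatPlanner:
    def decide(self, state: dict, goals: dict) -> str:
        # The arbitrary pairing fails: 'temperature' was never observed.
        return "heat" if state["temperature"] < goals["setpoint"] else "idle"

env = {"position": (0, 0), "temperature": 15.0}
state = NavSensor().observe(env)
# ThermostatPlanner().decide(state, {"setpoint": 20.0})  # KeyError: 'temperature'
```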
Terminology varies a lot; the terms above are RL-ish. Classical-control analogues might be ‘feedback’, ‘process model’/‘inference’, ‘control algorithm’, ‘actuate’/‘affect’…