Had a thought recently that “self-aware” could be interpreted as something like “has an internal model of the context that it is in, as an artificial neural network running on computer hardware, built by agents known as humans”, rather than anything involving consciousness. An ML model that contains a model of itself and its context could, in this sense, be thought of as (non-consciously) self-aware, despite essentially being a giant pile of linear algebra.