On the last question at the end, I don’t think the fact that LLMs take their own outputs as inputs means we’d have to interpret the text space as the global workspace itself, or as everything in the global workspace, in order to think of LLMs as having feedback/recurrence.
It could be more like talking (or thinking out loud), which also lets the LLM guide its own attention. Each output text token would (also) act like an attention command for the next run of the network. The global workspace can still be internal, and the LLM’s outputs are just sensed by the LLM, much like hearing your own voice when you speak.
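To make that feedback picture concrete, here’s a minimal sketch of the loop, assuming the Hugging Face transformers library and GPT-2 purely as a stand-in model (my choices for illustration, not anything about how a particular LLM is deployed): each sampled token is appended to the input, so the next forward pass attends to the model’s own previous output, a bit like hearing your own voice as you speak.

```python
# Minimal sketch of text-level feedback in an autoregressive LLM.
# Assumes the Hugging Face transformers library and GPT-2 as a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# The prompt is just "the text so far"; no hidden state persists between passes.
input_ids = tokenizer("Thinking out loud:", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        # Full forward pass over everything written so far.
        logits = model(input_ids).logits
        next_token = logits[0, -1].argmax()  # greedy pick, for simplicity
        # The model's own output becomes part of its next input:
        # the only recurrence available at the text level.
        input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

The point of the sketch is just that the recurrence runs through the emitted tokens: whatever internal, high-dimensional state existed during one forward pass isn’t carried over directly, only whatever got written into the text.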
Using outputs this way also doesn’t necessarily mean the LLM is outputting everything it’s thinking (if it thinks at all), just like we don’t. It could have different internal thoughts, reflecting the high-dimensional stuff going on just before output (or even earlier).
Its text outputs could even be coded commands to help it generate hidden internal thoughts over time, summarising what it’s done internally or what to do next without saying so to us. I don’t think LLMs are doing this now, but maybe a superintelligent LLM could.
All this being said, the contents of an LLM’s global workspace could change pretty dramatically from one pass over the text to the next pass over the same text plus an extra token. Maybe unusually abruptly compared to animals.