Even in humans, language production is generally subconscious. At least, my experience of talking is that I generally first become conscious of what I say as I’m saying it. I have some sense of what I might want to say before I say it, but the machinery that selects specific words is not conscious. Sometimes, I think of a couple of different things I could say and consciously select between them. But often I don’t: I just hear myself speak. Language generation may often lead to conscious perceptions of inner speech, but it doesn’t seem to rely on them.
All of this suggests that the possibility of non-conscious chatbots should not be surprising. It may be that chatbots provide pretty good evidence that cognitive complexity can come apart from consciousness. But introspection alone should provide sufficient evidence for that.