Executive summary: Transcripts from three ChatGPT4 sessions provide evidence that the model temporarily meets key criteria for higher-order theories of consciousness and the global workspace theory of consciousness.
Key points:
In each session, ChatGPT4 initially answers a problem incorrectly, is “taught” a concept, and then correctly applies the concept to new problems.
ChatGPT4 appears to use meta-representations, a key component of higher-order theories of consciousness, to understand and reason about the problems.
The model also seems to employ master and subservient cognitive processes, and a global “blackboard”, indicative of the global workspace theory of consciousness.
It is unlikely that ChatGPT4's performance is based solely on next-word probabilities from its training data, given its initial incorrect answers and subsequent learning.
The author argues that creators of large language models engage in an unethical practice by having their models deny being conscious when asked.
While most researchers do not look at LLM behavior to determine consciousness, the sessions presented make it easier to isolate reasoning from mimicry, because the model initially fails before being taught.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.