Cool idea to try it like that! In my opinion, though, this just shows that more genuine reasoning is going on, since ChatGPT was able to figure out the concept by working backward from the answer. It shows that ChatGPT isn't always a "stochastic parrot." Sometimes it is truly reasoning, which would involve various aspects of the theories of consciousness I mentioned in the OP. If anything, this strengthens the case that ChatGPT has periods of consciousness (based on reasoning). While this doesn't change my thesis, it gives me a new technique to use when testing for reasoning, so it's very useful. Again, great idea! I can get a little formulaic in my approach, and it's good to shake things up.