That’s a great question. I’ll reply separately with my takes on progress on a) the pretty hard problem, b) the hard problem, and c) something called the meta-problem of consciousness [1].
[1] With apologies for introducing yet another ‘problem’ to distinguish between, when I’ve already introduced two! (Perhaps you can put these three problems into Anki?)
Progress on the pretty hard problem
This is my attempt to explain Jonathan Birch’s recent proposal for studying invertebrate consciousness. Let me know if it makes rough sense!
The problem with studying animal consciousness is that it is hard to know how much we can extrapolate from what we know about what suffices for human consciousness. Let’s grant that we know, from experiments on humans, that you will be conscious of a visual perception if you have a neural system for broadcasting information to multiple sub-systems in the brain (this is the Global Workspace Theory mentioned above) and that perception is broadcast. Great: now we know that this sophisticated human Global Workspace suffices for consciousness. But how much of it is necessary? How much simpler could the Global Workspace be and still give rise to consciousness?
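If it helps to see the shape of the idea, here’s a toy sketch in Python (my own cartoon for this reply, not code from Birch or the Global Workspace literature; every name in it is made up): several perceptions compete, one wins access to the workspace, and the winner is broadcast to all sub-systems. On the theory, the broadcast perception is the one you’re conscious of.

```python
# A cartoon of the Global Workspace architecture (my illustration only;
# all names here are invented). One perception wins access to the
# workspace and is broadcast to every sub-system.

from dataclasses import dataclass

@dataclass
class Perception:
    content: str
    salience: float  # how strongly this perception competes for access

def broadcast(perceptions, subsystems):
    """Pick the most salient perception and send it to all sub-systems."""
    winner = max(perceptions, key=lambda p: p.salience)
    for subsystem in subsystems:
        subsystem(winner.content)
    return winner

# Sub-systems are just functions that consume whatever is broadcast.
subsystems = [
    lambda c: print(f"memory stores: {c}"),
    lambda c: print(f"planning acts on: {c}"),
    lambda c: print(f"verbal report can describe: {c}"),
]

broadcast([Perception("red circle", 0.9), Perception("faint hum", 0.2)], subsystems)
```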
When we try to take a theory of consciousness “off the shelf” and apply it to animals, we face a choice of how strict to be. We could insist that the Global Workspace must be as complicated as in the human case; then no animals count as conscious. Or we could allow that the Global Workspace can be very simple; then maybe even simple computer programs count as conscious. To know how strict or liberal to be in applying the theory, we need to know which animals are conscious. Which is the very question!
Some people try to get around this by proposing tests for consciousness that avoid the need for theory (the Turing Test would be an example of this in the AI case). But these usually end up sneaking theory in through the back door.
Here’s Birch’s proposal for getting around this impasse.
1. Make a minimal theoretical assumption about consciousness, the ‘facilitation hypothesis’:

Phenomenally conscious perception, relative to unconscious perception, facilitates a “cluster” of cognitive abilities.

It’s a cluster because it seems like “the abilities will come and go together, co-varying in a way that depends on whether or not a stimulus is consciously perceived” (8). Empirically, we have evidence that the cluster includes abilities such as trace conditioning, rapid reversal learning, and cross-modal learning.
2. Look for these clusters of abilities in animals.
3. See whether the things that render perceptions unconscious in humans (flashing stimuli very briefly, and so forth) also ‘knock out’ that cluster in animals. If we can make the clusters come and go like this, it’s a pretty reasonable inference that what’s coming and going is consciousness. (A toy sketch of this inference follows below.)
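To make the inference pattern concrete, here’s a toy sketch (my own illustration, not Birch’s actual analysis; the data below are invented): we check whether the cluster of abilities passes and fails as a unit across masked and unmasked conditions.

```python
# Toy illustration of the co-variation inference (my invention, not
# Birch's actual analysis; the data below are made up). If masking a
# stimulus knocks out the whole cluster together, that co-variation is
# evidence of a common cause: on the facilitation hypothesis,
# conscious perception.

CLUSTER = ["trace_conditioning", "rapid_reversal_learning", "cross_modal_learning"]

def cluster_covaries(results):
    """results maps condition -> {ability: passed?}. True if, within each
    condition, the abilities all pass or all fail together."""
    return all(len(set(abilities.values())) == 1 for abilities in results.values())

# Hypothetical outcome: the unmasked stimulus supports the whole cluster,
# the masked stimulus supports none of it.
results = {
    "unmasked": {ability: True for ability in CLUSTER},
    "masked": {ability: False for ability in CLUSTER},
}

if cluster_covaries(results) and results["unmasked"] != results["masked"]:
    print("The cluster comes and goes as a unit across masking: just what "
          "the facilitation hypothesis predicts if consciousness is doing "
          "the facilitating.")
```

The real studies are of course statistical rather than all-or-nothing like this, but the shape of the inference is the same.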
As I understand it, Birch (a philosopher) is currently working with scientists to flash stuff at bees and so forth. I think Birch’s research proposal is a great conceptual advance, I find the empirical research itself very exciting, and I’m curious to see what comes out of it.