For Possibility 3, I guess you mean more specifically “Decorticate rats are not conscious, and neither are intact rats”, correct?
If so, I think you’re prematurely rejecting, let’s call it, Possibility 5: “Decorticate rats are not conscious, whereas intact rats are conscious.”
I think it’s just generally tricky to infer consciousness from behavior. For example, you mention “survive, navigate their environment, or interact with their peers… find their way around landmarks, solve basic reasoning tasks, and learn to avoid painful stimuli.” But deep-RL agents can do all those things too, right? Are deep-RL agents conscious? Well, maybe you think they are. But I and lots of people think they aren’t. At the very least, you need to make that argument, it doesn’t go without saying. And if we can’t unthinkingly infer consciousness from behavior in deep RL, then we likewise can’t unthinkingly infer consciousness from seeing not-very-different behaviors in decorticate rats (or any other animals).
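To make the deep-RL point concrete, here is a minimal sketch (an entirely hypothetical toy setup, not any particular published system): a few dozen lines of tabular Q-learning suffice for an agent to learn to avoid a "painful" stimulus and seek food, and nobody is tempted to call a lookup table of action values conscious.

```python
import random

# Hypothetical toy environment: a 5-cell corridor with a "shock" cell at
# one end (reward -10) and food at the other (reward +10, episode ends).
# The agent comes to avoid the noxious cell purely through value updates.

N_STATES = 5             # corridor cells 0..4
SHOCK, GOAL, START = 0, 4, 2
ACTIONS = (-1, +1)       # step left, step right

def step(state, action):
    """Move, clamped to the corridor; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == GOAL:
        return nxt, 10.0, True
    if nxt == SHOCK:
        return nxt, -10.0, False
    return nxt, -0.1, False  # small cost per move

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done, steps = START, False, 0
        while not done and steps < 50:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # standard Q-learning update
            target = r + gamma * max(q[(s2, b)] for b in ACTIONS) * (not done)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s, steps = s2, steps + 1
    return q

q = train()
# After training, the learned values favor stepping away from the shock
# cell and toward the food: q[(s, +1)] > q[(s, -1)] for s in 1..3.
```

The point is not that rats are doing tabular Q-learning; it's that "learns to avoid painful stimuli" is a behavioral description satisfiable by systems of wildly different internal organization.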
I also am a bit confused by your suggestion that decorticate-from-birth rats are wildly different from decorticate-from-birth primates. Merker 2007a argues that humans with hydranencephaly are basically decorticate-from-birth, and discusses all their behavior on p79, which very much seemed conscious to both Merker and the parents of these children, just as decorticate rats seem conscious to you. We don’t have to agree with Merker (and I don’t), but it seems that the basic issue is present in humans, unless of course Merker is mis-describing the nature of hydranencephaly. (I don’t know anything about hydranencephaly except from this one paper.)
(My actual [tentative] position is that, to the limited extent that phenomenal consciousness is a real meaningful notion in the first place, decorticate rats are not conscious, and intact rats might or might not be conscious, I don’t know, I’m still a bit hazy on the relevant neuroanatomy. I’m mostly a Graziano-ist, a.k.a. Attention Schema Theory.)
(My take on superior colliculus versus visual cortex is that they’re doing two very different types of computations, see §3.2.1 here.)
(Separately, mammal cortex seems to have a lot in common with bird pallium, such that “all mammals are conscious and no birds are conscious” would be a very weird position from my perspective. I’ve never heard anyone take that position, have you?)
For Possibility 3, I guess you mean more specifically “Decorticate rats are not conscious, and neither are intact rats”, correct?
That was what I meant when I started writing the section. When I finished, I decided I wanted to hedge my claims to not completely exclude the possibility you mention. In retrospect, I don’t think that hedge makes a lot of sense in the context of my overall argument.
Are deep-RL agents conscious? Well, maybe you think they are. But I and lots of people think they aren’t. At the very least, you need to make that argument, it doesn’t go without saying. And if we can’t unthinkingly infer consciousness from behavior in deep RL, then we likewise can’t unthinkingly infer consciousness from seeing not-very-different behaviors in decorticate rats (or any other animals).
It would be a mistake to infer consciousness from such behavior without making some assumptions about implementation. In the typical case, when people infer consciousness in animals on the basis of similar behaviors, I take it that they implicitly assume a similarity in brain structures that would account for the similarity in behaviors. That assumption doesn't hold for RL agents, which might use radically different architectures to produce the same ends, and it holds only to a much lesser extent in animals with very different sorts of brains, like octopuses (or possibly, given these studies, rats).
I’m not completely unsympathetic with the thought that the cortex is necessary for consciousness in rats.
Faculties for consciousness might exist in the cortex just to support complex action planning; when the cortex is lost, the behavioral effects would be minor and revealed only by tasks requiring complex actions. If it is plausible that rats have conscious experiences produced solely within their cortex, that would undermine my claim about the overall upshot of these studies.
I do think it is somewhat counterintuitive for consciousness to exist in rats yet not be necessary for basic behaviors: for them to feel pain, say, but not need to feel pain in order to be motivated to avoid noxious stimuli.
I also am a bit confused by your suggestion that decorticate-from-birth rats are wildly different from decorticate-from-birth primates. Merker 2007a argues that humans with hydranencephaly are basically decorticate-from-birth, and discusses all their behavior on p79, which very much seemed conscious to both Merker and the parents of these children, just as decorticate rats seem conscious to you.
It has been a while since I've looked through that literature. My recollection is that the case was very unconvincing, and a lot of it looked like cherry-picked examples and biased reasoning. The important point is that decorticate-from-birth humans cannot act to nearly the extent that rats can. They can orient toward interesting phenomena and have some control over muscles for things like smiling or kicking, but they can't walk into the kitchen to get themselves a snack. I also think it is important that rats exhibit these capacities even when they lose their cortex in adulthood.
mammal cortex seems to have a lot in common with bird pallium, such that “all mammals are conscious and no birds are conscious”
I've heard the similarity claim a lot, but I've never been able to track down very convincing details. Birds are clearly very smart, and their pallia have evolved to solve the same sorts of problems as our cortices, but I would be surprised if there were strong neuroscientific grounds, independent of behavior, for thinking that if one group were conscious, the other would be too.
As for whether anyone takes that position, Brian Key and Jack Rose have denied consciousness to fish specifically because of the differences in their forebrains. I'm not sure what they would say about birds.