FWIW, I meant “How could they not be conscious?” kind of rhetorically, but I appreciate your response. Making it more concrete like this is helpful. My comment here is pretty object-level about the specific views in question, so feel free not to respond to it or any specific points here.
Global workspace theory (...)
There probably still need to be "workspaces", e.g. working memory (+ voluntary attention?), or else the robots couldn't do many sophisticated things flexibly, and whatever those workspaces are could be global workspaces. Maybe each module has its own workspace, so is "global" to itself, and that's enough. Or, if the workspaces are considered together as one combined system, then it could be a more conventional "global workspace", just distributed. The differences don't seem significant at this level of abstraction. Maybe they are, but I'd want to know why. So my direct intuitive reaction to "GWT is true and the robots aren't conscious" could be unreliable, because the scenario is hard to entertain.
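To make that point concrete, here's a toy sketch (purely illustrative; the names are hypothetical and this isn't any actual GWT model): the same broadcast/read machinery gives you either one shared "global" workspace or per-module workspaces, depending only on which workspace instance each module is handed, which is part of why the difference seems thin at this level of abstraction.

```python
# Toy sketch only (all names hypothetical): a shared workspace vs. per-module
# workspaces, behind the same broadcast/read interface.

class Workspace:
    """Holds whatever has been 'broadcast' so far, for modules to read."""
    def __init__(self):
        self._contents = []

    def broadcast(self, item):
        self._contents.append(item)

    def read(self):
        return list(self._contents)


class Module:
    """A processing module that broadcasts into whichever workspace it's given."""
    def __init__(self, name, workspace):
        self.name = name
        self.workspace = workspace

    def process(self, percept):
        self.workspace.broadcast((self.name, percept))
        return self.workspace.read()


# A conventional "global workspace": all modules share one workspace.
shared = Workspace()
vision = Module("vision", shared)
planning = Module("planning", shared)
vision.process("red ball")
print(planning.process("grasp ball"))  # planning sees vision's broadcast

# Per-module workspaces: each module is "global" only to itself.
memory = Module("memory", Workspace())
print(memory.process("red ball"))      # sees only its own broadcasts
```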
Higher order theories suggest that consciousness depends on having representations of our own mental states. A creature could have all sorts of direct concerns that it never reflected on, and these could look a lot like ours.
I think this one is more plausible and easier to entertain, although still weird.
I think it means that if you asked the mother robot whether she cares about her child, she wouldn't say 'yes' (she might say 'no' or be confused). It seems the robots would all have complete alexithymia, and not just for emotions, but for all mental states, or at least all (the components of) mental states that could matter, e.g. valence, desires, preferences. But they'd still be intelligent and articulate. The mother would have no concept of desire, preference, caring, etc., or she'd be systematically unable to apply such concepts to herself, even though she might apply them to her child, e.g. she distinguishes her child from a "mere thing", and I imagine she recognizes that her child cares about things.
Or maybe it could depend on the particulars of what's required of a higher order representation according to the theory. The mother robot might have and apply a concept of desire, preference, caring, etc. to herself, but it's not the right kind of higher order representation.
IIT suggests that you could have a high level duplicate of a conscious system that was unconscious due to the fine grained details.
IIT is pretty panpsychist in practice, just needing recurrence, IIRC. I don't think you would have a complex society of intelligent robots without recurrence (networks of purely feedforward interactions would end up far too large, although the recurrence might extend beyond their brains). And at any rate, IIT seems way off track to me as a theory. So, my direct intuitive reaction to "IIT is true and the robots aren't conscious" will probably be unreliable.
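A rough way to see the "far too large" claim (toy numbers, nothing more): a recurrent module reuses the same parameters at every time step, while a purely feedforward unrolling needs a fresh copy per step, so its size grows with the longest behavior it has to support.

```python
# Toy back-of-the-envelope (numbers are made up): parameter count for a
# recurrent module vs. a purely feedforward unrolling over T steps.

def recurrent_params(params_per_step: int) -> int:
    # Same weights applied at every step, however long the interaction runs.
    return params_per_step

def unrolled_feedforward_params(params_per_step: int, num_steps: int) -> int:
    # No recurrence: each step of behavior needs its own copy of the weights.
    return params_per_step * num_steps

P = 1_000_000  # hypothetical parameters per processing step
for T in (10, 1_000, 100_000):
    print(f"T={T}: recurrent={recurrent_params(P):,}, "
          f"feedforward={unrolled_feedforward_params(P, T):,}")
```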
My impression was that you like theories that treat the mechanisms behind our judgments about the weirdness of consciousness as critical to conscious experience. I could imagine a robot just like us but totally non-introspective, lacking phenomenal concepts, etc. Would you think such a thing was conscious? Could it not desire things in something like the way we do?
There are a few “lines” that seem potentially morally significant to me as an illusionist:
1. As you mention, having and applying phenomenal concepts, or having illusions of phenomenal consciousness, e.g. finding aspects of our perceptions/information processing weird/mysterious/curious/ineffable (or unphysical, private and/or intrinsic, etc., although that's getting more specific, and there's probably more disagreement on this). I agree the robots could fail to matter in this way.
2. Having states that would lead to illusions of phenomenal consciousness or the application of phenomenal concepts to them, finding them weird/mysterious/curious, etc., if those states were introspected on by a sufficiently sophisticated system in the right way (even if the existing system is incapable of introspection; we consider a hypothetical in which another system is attached to do the introspecting). This is Frankish's and, I suspect, Dennett's normative interpretation of illusionism, and their views of consciousness are highly graded. Maybe just cognitive impenetrability suffices, if/because the cognitive impenetrability of the things we introspect is what makes them seem weird/mysterious/curious/ineffable to us.[1] I'd guess the robots would matter in this way.
3. The appearances of something mattering, in causal/functional terms — including desires, pleasure, unpleasantness, preferences, moral intuitions, normative beliefs, etc. — just are phenomenal illusions or (the application of) phenomenal concepts, or parts of phenomenal illusions or phenomenal concepts that matter even on their own. It's not just that consciousness seems weird (etc.), but that part of our phenomenal concepts for (morally relevant) conscious mental states is just that they seem to matter. And, in fact, it's the appearance of mattering that makes the mental states matter morally, not the apparent weirdness (etc.). We wouldn't care (much) about a person's specific experience of red unless they cared about it, too. An experience only matters morally in itself if it seems to matter to the individual, e.g. the individual takes a specific interest in it, or finds it pleasant, unpleasant, attractive, aversive, significant, etc. Furthermore, it's not important whether that "seeming to matter" applies to mental states in a higher-order way or "directly" to the intentional objects of mental states, as in the robots' desires; that's an arbitrary line.[2] The robots seem to matter in this way.
1 implies 2, and I suspect 3 implies 2, as well.
I also suspect we can't answer which of 1, 2 or 3 is (objectively, stance-independently) correct. It seems inherently normative and subjective (and I'm not a moral realist), although I've become pretty sympathetic to 3, basically for the reasons I give in 3. We could also go for a graded account of moral status, where 1, 2 and 3 each ground different degrees of moral status.
Humphrey, another illusionist, said "Consciousness matters because it is its function to matter". However, he's skeptical that animals other than mammals and birds are conscious. He thinks consciousness requires finding your own mental states/perceptions/sensations to matter, e.g. engaging in sensation-seeking or sensory play. On his view, such animals find their perceptions themselves interesting, not just the intentional objects of those perceptions. So it's higher order-ish.
[1] In defense of the necessity of the cognitive impenetrability of illusions of phenomenal consciousness, see Kammerer, 2022.