FWIW, I meant "How could they not be conscious?" kind of rhetorically, but I appreciate your response. Making it more concrete like this is helpful. My comment here is pretty object-level about the specific views in question, so feel free not to respond to it or any specific points here.
Global workspace theory (...)
There probably still need to be "workspaces", e.g. working memory (+ voluntary attention?), or else the robots couldn't do many sophisticated things flexibly, and whatever those workspaces are could be global workspaces. Maybe each module has its own workspace, so is "global" to itself, and that's enough. Or, if the workspaces are considered together as one combined system, then it could be a more conventional "global workspace", just distributed. The differences don't seem significant at this level of abstraction. Maybe they are, but I'd want to know why. So, my direct intuitive reaction to "GWT is true and the robots aren't conscious" could be unreliable, because the scenario is hard to entertain.
Higher order theories suggest that consciousness depends on having representations of our own mental states. A creature could have all sorts of direct concerns that it never reflected on, and these could look a lot like ours.
I think this one is more plausible and easier to entertain, although still weird.
I think it means that if you asked the mother robot whether she cares about her child, she wouldn't say "yes" (she might say "no" or be confused). It seems the robots would all have complete alexithymia, not just for emotions but for all mental states, or at least all (the components of) mental states that could matter, e.g. valence, desires, preferences. But they'd still be intelligent and articulate. The mother would have no concept of desire, preference, caring, etc., or she'd be systematically unable to apply such concepts to herself, even though she might apply them to her child, e.g. she distinguishes her child from a "mere thing", and I imagine she recognizes that her child cares about things.
Or, maybe it could depend on the particulars of what's required of a higher-order representation according to the theory. The mother robot might have and apply a concept of desire, preference, caring, etc. to herself, but it might not be the right kind of higher-order representation.
IIT suggests that you could have a high level duplicate of a conscious system that was unconscious due to the fine grained details.
IIT is pretty panpsychist in practice, just needing recurrence, IIRC. I don't think you would have a complex society of intelligent robots without recurrence (networks of purely feedforward interactions would end up far too large, but the recurrence might be extended beyond their brains). And at any rate, IIT seems way off track to me as a theory. So, my direct intuitive reaction to "IIT is true and the robots aren't conscious" will probably be unreliable.
My impression was that you favor theories that treat the mechanisms behind our judgments about the weirdness of consciousness as critical to conscious experience. I could imagine a robot just like us but totally non-introspective, lacking phenomenal concepts, etc. Would you think such a thing was conscious? Could it not desire things in something like the way we do?
There are a few "lines" that seem potentially morally significant to me as an illusionist:
1. As you mention, having and applying phenomenal concepts, or having illusions of phenomenal consciousness, e.g. finding aspects of our perceptions/information processing weird/mysterious/curious/ineffable (or unphysical, private and/or intrinsic, etc., although that's getting more specific, and there's probably more disagreement on this). I agree the robots could fail to matter in this way.
2. Having states that would lead to illusions of phenomenal consciousness, or to the application of phenomenal concepts to them (finding them weird/mysterious/curious, etc.), if those states were introspected on by a sufficiently sophisticated system in the right way, even if the existing system is incapable of introspection; we can consider a hypothetical in which another system is attached to do the introspecting. This is Frankish's, and I suspect Dennett's, normative interpretation of illusionism, and their views of consciousness are highly graded. Maybe just cognitive impenetrability suffices, if/because the cognitive impenetrability of the things we introspect is what makes them seem weird/mysterious/curious/ineffable to us.[1] I'd guess the robots would matter in this way.
3. The appearances of something mattering, in causal/functional terms (including desires, pleasure, unpleasantness, preferences, moral intuitions, normative beliefs, etc.), just are phenomenal illusions or (the application of) phenomenal concepts, or are parts of phenomenal illusions or phenomenal concepts that matter even on their own. It's not just that consciousness seems weird (etc.); part of our phenomenal concepts for (morally relevant) conscious mental states is just that they seem to matter. And, in fact, it's the appearance of mattering that makes the mental states matter morally, not the apparent weirdness (etc.). We wouldn't care (much) about a person's specific experience of red unless they cared about it, too. An experience only matters morally in itself if it seems to matter to the individual, e.g. the individual takes a specific interest in it, or finds it pleasant, unpleasant, attractive, aversive, significant, etc. Furthermore, it's not important whether that "seeming to matter" applies to mental states in a higher-order way or "directly" to the intentional objects of mental states, like in the robots' desires; that's an arbitrary line.[2] The robots seem to matter in this way.
1 implies 2, and I suspect 3 implies 2, as well.
I also suspect we can't answer which of 1, 2 or 3 is (objectively, stance-independently) correct. The question seems inherently normative and subjective (and I'm not a moral realist), although I've become pretty sympathetic to 3, basically for the reasons I give there. We could also go for a graded account of moral status, where 1, 2 and 3 each ground different degrees of moral status.
[1] In defense of the necessity of the cognitive impenetrability of illusions of phenomenal consciousness, see Kammerer, 2022.
[2] Humphrey, another illusionist, said "Consciousness matters because it is its function to matter". However, he's skeptical that animals other than mammals and birds are conscious. He thinks consciousness requires finding your own mental states/perceptions/sensations to matter, e.g. engaging in sensation-seeking or sensory play: such animals find their perceptions themselves interesting, not just the intentional objects of those perceptions. So it's higher-order-ish.