This case is interesting, but I think it touches on a slightly different issue. The asymbolic presumably doesn’t care about their pretend pain. There is a more complicated story about their actions that involves their commitment to the ruse. In the robot case, I assume we’re supposed to imagine that the robots care about each other to whatever extent that unconscious things can. Their motivational structure is close to ours.
I think the case is less clear if we build up the extent to which the asymbolic child really wants the painkillers. If they constantly worry about not getting them, if they are willing to sacrifice lots of other things they care about to secure them (even though they know that it won’t help them avoid pain), etc., then I’m less inclined to think the case is clear-cut.
In the robot case, I assume we’re supposed to imagine that the robots care about each other to whatever extent that unconscious things can.
I think so, but without more detail about what exactly they’re missing, my intuitive reaction is that they are conscious or reasonably likely to be conscious. It’s hard to trust or entertain the hypothetical. How could they not be conscious?
If you fill in the details in specific ways, then you might get different responses. If the robots are like today’s LLMs or a giant lookup table, then I’m inclined to say they aren’t really conscious to any significant degree: they’ve been designed (or assumed into existence) by Goodharting the behavioural outputs of conscious beings.
There’s another question about whether I’d actually dissect one, and maybe I still wouldn’t, but this could be for indirect or emotional reasons. It could still be very unpleasant or even traumatic for me to dissect something that cries out, and to do so against the desperate pleas of its mother. Or, it could be bad to become less sensitive to such responses, when such responses often are good indicators of risk of morally significant harm. People who were confident nonhuman animals don’t matter in themselves sometimes condemned animal cruelty for similar reasons.
Or, maybe the robots’ consciousness is very probably minimal, but still enough to warrant some care. This could be in line with how many people treat insects or spiders: they wouldn’t give up much to help them, but they might still take them outside when they find them indoors, or otherwise avoid killing them when the costs are very low.
If they constantly worry about not getting them, if they are willing to sacrifice lots of other things they care about to secure them (even though they know that it won’t help them avoid pain), etc., then I’m less inclined to think the case is clear-cut.
This could all follow from a great commitment to pretending to be capable of unpleasant pain like a typical person.
I guess if they’re subjectively worse off the less convincing they think they are to others, then they could be worse off upon finding out they won’t get painkillers, if and because that tells them they failed to convince you.
You could just lie and say there aren’t any painkillers available, but then this gets into the issue of whether they care about actually being convincing, or just believing they’re convincing (contact with reality, experience machine, etc.), and which of the two you care about on their behalf.
It is rare for theories of consciousness to make any demands on motivational structure.
Global workspace theory, for instance, says that consciousness depends on having a central repository through which different cognitive modules talk to each other. If the modules were to communicate directly, point to point, there would be no conscious experiences (by that theory). I see no reason in that case why decision making would have to rely on different mechanisms. (A toy sketch of this contrast follows the examples below.)
Higher order theories suggest that consciousness depends on having representations of our own mental states. A creature could have all sorts of direct concerns that it never reflected on, and these could look a lot like ours.
IIT suggests that you could have a high-level duplicate of a conscious system that was unconscious due to the fine-grained details.
Etc.
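To make the structural contrast in the global workspace example concrete, here is a minimal toy sketch in Python, purely for illustration: the names (`Module`, `broadcast_via_workspace`, `point_to_point`) are made up for this example, and nothing in global workspace theory specifies code like this. The point is only that the same modules can end up exchanging much the same signals either through a shared central repository that broadcasts to everyone, or through direct point-to-point messages.

```python
# Toy sketch only: illustrates the routing difference GWT points to, not the theory itself.

class Module:
    """A stand-in for a cognitive module that emits a signal and collects incoming messages."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def output(self):
        return f"{self.name}-signal"

    def receive(self, message):
        self.inbox.append(message)


def broadcast_via_workspace(modules):
    """GWT-style routing: modules post to a central workspace,
    which then broadcasts its contents back to every module."""
    workspace = [m.output() for m in modules]  # the shared central repository
    for m in modules:
        for message in workspace:
            m.receive(message)


def point_to_point(modules):
    """Direct routing: each module sends its signal straight to every other module,
    with no shared repository in between."""
    for sender in modules:
        for receiver in modules:
            if receiver is not sender:
                receiver.receive(sender.output())


vision, memory, planning = Module("vision"), Module("memory"), Module("planning")
broadcast_via_workspace([vision, memory, planning])
print(planning.inbox)  # planning ends up with much the same inputs under either routing scheme
```

By that theory, only the first wiring would involve conscious experience, even though the information each module ends up with, and so the decision making it can support, is much the same under either wiring.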
The specific things you need to change in the robots to render them not conscious depend on your theory, but I don’t think you need to go quite so far as to make them a lookup table or a transformer.
My impression was that you like theories that stress the mechanisms behind our judgments of the weirdness of consciousness as critical to conscious experiences. I could imagine a robot just like us but totally non-introspective, lacking phenomenal concepts, etc. Would you think such a thing was conscious? Could it not desire things in something like the way we do?
There’s another question about whether I’d actually dissect one, and maybe I still wouldn’t, but this could be for indirect or emotional reasons. It could still be very unpleasant or even traumatic for me to dissect something that cries out, and to do so against the desperate pleas of its mother. Or, it could be bad to become less sensitive to such responses, when such responses often are good indicators of risk of morally significant harm. People who were confident nonhuman animals don’t matter in themselves sometimes condemned animal cruelty for similar reasons.
This supports my main argument. If you value conscious experience, these emotional reasons could be concerning for the long-term future. It seems like a slippery slope from being nice to them because we find it more pleasant, to thinking that they are moral patients, particularly if we frequently interact with them. It is possible that our generation will never stop caring about consciousness, but if we’re not careful, our children might.
FWIW, I meant “How could they not be conscious?” kind of rhetorically, but I appreciate your response. Making it more concrete like this is helpful. My comment here is pretty object-level about the specific views in question, so feel free not to respond to it or any specific points here.
Global workspace theory (...)
There probably still need to be “workspaces”, e.g. working memory (+ voluntary attention?), or else the robots couldn’t do many sophisticated things flexibly, and whatever those workspaces are could be global workspaces. Maybe each module has its own workspace, so is “global” to itself, and that’s enough. Or, if the workspaces are considered together as one combined system, then it could be a more conventional “global workspace”, just distributed. The differences don’t seem significant at this level of abstraction. Maybe they are, but I’d want to know why. So, my direct intuitive reaction to “GWT is true and the robots aren’t conscious” could be unreliable, because it’s hard to entertain.
Higher order theories suggest that consciousness depends on having representations of our own mental states. A creature could have all sorts of direct concerns that it never reflected on, and these could look a lot like ours.
I think this one is more plausible and easier to entertain, although still weird.
I think it means that if you asked the mother robot if she cares about her child, she wouldn’t say ‘yes’ (she might say ‘no’ or be confused). It seems the robots would all have complete alexithymia, and not just for emotions, but for all mental states, or at least all (the components of) mental states that could matter, e.g. valence, desires, preferences. But they’d still be intelligent and articulate. The mother would have no concept of desire, preference, caring, etc., or she’d be systematically unable to apply such concepts to herself, even though she might apply them to her child, e.g. she distinguishes her child from a “mere thing”, and I imagine she recognizes that her child cares about things.
Or, maybe it could depend on the particulars of what’s required of a higher order representation according to the theory. The mother robot might have and apply a concept of desire, preference, caring, etc. to herself, but it’s not the right kind of higher order representation.
IIT suggests that you could have a high-level duplicate of a conscious system that was unconscious due to the fine-grained details.
IIT is pretty panpsychist in practice, just needing recurrence, IIRC. I don’t think you would have a complex society of intelligent robots without recurrence (networks of purely feedforward interactions would end up far too large, but the recurrence might be extended beyond their brains). And at any rate, IIT seems way off track to me as a theory. So, my direct intuitive reaction to “IIT is true and the robots aren’t conscious” will probably be unreliable.
My impression was that you like theories that stress the mechanisms behind our judgments of the weirdness of consciousness as critical to conscious experiences. I could imagine a robot just like us but totally non-introspective, lacking phenomenal concepts, etc. Would you think such a thing was conscious? Could it not desire things in something like the way we do?
There are a few “lines” that seem potentially morally significant to me as an illusionist:
1. As you mention, having and applying phenomenal concepts, or having illusions of phenomenal consciousness, e.g. finding aspects of our perceptions/information processing weird/mysterious/curious/ineffable (or unphysical, private and/or intrinsic, etc., although that’s getting more specific, and there’s probably more disagreement on this). I agree the robots could fail to matter in this way.
2. Having states that would lead to illusions of phenomenal consciousness or the application of phenomenal concepts to them, finding them weird/mysterious/curious, etc., if those states were introspected on by a sufficiently sophisticated system in the right way (even if the existing system is incapable of introspection; we consider a hypothetical in which another system is attached to do it). This is Frankish’s and I suspect Dennett’s normative interpretation of illusionism, and their views of consciousness are highly graded. Maybe just cognitive impenetrability suffices, if/because the cognitive impenetrability of the things we introspect is what makes them seem weird/mysterious/curious/ineffable to us.[1] I’d guess the robots would matter in this way.
3. The appearances of something mattering, in causal/functional terms — including desires, pleasure, unpleasantness, preferences, moral intuitions, normative beliefs, etc. — just are phenomenal illusions or (the application of) phenomenal concepts, or parts of phenomenal illusions or phenomenal concepts that matter even on their own. It’s not just that consciousness seems weird (etc.), but that part of our phenomenal concepts for (morally relevant) conscious mental states is just that they seem to matter. And, in fact, it’s the appearance of mattering that makes the mental states matter morally, not the apparent weirdness (etc.). We wouldn’t care (much) about a person’s specific experience of red unless they cared about it, too. An experience only matters morally in itself if it seems to matter to the individual, e.g. the individual takes a specific interest in it, or finds it pleasant, unpleasant, attractive, aversive, significant, etc. Furthermore, it’s not important that that “seeming to matter” applies to mental states in a higher-order way rather than “directly” to the intentional objects of mental states, like in the robots’ desires; that’s an arbitrary line.[2] The robots seem to matter in this way.
1 implies 2, and I suspect 3 implies 2, as well.
I also suspect we can’t answer which of 1, 2 or 3 is (objectively, stance-independently) correct. It seems inherently normative and subjective (and I’m not a moral realist), although I’ve become pretty sympathetic to 3, basically for the reasons I give in 3. We could also go for a graded account of moral status, where each of 1, 2 and 3 grounds a different degree of moral status.
[1] In defense of the necessity of the cognitive impenetrability of illusions of phenomenal consciousness, see Kammerer, 2022.
[2] Humphrey, another illusionist, said “Consciousness matters because it is its function to matter”. However, he’s skeptical animals other than mammals and birds are conscious. He thinks consciousness requires finding your own mental states/perceptions/sensations to matter, e.g. engaging in sensation-seeking or sensory play. Such animals find their perceptions themselves interesting, not just the intentional objects of those perceptions. So it’s higher order-ish.