Does Humphrey discuss his theory as an illusionist one in the book? My understanding is that he’s an illusionist and the theory he’s been working on is illusionist.[1] That seems like a pretty important part of his theory, but it might not be that important for his claim that only mammals and birds are conscious.
(FWIW, I think illusionism about consciousness is probably correct.)
It seems there are two broad (moral?) interpretations of illusionism (e.g. Frankish, 2021):
1. To be conscious, a physical system has to actually believe in the mysteriousness (or importance/mattering?) of what it’s processing. In other words, it would have to actually be subject to illusions of phenomenal consciousness.
2. To be conscious, if the right kind of system[2] were connected to the original system in the right way, that system would have to believe in (and report) the mysteriousness (or importance/mattering?) of what the combined system is processing.
1 implies 2, and it seems fewer systems could meet 1 than 2.
It seemed like Humphrey endorses something like 1. Graziano’s (illusionist) Attention Schema Theory seems between 1 and 2,[3] and he (2022) wrote “the components of what we call consciousness may be present in some form in a huge range of animals, including mammals, birds, and many nonavian reptiles”[4], although I’m not aware of him specifically denying consciousness to other animals. Related to this, and while not specifically illusionist, Key, Brown and Zalucki argue that molluscs (including octopuses), insects and fish don’t have internal state prediction networks for their own pain, i.e. they don’t model their own pain. Key (2014, 2016) argues that fish lack long-range feedback connections for pain processing (perhaps between certain structures specifically) and that the pain pathway is feedforward.
On the other hand, Frankish (2023, 2022, 2021) endorses 2. I’d guess Dennett endorses 2 (or neither?), because he’s confident in octopus and bee consciousness, but I’m not sure.
Although Frankish endorses 2 anyway, I suspect he’s too skeptical of other animals meeting something like 1, setting the bar too high for introspection and/or the kinds of beliefs that are required. He has a whole talk titled “Why We Can Know What It’s Like To Be a Bat and Bats Can’t”. Dennett might also set the bar too high; see Graziano’s response to him.[3]
I also lean towards 1, but possibly under a slightly different interpretation: I suspect the system just has to believe something matters. I might also have a low bar for what could count as a belief that something matters, but this seems vague. I think humans can believe things without stating their beliefs (in inner speech or externally; see Malcolm, 1973, and/or sections 1 and 4 of Schwitzgebel, 2019), and if that’s the case, it seems hard to justify the claim that insects, say, very likely don’t believe anything matters.
On the other hand, then we might end up having to recognize that humans often have (active) beliefs that something matters that we don’t typically recognize ourselves as being conscious of. And we might end up with a basically panpsychist (but possibly gradualist) view.
[1] Humphrey (2017) wrote, after contrasting realism and illusionism:
Still, which is right? No one yet knows for sure. But I’m not hiding which I hope is right. Although I myself have recently questioned the language of illusionism (Humphrey 2016b), I hope to see a resolution of the “hard problem” within the bounds of our standard world model.
Also, see this interview. (FWIW, Graziano (2016), also an illusionist, wrote: “I confess that I baulk at the term ‘illusionism’ because I think it miscommunicates”, and elaborates on this.)
[2] Presumably with some constraints on what the system can do.
[3] The attention schema could itself be the beliefs and include the illusions of consciousness. Graziano (2020a) wrote:
Therefore, in AST, just as animals “know” about their own bodies in some deep intuitive sense via their body schemas, they also “know” about a subjective experience inside of them (a detail-poor depiction of their attentional state) via an attention schema. They may, however, lack higher cognitive levels of reflection on those deeper models.
Dennett (2020) suggests that only humans need an attention schema and that dogs do not. I think perhaps the difference in opinion here relates to higher level and lower level models. Humans undoubtedly have layers of higher cognitive models, myths and beliefs and cultural baggage. Much of the ghost mythology that we discussed in our target article (Graziano et al., 2020) is presumably unique to humans, exactly as Dennett suggests. But in AST, many of these human beliefs stem from, or are cultural elaborations of, a deeper model that is built into us and many other animals – an intrinsic model of attention.
He uses quotes around the word ‘know’, so he might not mean these count as beliefs. Graziano (2020b) also wrote the following, which contrasts the attention schema (“automatic self-model (...)”) with our beliefs:
Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
[4] Before that, Graziano (2020a) wrote:
Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does.
Not absolutely sure, I’m afraid. I lent my copy of the book out to a colleague, so I can’t check.
Humphrey mentioned illusionism (page 80, according to Google Books), but iirc he doesn’t actually say his view is an illusionist one.
Personally I can’t stand the label “illusionism” because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all! But your definition is maybe much more mundane—there, the illusion is merely that consciousness is mysterious or important or matters. I wish the literature could use labels that are more specific.
And it seems like the version matters a great deal too. If consciousness really is an illusion, and none of us really have qualia—we’re all p-zombies programmed to believe we aren’t—then I have a hard time understanding the point of altruism or anything more than instrumental morality. But if we’re just talking about an illusion that consciousness is a mysterious, otherworldly thing, and somehow there really are qualia, then altruism feels like a meaningful life project to adopt.
Having read the whole of Humphrey’s book, I don’t think he explicitly said he was an illusionist, but perhaps his theory suggests it; I’m not sure. He didn’t really explain why exactly he thought, a priori, we should expect sensorimotor feedback loops to generate consciousness, just that they seem to do so empirically. Perhaps he cleverly sidestepped the issue. I think his theory could make sense whether you are an illusionist or not.
Personally I can’t stand the label “illusionism” because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all!
I think this is technically accurate, but illusionists don’t deny the existence of consciousness or claim that consciousness is an illusion; they deny the existence of phenomenal consciousness and qualia as typically characterized[1], and claim their appearances are illusions. Even Frankish, an illusionist, uses “what-it-is-likeness” in describing consciousness (e.g. “Why We Can Know What It’s Like To Be a Bat and Bats Can’t”), but thinks that should be formalized and understood in non-phenomenal (and instead physical-functional) terms, not as standard qualia.
The problem is that (classic) qualia and phenomenality have become understood as synonymous with consciousness, so denying them sounds like denying consciousness, which seems crazy.
If consciousness really is an illusion, and none of us really have qualia—we’re all p-zombies programmed to believe we aren’t—then I have a hard time understanding the point of altruism or anything more than instrumental morality.
Kammerer, 2019 might be of interest. On accounting for the badness of pain, he writes:
The best option here for the illusionist would probably be to draw inspiration from desire-satisfaction views of well-being (Brandt 1979; Heathwood 2006) or from attitudinal theories of valenced states (Feldman 2002), and to say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike. After all, when I am in pain, there is something awful which is that I want it to stop (and that my desire is frustrated); alternatively, one could insist on the fact that what is bad is that I dislike my pain. This frustration or this dislike are what makes pain a harm, which in turn grounds its negative value. This might be the most promising lead to an account of what makes pain bad.
This approach is also roughly what I’d go with. That being said, I’m a moral antirealist, and I think you can’t actually ground value stance-independently.
[1] “Classic qualia: Introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective.” (Frankish (video))
I think this is basically the standard definition of ‘qualia’, but Frankish adds ‘classic’ to distinguish it from Nagel’s ‘what-it-is-likeness’.
He didn’t really explain why exactly he thought, a priori, we should expect sensorimotor feedback loops to generate consciousness, just that they seem to do so empirically. Perhaps he cleverly sidestepped the issue. I think his theory could make sense whether you are an illusionist or not.
Makes sense.
say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike
I’m curious how, excluding phenomenal definitions, he defines “frustration of a desire” or “negative attitude of a dislike”, because I wonder whether these would include extremely simple frustrations, like preventing a computer-generated character in a computer game from reaching its goal. We could program an algorithm to try to satisfy a desire (“navigate through a maze to get to the goal square”) and then prevent it from doing so, or even add additional cruelty by making it form an expectation that it is about to reach its goal and then preventing it.
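To make that concrete, here’s a minimal sketch of such an algorithm (my own illustration, not anything from the comment or from Kammerer): a “desire” is a goal square, an “expectation” is the agent’s predicted steps remaining, and “frustration” is a counter we bump when the agent is blocked, more so the closer it expected to be to success. All names and rules here are hypothetical.

```python
# A toy agent on a 1-D corridor with a programmed "desire" (reach the goal
# square), a crude "expectation" (predicted steps remaining), and a
# "frustration" signal registered when we block it just short of the goal.

class MazeAgent:
    def __init__(self, position: int, goal: int):
        self.position = position      # current square on the corridor
        self.goal = goal              # the "desired" goal square
        self.frustration = 0.0        # crude stand-in for a negative state

    def expected_steps_left(self) -> int:
        # The agent's "expectation": a simple prediction of steps remaining.
        return abs(self.goal - self.position)

    def step(self, blocked: bool = False) -> None:
        if blocked:
            # Desire thwarted: frustration grows, and grows faster the
            # closer the agent "expected" to be to reaching the goal.
            self.frustration += 1.0 / (1 + self.expected_steps_left())
        elif self.position < self.goal:
            self.position += 1        # move one square toward the goal

agent = MazeAgent(position=0, goal=5)
for _ in range(10):
    # Let it approach, then block it one square short of the goal.
    agent.step(blocked=(agent.expected_steps_left() == 1))
print(agent.position, round(agent.frustration, 2))  # 4 3.0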
I share your moral antirealism, but I don’t think I could be convinced to care about preventing the frustration of that sort of simple desire. It’s the qualia-laden desire that seems to matter to me, but that might be irrational if it turns out qualia is an illusion. I think within anti-realism it still makes sense to avoid certain stances if they involve arbitrary inconsistencies. So if not qualia, I wonder what meaningful difference there is between a StarCraft AI’s frustrated desires and a human’s.
I think illusionists haven’t worked out the precise details, and that’s more the domain of cognitive neuroscience. I think most illusionists take a gradualist approach,[1] and would say it can be more or less the case that a system experiences states worth describing in terms like “frustration of a desire” or “negative attitude of a dislike”. And we can assign more moral weight the more true it seems.[2] (A toy sketch of this kind of grading follows the list below.)
We can ask about:
how the states affect them in lowish-order ways, e.g. negative valence changes our motivations (motivational anhedonia), biases our interpretations of stimuli and our attention, and has various physiological effects that we experience (or at least the specific negative emotional states do; these may differ by emotional state),
what kinds of beliefs they have about these states (or the objects of the states, e.g. the things they desire), to what extent they’re worth describing as beliefs, and the effects of these beliefs,
how else they’re aware of these states and in what relation to other concepts (e.g. a self-narrative), to what extent that’s worth describing as (that type of) awareness, and the effects of this awareness.
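One purely illustrative way to picture that kind of grading (the criterion names and numbers below are made-up assumptions, not anything from the illusionist literature):

```python
# A toy sketch of graded moral weighting: score how strongly a system seems
# to satisfy each criterion above (0 = clearly not, 1 = clearly yes), then
# average the scores into a single 0-1 degree of moral weight. This degree
# is separate from the intensity or strength of the states themselves.

def degree_of_moral_weight(scores: dict) -> float:
    """Average the graded criterion scores into one 0-1 degree."""
    return sum(scores.values()) / len(scores)

# Hypothetical scores for some system, one per question in the list above.
example = {
    "lowish-order effects of valence": 0.6,
    "belief-like states about the states": 0.2,
    "other awareness (e.g. self-narrative)": 0.05,
}
print(round(degree_of_moral_weight(example), 2))  # 0.28
```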
(Somewhat tangential.)
[1] Tomasik (2014-2017, various other writings here), Muehlhauser, 2017 (sections 2.3.2 and 6.7), Frankish (2023, 51:00-1:02:25), Dennett (Rothman, 2017; 2018, p.168-169; 2019; 2021, 1:16:30-1:18:00), Dung (2022) and Wilterson and Graziano, 2021.
[2] This is separate from their intensity or strength.