Not absolutely sure I’m afraid. I lent my copy of the book out to a colleague so I can’t check.
Humphrey mentioned illusionism (page 80, according to Google Books), but iirc he doesn’t actually say his view is an illusionist one.
Personally I can’t stand the label “illusionism” because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all! But your definition is maybe much more mundane—there, the illusion is merely that consciousness is mysterious or important or matters. I wish the literature could use labels that are more specific.
And it seems like the version matters a great deal too. If consciousness really is an illusion, and none of us really have qualia—we’re all p-zombies programmed to believe we aren’t—then I have a hard time seeing the point of altruism, or of anything more than instrumental morality. But if we’re just talking about an illusion that consciousness is some mysterious, otherworldly thing, and somehow there really are qualia, then altruism feels like a meaningful life project to adopt.
On the whole, having read Humphrey’s book, I don’t think he explicitly said he was an illusionist, but perhaps his theory suggests it; I’m not sure. He didn’t really explain why, a priori, we should expect sensorimotor feedback loops to generate consciousness, just that they seem to do so empirically. Perhaps he cleverly sidestepped the issue. I think his theory could make sense whether you are an illusionist or not.
Personally I can’t stand the label “illusionism” because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all!
I think this is technically accurate, but illusionists don’t deny the existence of consciousness or claim that consciousness is an illusion; they deny the existence of phenomenal consciousness and qualia as typically characterized[1], and claim their appearances are illusions. Even Frankish, an illusionist, uses “what-it-is-likeness” in describing consciousness (e.g. “Why We Can Know What It’s Like To Be a Bat and Bats Can’t”), but thinks that should be formalized and understood in non-phenomenal (and instead physical-functional) terms, not as standard qualia.
The problem is that (classic) qualia and phenomenality have become understood as synonymous with consciousness, so denying them sounds like denying consciousness, which seems crazy.
If consciousness really is an illusion, and none of us really have qualia—we’re all p-zombies programmed to believe we aren’t—then I have a hard time seeing the point of altruism, or of anything more than instrumental morality.
Kammerer (2019) might be of interest. On accounting for the badness of pain, he writes:
The best option here for the illusionist would probably be to draw inspiration from desire-satisfaction views of well-being (Brandt 1979; Heathwood 2006) or from attitudinal theories of valenced states (Feldman 2002), and to say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike. After all, when I am in pain, there is something awful which is that I want it to stop (and that my desire is frustrated); alternatively, one could insist on the fact that what is bad is that I dislike my pain. This frustration or this dislike are what makes pain a harm, which in turn grounds its negative value. This might be the most promising lead to an account of what makes pain bad.
This approach is also roughly what I’d go with. That being said, I’m a moral antirealist, and I think you can’t actually ground value stance-independently.
He didn’t really explain why, a priori, we should expect sensorimotor feedback loops to generate consciousness, just that they seem to do so empirically. Perhaps he cleverly sidestepped the issue. I think his theory could make sense whether you are an illusionist or not.
say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike
I’m curious how, excluding phenomenal definitions, he defines “frustration of a desire” or “negative attitude of dislike”, because I wonder whether these would include extremely simple frustrations, like preventing a computer-generated character in a video game from reaching its goal. We could program an algorithm to try to satisfy a desire (“navigate through a maze to get to the goal square”) and then prevent it from doing so, or even add additional cruelty by giving it an expectation that it is about to reach its goal and then thwarting it.
I share your moral antirealism, but don’t think I could be convinced to care about preventing the frustration of that sort of simple desire. It’s the qualia-laden desire that seems to matter to me, though that might be irrational if it turns out qualia are an illusion. I think that even within anti-realism it still makes sense to avoid stances that involve arbitrary inconsistencies. So if not qualia, I wonder what meaningful difference there is between a StarCraft AI’s frustrated desires and a human’s.
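To make the thought experiment concrete: this kind of purely functional “desire frustration” is trivially easy to implement. Below is a minimal toy sketch (the MazeAgent class and its “frustrated” label are my own hypothetical illustration, not anything from Kammerer or the literature): an agent wants to reach a goal cell on a 1-D track, forms an “expectation” when it is one step away, and is then blocked.

```python
# Toy illustration of a "frustrated desire" in a trivial agent.
# Its "desire" is reaching a goal cell; "frustration" is just the
# functional fact of goal-directed behavior being thwarted after
# an expectation of success has formed.

class MazeAgent:
    def __init__(self, position, goal):
        self.position = position
        self.goal = goal
        self.expected_success = False

    def step(self, blocked_cells):
        # Form an "expectation" when exactly one move from the goal.
        self.expected_success = abs(self.goal - self.position) == 1
        next_pos = self.position + (1 if self.goal > self.position else -1)
        if next_pos in blocked_cells:
            # The "cruel" case: expectation formed, then thwarted.
            return "frustrated" if self.expected_success else "blocked"
        self.position = next_pos
        return "reached" if self.position == self.goal else "moving"

agent = MazeAgent(position=0, goal=3)
history = [agent.step(blocked_cells={3}) for _ in range(4)]
print(history)  # the agent ends up repeatedly thwarted just short of its goal
```

If frustration is defined only in terms like these, the example plainly satisfies the definition, which is exactly what makes the question above pointed: the functional story alone doesn’t yet say why this agent’s frustration shouldn’t count.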
I think illusionists haven’t worked out the precise details, and that’s more the domain of cognitive neuroscience. I think most illusionists take a gradualist approach,[1] and would say it can be more or less the case that a system has states worth describing as “frustration of a desire” or “negative attitude of dislike”. And we can assign more moral weight the more apt those descriptions seem.[2]
We can ask about:
how the states affect them in lowish-order ways, e.g. negative valence changes our motivations (motivational anhedonia), biases our interpretations of stimuli and attention, has various physiological effects that we experience (or at least the specific negative emotional states do; they may differ by emotional state),
what kinds of beliefs they have about these states (or the objects of the states, e.g. the things they desire), to what extent they’re worth describing as beliefs, and the effects of these beliefs,
how else they’re aware of these states and in what relation to other concepts (e.g. a self-narrative), to what extent that’s worth describing as (that type of) awareness, and the effects of this awareness.
Makes sense.
“Classic qualia: Introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective.” (Frankish (video))
I think this is basically the standard definition of ‘qualia’, but Frankish adds ‘classic’ to distinguish it from Nagel’s ‘what-it-is-likeness’.
Tomasik (2014-2017; various other writings here), Muehlhauser (2017, sections 2.3.2 and 6.7), Frankish (2023, 51:00-1:02:25), Dennett (Rothman, 2017; 2018, pp. 168-169; 2019; 2021, 1:16:30-1:18:00), Dung (2022), and Wilterson and Graziano (2021).
This is separate from their intensity or strength.