Fair. I think the stronger arguments for (strong) illusionism are of the following form:
1. Physicalism seems true and dualism (including property dualism and epiphenomenalism) false for various reasons.
2. No other (physicalist) theory besides strong illusionism seems able to address the meta-problem of consciousness or even to be on the right path.
3. No theory has an adequate solution to the hard problem of consciousness, and some debates between them seem empirically unresolvable (e.g. where the line is between report(ability)/access and phenomenal consciousness), but every theory other than strong illusionism needs to solve it.
4. There are specific illusionist explanations of some posited phenomenal or classic qualia properties.
5. There don’t seem to be any strong arguments against illusionism (other than possibly the mere intuition that phenomenal consciousness is real).
To be clear, I’m leaving out all of the details, none of the above is obvious, and most or all of it is controversial. I think part of 3 isn’t controversial (no full solution yet, and non-illusionist theories need it).
On 4, ineffability and privacy seem easy to explain. First, we don’t yet know enough of the details of how our brains make the discriminations they do, so we can’t fully communicate or compare them in practice anyway. Second, even if I understood and could communicate how my brain makes the discriminations it does, this doesn’t allow you to put yourself in the same brain states or generally make the same discriminations in the same way. You could potentially build an AI that could, or modify your own brain accordingly, but this hasn’t been possible yet, and it wouldn’t really be “you” making those discriminations. I can’t subject you to my illusions just by explanation, so ineffability is true in practice. With a full enough description, though, we could compare, and privacy wouldn’t hold.
Also, I don’t take “intuitively obvious” to be a strong reason for belief, but I am unusually skeptical.
but every theory other than strong illusionism needs to solve [the hard problem].
I agree in the sense that other theories can’t simply dissolve it, but that’s almost tautological. If you mean that other theories need to solve it in order to justify belief in them, in other words that we would be forced to accept illusionism if we were all certain the hard problem would never be adequately resolved, then I don’t think that’s correct at all.
Consider what we might call “the hard problem of physics”: why this? Why anything? What puts the fire in the equations? Short of some galaxy-brained PSR (principle of sufficient reason) maneuver, which seems more and more dubious by the century, I doubt we’re ever going to get an answer. It is completely inexplicable that anything should exist.
And yet it does. It’s there, it’s obviously there, everything you’ve ever seen or felt or thought bears witness to it, and someone who claims otherwise on the grounds that it doesn’t make any sense has entirely misunderstood the nature of their situation.
This is also how I feel about illusionism. Phenomenal experience is the only thing we have direct access to: all arguments, all inferences, all sense data, ultimately cash out in some regularity in the phenomenal content of consciousness. Whatever its ontological status, it’s the epistemic ground of everything else. You can’t justify the claim that phenomenal consciousness doesn’t exist by pointing to patterns of phenomena, any more than you can demonstrate the nonexistence of language in an essay or offer a formal disproof of modus ponens.
So these illusionist explanations are, well, not really explanations of consciousness. They’re explanations of a coarse world model in terms of a finer one, but the coarse world model wasn’t the thing I wanted explained. On the contrary, it was a tiny provisional step towards an explanation: there are many lawlike regularities in the structure of my experiences, so I hypothesize a common cause and call it “my brain”. It’s a very successful hypothesis, and I’d like to know why—given that the world is more than just its shadow on the mind[1], why should the structure of one reflect the other?
The illusionist response of “actually your hypothesis is the evidence and your data are just hypotheses” misses the point entirely.

[1] the dumbest possible solution, but I can’t rule it out
The analogy to the “hard problem of physics” is interesting, and my stance towards the problem is the same as yours.
However, I don’t think the analogy really works.
This is also how I feel about illusionism. Phenomenal experience is the only thing we have direct access to: all arguments, all inferences, all sense data, ultimately cash out in some regularity in the phenomenal content of consciousness. Whatever its ontological status, it’s the epistemic ground of everything else.
Is phenomenality itself necessary/on the causal path here? Illusionists aren’t denying consciousness, that it has contents, that there’s regularity in its contents or that it’s the only thing we have direct access to. Illusionists are just denying the phenomenal nature of consciousness or phenomenal properties. I would instead say, more neutrally:
Experience (whatever it is) is the only thing we have direct access to: all arguments, all inferences, all sense data, ultimately cash out in some regularity in the content of consciousness (whatever it is). Whatever its ontological status, it’s the epistemic ground of everything else.
Note also that the information in or states of a computer (including robots and AIs) also play a similar role for the computer. And a computer program can’t necessarily explain how it does everything it does. “Ineffability” for computers, as for us, could just be cognitive impenetrability: some responses and contents are just wired in, and their causes are not accessible to (certain levels of) the program. For “us”, everything goes through our access consciousness.
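As a toy sketch of what I mean by cognitive impenetrability (purely illustrative Python; the Discriminator/Reporter split is hypothetical, not a claim about brains or any real architecture): the “reporting” level can access what was discriminated, but not the machinery that settled the matter.

```python
# Toy sketch of cognitive impenetrability. Purely illustrative;
# not a claim about brains or any actual cognitive architecture.

class Discriminator:
    """Low-level machinery with wired-in weights."""

    def __init__(self):
        self._weights = [0.9, -0.3, 0.4]  # the "wiring"

    def discriminate(self, stimulus):
        score = sum(w * x for w, x in zip(self._weights, stimulus))
        return "red" if score > 0 else "blue"


class Reporter:
    """Higher level: it only ever receives the output of the
    discrimination, never the weights or the computation behind it."""

    def __init__(self, discriminator):
        self._discriminator = discriminator

    def report(self, stimulus):
        label = self._discriminator.discriminate(stimulus)
        return f"It just looks {label} to me; I can't say why."


reporter = Reporter(Discriminator())
print(reporter.report([1.0, 0.2, -0.1]))  # "It just looks red to me; ..."
```

The “ineffability” here is nothing over and above the reporter’s lack of access to the discriminator’s internals.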
So, what exactly do you mean by phenomenality, and what’s the extra explanatory work phenomenality is doing here? What isn’t already explained by the discriminations and responses of our brains, by non-phenomenal (quasi-phenomenal) states, or by physics generally?
If you define phenomenality just by certain physical states, effects or responses, or functionalist or causal abstractions thereof, say, then I think you’d be defining away phenomenality, i.e. “zero qualia” according to Frankish (paper, video).
Is phenomenality itself necessary/on the causal path here?
I have no idea what the causal path is, or even whether causation is the right conceptual framework here. But it has no bearing on whether phenomenal experiences exist: they’re particular things out there in the world (so to speak), not causal roles in a model.
Note also that the information in or states of a computer (including robots and AIs) also play a similar role for the computer.
It plays a similar role, for very generous values of “similar”, in the computer qua physical system, sure. And I am perfectly happy to grant that “I” qua human organism am almost certainly a causally closed physical system like any other. (Or rather, the joint me-environment system is.) But that’s not what I’m talking about.
For “us”, everything goes through our access consciousness.
I’m not talking about access consciousness either! That’s just one particular sort of mental state in a vast landscape. The existence of the landscape—as a really existing thing with really existing contents, not a model—is the heart of the mystery.
what’s the extra explanatory work phenomenality is doing here?
My whole point is that it doesn’t do explanatory work, and expecting it to is a conceptual confusion. The sun’s luminosity does not explain its composition, the fact that looking at it causes retinal damage does not explain its luminosity, the firing of sensory nerves does not explain the damage, and the qualia that constitute “hurting to look at” do not explain the brain states which cause them.
Phenomenality is raw data: the thing to be explained. Not what I do, not what I say, not the exact microstate of my brain, not even the structural features of my mind—but the stuff being structured, and the fact there is any.
If you define phenomenality just by certain physical states, effects or responses, or functionalist or causal abstractions thereof
I don’t define phenomenality! I point at it. It’s that thing, right there, all the time. The stuff in virtue of which all my inferential knowledge is inferential knowledge about something, and not just empty formal structure. The relata which introspective thought relates[1]. The stuff at the bottom of the logical positivists’ glass. You know, the thing.
And again, I am only pointing at particular examples, not defining or characterizing or even trying to offer a conceptual prototype: qualia need not have anything to do with introspection, linguistic thought, inference, or any other sort of higher cognition. In particular, “seeing my computer screen” and “being aware of seeing my computer screen” are not the same quale.
But it seems to me that phenomenal aspects themselves aren’t the raw data by which we know things. If you accept the causal closure of the physical, then the non-phenomenal aspects of our discriminations and cognitive responses are already enough to explain how we know things, or else the phenomenal aspects just are physical aspects (possibly abstracted to functions or dispositions), which would be consistent with illusionism.
Or, do you mean that knowing itself is not entirely physical?
I think the causal closure of the physical is very, very likely, given the evidence. I do not accept it as axiomatic. But if it turns out that it implies illusionism, i.e. that it implies the evidence does not exist, then it is self-defeating and should be rejected.
Or, do you mean that knowing itself is not entirely physical?
I am referring to my phenomenology, not (what I believe to be) the corresponding behavioral dispositions. E.g. so far as I know my visual field can be simultaneously all blue and all dark, but never all blue and all red. We have a clear path towards explaining why that would be true, and vague hints that it might be possible to explain why, given that it’s true, I can think the corresponding thoughts and say the corresponding words. But explaining how I can make that judgement is not an explanation of why I have visual qualia to begin with.
Whether these are also physical in some broader sense of the word, I can’t say.
The argument is basically saying that if X can’t be explained by physicalism, then X is an illusion. That’s treating physicalism as unfalsifiable.

No, it isn’t just saying that. That understates the case for both physicalism and illusionism that I outlined.
We have good independent reasons to believe physicalism and to reject its alternatives. I mentioned this but didn’t give examples, so here are some:
There’s the good empirical track record of physicalism in general, and specifically its record of giving adequate explanations for the seemingly nonphysical.
There are the questions of where, when, how and why nonphysical properties arise (whether from or with a collection of particles in a system, over a human’s development from conception, or in our evolutionary history), questions that nonphysicalist theories struggle to answer sensibly. If the nonphysical is instead fundamental and present at all levels (panpsychism), then we face the combination problem: how does the nonphysical combine to make minds like ours?
There’s the expansion of the physical to include what’s empirically reliable and testable to very high precision and for which we have precise fundamental accounts, including interactions with other fundamental physical properties (although not necessarily all such interactions, e.g. we don’t yet have a good theory of quantum gravity). For example, gravity, quantum superposition and quantum entanglement might have seemed unphysical before, but they’ve become part of our physical ontology because of their reliability and our very good (but incomplete) understanding of them and their relationships with other things. Of course, maybe the seemingly nonphysical properties of minds will eventually come to gain the same status, but it’s nowhere close to that now. We shouldn’t be hasty to assume the existence of things that don’t meet this bar, because the evidence for them is far weaker.
The illusionist also argues (or would want to, but currently lacks the details to make it very convincing) that there’s a specific, adequate (physicalist) explanation for the appearance of X that doesn’t require the existence of X. If the appearance of X doesn’t depend on its existence, then the appearance of X isn’t reliable evidence for its existence. Without any other independent argument for the existence of X (as seems to be the case for phenomenality and classic qualia), it becomes like any other verified illusion, and our reasons to believe in X become very weak.
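To make the evidential structure concrete, here’s a minimal Bayes sketch, with made-up numbers chosen purely for illustration: if the appearance of X is about as likely whether or not X exists, observing the appearance barely moves the posterior.

```python
# Minimal Bayes sketch: how much does "the appearance of X" support "X exists"?
# All numbers are made up, purely for illustration.

def posterior(prior, p_appearance_given_x, p_appearance_given_not_x):
    """P(X | appearance), by Bayes' rule."""
    numerator = p_appearance_given_x * prior
    evidence = numerator + p_appearance_given_not_x * (1 - prior)
    return numerator / evidence

# If the appearance strongly depends on X existing, it's strong evidence:
print(posterior(0.5, 0.95, 0.05))  # 0.95

# If an illusionist-style account makes the appearance roughly as likely
# without X, the appearance is barely evidence at all:
print(posterior(0.5, 0.95, 0.90))  # ~0.51
```

So the weight the appearance carries depends entirely on how likely the appearance would be without X, which is exactly what the illusionist explanation targets.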