Thus, we would need to be open to the possibility that certain interventions could cause a change in a system’s physical substrate (which generates its qualia) without causing a change in its computational level (which generates its qualia reports).
It seems like this means that empirical tests (e.g. neuroscience experiments) aren’t going to help test the aspects of the theory that concern divergence between computational pseudo-qualia (the things people report on) and actual qualia. If I squint a lot I could see “anthropic evidence” being used to distinguish between pseudo-qualia and qualia, but it seems like nothing else would work.
I’m also not sure why we would expect pseudo-qualia to have any correlation with actual qualia. I guess you could make an anthropic argument (we’re viewing the world from the perspective of actual qualia, and our sensations seem to match the pseudo-qualia). That would give someone the suspicion that there’s some causal story for why the two are synchronized, without directly providing such a causal story.
(For the record, I think anthropic reasoning is usually confused and should be replaced with decision-theoretic reasoning (e.g. see this discussion), but this seems like a topic for another day.)
Yes, the epistemological challenges of distinguishing between ground-truth qualia and qualia reports are worrying. However, I don’t think they’re completely intractable, because there is a causal chain (from Appendix C):
Our brain’s physical microstates (perfectly correlated with qualia) -->
The logical states of our brain’s self-model (systematically correlated with our brain’s physical microstates) -->
Our reports about our qualia (systematically correlated with our brain’s model of its internal state)
...but there could be substantial blind spots, especially in contexts where there was no adaptive benefit to having accurate systematic correlations.
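To make the blind-spot worry concrete, here is a minimal toy sketch of that three-stage chain in Python. Everything in it (the function names, the coarse-graining step, the numbers) is an illustrative assumption of mine, not anything from Appendix C: qualia track the substrate exactly, the self-model only coarse-grains the substrate, and reports are read off the self-model.

```python
# Toy model of the three-stage causal chain (illustrative assumptions only).
# Stage 1: physical microstate -> qualia (perfect correlation, by assumption)
# Stage 2: microstate -> self-model's logical state (lossy coarse-graining)
# Stage 3: self-model -> verbal report (deterministic readout)

def qualia(microstate: float) -> float:
    """Ground-truth qualia: a direct function of the physical microstate."""
    return microstate  # perfectly correlated, per the chain's first link

def self_model(microstate: float) -> int:
    """The brain's self-model only coarse-grains the microstate, so
    fine-grained substrate differences are invisible to it."""
    return round(microstate)  # lossy compression = a potential blind spot

def report(model_state: int) -> str:
    """Qualia reports are generated from the self-model, not from the
    substrate directly."""
    return f"I am experiencing intensity {model_state}"

# A substrate-level intervention below the self-model's resolution
# changes the qualia but not the report:
before, after = 0.4, 0.1
print(qualia(before), "->", report(self_model(before)))  # 0.4 -> intensity 0
print(qualia(after), "->", report(self_model(after)))    # 0.1 -> same report
```

The `round()` step is where the chain’s second link can fail: any intervention below the self-model’s resolution is, by construction, unreportable, which is exactly the kind of divergence between qualia and pseudo-qualia discussed above.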