We trust human self-reports about consciousness, which makes them an indispensable tool for understanding the basis of human consciousness (“I just saw a square flash on the screen”; “I felt that pinprick”).
I want to clarify that these are examples of self-reports about consciousness and not evidence of consciousness in humans. A p-zombie would be able to report these stimuli without subjective experience of them.
They are “indispensable tools for understanding” insofar as we already have a high credence in human consciousness.
Self-report is evidence of consciousness in Bayesian sense (and in common parlance): in a wide range of scenarios, if a human says they are conscious of something, you should have a higher credence than if they do not say they are. And in the scientific sense: it’s commonly and appropriately taken as evidence in scientific practice; here is Chalmers’s “How Can We Construct a Science of Consciousness?” on the practice of using self-reports to gather data about people’s conscious experiences:
Of course our access to this data depends on our making certain assumptions: in particular, the assumption that other subjects really are having conscious experiences, and that by and large their verbal reports reflect these conscious experiences. We cannot directly test this assumption; instead, it serves as a sort of background assumption for research in the field. But this situation is present throughout other areas of science. When physicists use perception to gather information about the external world, for example, they rely on the assumption that the external world exists, and that perception reflects the state of the external world. They cannot directly test this assumption; instead, it serves as a sort of background assumption for the whole field. Still, it seems a reasonable assumption to make, and it makes the science of physics possible. The same goes for our assumptions about the conscious experiences and verbal reports of others. These seem to be reasonable assumptions to make, and they make the science of consciousness possible.
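The Bayesian sense of "evidence" invoked above can be made concrete with a toy calculation. This is only an illustrative sketch: all the probability values below are hypothetical, chosen to show the direction of the update, not to estimate anything. The point is that a self-report raises your credence whenever the report is more likely given consciousness than without it, even if a p-zombie could emit the same report.

```python
def posterior(prior, p_report_given_c, p_report_given_not_c):
    """Credence in consciousness after hearing a self-report (Bayes' rule)."""
    # P(report) = P(report|conscious)P(conscious) + P(report|not)P(not)
    evidence = prior * p_report_given_c + (1 - prior) * p_report_given_not_c
    return prior * p_report_given_c / evidence

prior = 0.95                # hypothetical prior credence that the subject is conscious
p_report_given_c = 0.9      # conscious subjects usually report their experiences
p_report_given_not_c = 0.5  # a p-zombie might still emit such reports anyway

updated = posterior(prior, p_report_given_c, p_report_given_not_c)
print(round(updated, 4))    # prints 0.9716 -- higher than the 0.95 prior
```

As long as the first likelihood exceeds the second, the posterior exceeds the prior, which is all "self-report is evidence" requires.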
I suppose it’s true that self-reports can’t budge someone from the hypothesis that other actual people are p-zombies, but few people (if any) think that. From the SEP:
Few people, if any, think zombies actually exist. But many hold that they are at least conceivable, and some that they are possible.... The usual assumption is that none of us is actually a zombie, and that zombies cannot exist in our world. The central question, however, is not whether zombies can exist in our world, but whether they, or a whole zombie world (which is sometimes a more appropriate idea to work with), are possible in some broader sense.
So yeah: my take is that no one, including anti-physicalists who discuss p-zombies like Chalmers, really thinks that we can’t use self-report as evidence, and correctly so.
Thanks for following up and thanks for the references! Definitely agree these statements are evidence; I should have been more precise and said that they’re weak evidence / not likely to move your credences in the existence/prevalence of human consciousness.
This is true of literally all empirical evidence if you accept the possibility of a p-zombie. The only possible falsification of consciousness can come from the internal subject itself; nothing else will do. But for everyone apart from you, it’s self-reports, third-party observation, or nothing.
Edit: What I mean here is that these self-reports are evidence—if they’re not then there’s no evidence for any minds apart from your own. And therefore we also ought to take AI self-reports as evidence. Not as serious as we take human self-reports at this stage, but evidence nonetheless.
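The distinction running through this exchange, evidence versus weak evidence, can be put in terms of likelihood ratios. The sketch below is purely illustrative with hypothetical numbers: a likelihood ratio well above 1 (as one might assign a human self-report) moves credence a lot, while a ratio only slightly above 1 (as one might assign an AI self-report at this stage) still counts as evidence but barely moves it.

```python
def posterior(prior, likelihood_ratio):
    """Posterior credence from prior and likelihood ratio
    P(report | mind) / P(report | no mind), via odds form of Bayes' rule."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Hypothetical: starting from an agnostic 0.5 prior about some other mind.
strong = posterior(0.5, 10.0)  # human-like report, ratio 10
weak = posterior(0.5, 1.1)     # AI report treated as weak evidence, ratio 1.1
print(round(strong, 3))        # prints 0.909
print(round(weak, 3))          # prints 0.524
```

Both outputs exceed the prior, so both reports are evidence in the Bayesian sense; the disagreement above is only about the size of the ratio, not about whether it exceeds 1.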