I’m unsure whether we can in principle ascertain whether a digital mind is conscious.
I used to believe this too, but I’ve actually become quite optimistic that this is a tractable question. If, as David Pearce argues, the laws of physics don’t break down inside the brain, and we define the constraints that a satisfactory theory of consciousness should fulfill, then I believe we can make very good progress. For instance, one such constraint is that the theory should solve the binding problem (see also here). So if we cannot observe a binding mechanism in digital computers, or explain how digital computers could in principle give rise to binding (as I and others at QRI argue), then that should be very strong evidence against digital computers being conscious. More in: Digital Sentience: Can Digital Computers Ever “Wake Up”?
(Note that this doesn’t rule out other forms of artificial, non-digital consciousness!)
Cheers, and thanks for all the links!

I read the binding problem section of your post; it’s really interesting and something I’ve thought about before, but without a nice phrase to bind to it.

From an initial read, I’m pretty sceptical that the binding problem is something we can talk about, experiment on, or introspect on. I.e., it feels kind of impossible for me to answer to what extent my various experiences at this moment are bound together, and what would be different if they weren’t bound but were instead integrated into the information I’m acting on in some other way. Is there a reason to think a “binding mechanism” is an easier thing to investigate than “consciousness”? A valid answer to that question is: please read my post in full, lol.
I get what you mean! Here are a few very quick thoughts—hope they make sense!
Maybe one challenge around introspecting on binding is that one typically has to be in altered or pathological states of consciousness to experience what it’s like for binding to break down. For instance, under some psychedelics, the color of an object can “bleed out” of its boundaries.
In normal waking consciousness, we just take binding for granted. We have no problem recognizing that the eyes, nose, mouth, etc. of the person in front of us are “glued” together into a unified gestalt that we can make sense of (and e.g. recognize as our friend). Some people with integrative agnosia can’t do that.
As David Pearce says, no binding = no mind. Without binding, there would only be “qualia dust” floating around. It makes sense that evolution would recruit bound experiences to make sense of the external world. Ethically, we should care about bound experiences.
Binding is also the reason why you can’t do your taxes on a high dose of LSD (though maybe altered states of consciousness can be used for other types of computation).
Happy to chat about this more in person one of these days! I just moved to London and plan to visit Oxford every now and then. ☺️
Thanks for sharing your thoughts!
Awesome, I’d love to chat when you’re around in Oxford!