Thanks for this! I’d be curious to hear what you think about the arguments against computational functionalism put forward by Anil Seth in this paper: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A
That paper is long and kind of confusing but from skimming for relevant passages, here is how I understood its arguments against computational functionalism:
Section 3.4: Human brains deviate from Turing machines in that brain states require energy to be maintained and Turing machines are “immortal”. [And I guess the implication is that this is evidence for substrate dependence? But I don’t see why.]
Section 3.5: Brains might violate informational closure, which basically means that the computations a brain performs might depend on the substrate on which they are performed, which would be evidence that AIs aren’t conscious. [I found this section confusing, but if I understood it correctly, it seems unlikely to me that brains violate informational closure.]
Section 3.6: AI can only be conscious if computational functionalism is true. [That sounds false to me. It could be that some other version of functionalism is true, or panpsychism is true, or perhaps identity theory is true but that both brains and transistors can produce consciousness, or perhaps even dualism is true and AIs are endowed with dualistic consciousness somehow.]
I didn’t understand these arguments very well, but I also didn’t find them compelling. I think the China-brain argument is much stronger, although I don’t find it persuasive either. If you’re talking to a black box that contains either a human or a China-brain, there is no test you can perform to distinguish the two. So if the human can say things to you that convince you it’s conscious, then you should also be convinced that the China-brain is conscious.