(EDIT: Split this up into two comments, the other here.)
I think that there’s probably a minimum level of substrate independence we should accept, e.g. that it doesn’t matter exactly what matter a “brain” is made out of, as long as the causal structure is similar enough at a fine enough level. The mere fact that neurons are largely made out of carbon doesn’t seem essential. Furthermore, human and (apparently) conscious animal brains are noisy and vary substantially from one another, so exact duplication of the causal structure doesn’t seem necessary, as long as the errors don’t accumulate to the point that the result no longer resembles a plausible state of a plausible conscious biological brain.[1] So, I’m inclined to say that we could replace biological neurons with artificial neurons and retain consciousness, at least in principle, though it could depend on the artificial neurons.
It’s worth pointing out that the China brain[2] and a digital mind (or digital simulation of a mind, on computers like today’s) aren’t really causally isomorphic to biological brains even if you ignore a lot of the details of biological brains. Obviously, you also have to ignore a lot of the details of the China brain and digital minds. But I could imagine that the extra details in the China brain and digital minds make a difference.
Both in the China brain and in a digital mind, a simulated neuron has extra details to ignore. In the China brain, that’s all the stuff happening inside each person simulating a neuron. For a digital mind, there’s probably lots of extra hardware activity going on underneath.
In a digital mind on a computer like today’s, or even one distributed across hundreds of computers or processing units (CPU cores, GPUs), it seems you must ignore the fact that the digital state transitions are orchestrated centrally “from the outside”, today through some kind of loop (e.g. a for-loop or while-loop, or some number of these with some asynchronous distribution). Individual biological neurons act relatively autonomously and asynchronously, just in response to local neural activity (including electrical and chemical signals), without this kind of external central orchestration. In fact, if you were to ignore the centralized orchestration in a digital mind, then depending on how you cash that out, the digital mind might never change states, so maybe the digital mind isn’t actually isomorphic to a biological brain at the right level(s) of causal structure for each at all.
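To make the orchestration point concrete, here’s a toy sketch (my own, not from any actual digital mind proposal; the `Neuron` class, weights, and thresholds are hypothetical). The point is that no simulated neuron ever changes state on its own: all state transitions are driven by the central loop at the bottom, and if that loop never runs, the “mind” is frozen.

```python
# Toy illustration: simulated neurons that only change state when a
# central loop calls step() on them "from the outside". The model and
# numbers are made up purely to illustrate centralized orchestration.

class Neuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0
        self.fired = False
        self.inputs = []  # upstream neurons

    def step(self):
        # Accumulate input from upstream neurons that fired last step.
        self.potential += sum(0.6 for n in self.inputs if n.fired)
        self.fired = self.potential >= self.threshold
        if self.fired:
            self.potential = 0.0  # reset after firing

# A tiny chain: n0 -> n1 -> n2.
n0, n1, n2 = Neuron(), Neuron(), Neuron()
n1.inputs.append(n0)
n2.inputs.append(n1)
n0.fired = True  # hold an external stimulus on

# The centralized orchestration: without this loop, nothing happens.
for tick in range(4):
    for n in (n1, n2):
        n.step()
# n2 eventually fires, but only because the loop drives every update.
```

A biological neuron, by contrast, has no analogue of the outer for-loop: each cell updates autonomously in response to local signals, which is the disanalogy the paragraph above is pointing at.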
These extra details make me less sure that we should attribute consciousness to the China brain and digital minds, but they don’t seem decisive.
From footnote 4 from Godfrey-Smith, 2023 (based on the talk he gave):
At the NYU talk, Chalmers raised a passage from The Conscious Mind (p. 331) where he claims, in relation to replacement scenarios, that “when it comes to duplicating our cognitive capacities, a close approximation is as good as the real thing.” His argument is that in biological systems, random “noise” processes play a role (greater than the role of any analogous processes in a computer). When the biological system performs some operation, the outcome is never entirely reliable and will instead fall within a band of possibilities. An artificial duplicate of the biological system only has to give a result somewhere in that band. The duplicate’s output might depart from what the biological system actually does, on some occasion, but the biological system could just as well have produced the same output as the duplicate, if noise had played a different role. When a duplicate gives a result within the band, it is doing “as well as the system itself can reliably do.”
In response, it is true that this role for noise is an important micro-functional feature of living systems. In addition, neurons change what they do as a result of their normal operation, they don’t respond to the “same” stimulus twice in the same way (see “Mind, Matter, and Metabolism” for references). The “rules” or the “program” being followed are always changing as a result of the activity of the system itself and its embedding in other biological processes. Over time, the effects of these factors will accumulate and compound – a comparison of what a living system and a duplicate might do in a single operation doesn’t capture their importance. I see all this not as a “lowering of the bar” that enables us to keep talking in a rough way about functional identity, but another functional difference between living and artificial systems.
From the Wikipedia page:
The China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?
(China’s population, at 1.4 billion, isn’t large enough for each person to only simulate one neuron and so simulate a whole human brain with >80 billion neurons, but we could imagine a larger population, or a smaller animal brain being simulated, e.g. various mammals or birds.)
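The arithmetic behind that parenthetical, as a rough sketch (the ~86 billion figure is a common estimate of the human brain’s neuron count, not a number from the comment):

```python
# Back-of-the-envelope check of the footnote's point. Figures are
# approximate: ~86 billion neurons is a common estimate for a human
# brain; China's population is ~1.4 billion.
human_neurons = 86e9
china_population = 1.4e9

# Each person would have to simulate dozens of neurons, not just one.
neurons_per_person = human_neurons / china_population
print(round(neurons_per_person))  # prints 61
```

So a one-neuron-per-person simulation would indeed need either a far larger population or a much smaller brain (many mammals and birds have well under a billion neurons).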