I don't think I fully understand exactly what you are arguing for here, but would be interested in asking a few questions to help me understand it better, if you're happy to answer?
1. If the human brain operates according to the known laws of physics, then in principle we could simulate it on a modern computer, and it would behave identically to the real thing (i.e. would respond in the same way to the same stimuli, and claim to see a purple ball with grandma's face on it if given simulated LSD). Would such a brain simulation have qualia according to your view? Yes, no, or you don't think the brain operates according to known laws of physics?
2. If (1) is answered no, what would happen if you gradually replaced a biological brain with a simulated brain bit by bit, replacing sections of the cells one at a time with a machine running a simulation of its counterpart? What would that feel like for the person? Their consciousness would slowly be disappearing but they would not outwardly behave any differently, which seems very odd.
3. If (1) is answered yes, does that mean that whatever this strange property of the EM field is, it will necessarily be possessed by the inner workings of the computer as well, when this simulation is run?
4. If (3) is answered yes, what if you instead ran the simulation with pencil and paper rather than an electronic computer? Would that simulated brain have qualia? You can execute any computer program with pencil and paper (using the paper as memory and carrying out the necessary instructions yourself with the pencil) if you have enough time; a toy sketch of this idea follows after the list. But it seems much clearer here that there will be nothing unusual happening in the EM field when you do this simulation.
5. If all the fields of physics are made of qualia, then everything is made of qualia, including the electron field, the quark fields, etc.?
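To make the pencil-and-paper claim in (4) concrete, here is a minimal toy sketch (the three-instruction machine and its names are invented for illustration): a register machine whose entire state is a few tally counts on "paper," where every step is a rule simple enough to apply by hand with a pencil. Machines of this increment/decrement/jump-if-nonzero kind are Turing-complete given enough cells, which is what grounds the "any program, given enough time" claim.

```python
# A toy register machine whose entire state is marks on "paper": a few
# named cells plus a program counter. Every step is a rule simple enough
# to carry out by hand with a pencil.

def run(program, paper):
    pc = 0  # program counter: which instruction to apply next
    while pc < len(program):
        op, cell, *target = program[pc]
        if op == "inc":                    # add one tally mark to a cell
            paper[cell] += 1
        elif op == "dec":                  # erase one tally mark
            paper[cell] -= 1
        elif op == "jnz" and paper[cell] != 0:
            pc = target[0]                 # jump if the cell is nonzero
            continue
        pc += 1
    return paper

# Example: move the count in cell "a" into cell "b" (compute b := a + b).
# "one" is a cell fixed at 1, used for unconditional jumps.
program = [
    ("jnz", "a", 2),    # if a != 0, enter the loop body
    ("jnz", "one", 5),  # otherwise jump past the end (halt)
    ("dec", "a"),
    ("inc", "b"),
    ("jnz", "one", 0),  # back to the test
]
print(run(program, {"a": 3, "b": 4, "one": 1}))  # {'a': 0, 'b': 7, 'one': 1}
```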
If (1) is answered no, what would happen if you gradually replaced a biological brain with a simulated brain bit by bit, replacing sections of the cells one at a time with a machine running a simulation of its counterpart? What would that feel like for the person? Their consciousness would slowly be disappearing but they would not outwardly behave any differently, which seems very odd.
Not the OP, but a point I've made in past discussions when this argument comes up is that this would probably not be all that odd without additional assumptions.
For any realist theory of consciousness, a question you could ask is: do there exist two systems that have the same external behavior, but one system is much less conscious than the other? (∃S1, S2 : B(S1) = B(S2) ∧ C(S1) ≈ 0 ≠ C(S2)?)
Most theories answer "yes". Functionalists tend to answer "yes" because lookup tables can theoretically simulate programs. Integrated Information Theory explicitly answers yes (see Fig. 8, p. 37 in the IIT 4.0 paper). I'm not familiar with Attention Schema Theory, but I assume it has to answer yes, because you could build a functionally identical system without an attention mechanism. Essentially any theory that looks inside a system rather than only at the input/output level (that is, any non-behaviorist theory) has to answer yes.
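As a concrete illustration of the lookup-table point, here is a minimal sketch (the toy parity task and the names are invented for illustration): two systems with identical input/output behavior, one of which computes its answer while the other merely reads a precomputed table.

```python
from itertools import product

def parity_program(bits):
    """'S1': computes the parity of a 3-bit input step by step."""
    acc = 0
    for b in bits:
        acc ^= b  # internal state actually evolves during the computation
    return acc

# 'S2': a lookup table built by exhaustively tabulating S1. Externally
# indistinguishable, but internally it does nothing except one memory read.
PARITY_TABLE = {bits: parity_program(bits) for bits in product((0, 1), repeat=3)}

def parity_lookup(bits):
    return PARITY_TABLE[bits]

# Same external behavior on every input: B(S1) = B(S2).
assert all(parity_program(b) == parity_lookup(b) for b in product((0, 1), repeat=3))
```

(Note how different the two systems' internals are; this matters for the point below about S1 and S2 performing their computations in a similar fashion.)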
Well, if the answer is yes, then the situation you describe has to be possible: you just take S1 and gradually rebuild it into S2 such that behavior is preserved along the way.
So it seems to me that the fact that you can alter a system such that its consciousness fades but its behavior remains unchanged is not itself all that odd; it is something that probably has to be possible. Where it does get odd is if we also assume that S1 and S2 perform their computations in a similar fashion. One thing the examples I've listed all have in common is that this additional assumption is false: replacing e.g. a human cognitive function with a lookup table would lead to dramatically different internal behavior.
Because of all this, I think the more damning question would not just be "can you replace the brain bit by bit and consciousness fades" but "can you replace the brain bit by bit such that the new components do similar things internally to the old components, and consciousness fades"?[1] If the answer to that question is yes, then a theory might have a serious problem.

[1] Notably, this is actually the thought experiment Eliezer proposed in the Sequences (see the start of the Socrates Dialogue).
Thank you so much for your questions! :) Some quick thoughts:
If the human brain operates according to the known laws of physics, then in principle we could simulate it on a modern computer, and it would behave identically to the real thing (i.e. would respond in the same way to the same stimuli, and claim to see a purple ball with grandma's face on it if given simulated LSD).
The brain operating according to the known laws of physics doesn't imply we can simulate it on a modern computer (assuming you mean a digital computer). A trivial example is certain quantum phenomena. Digital hardware doesn't cut it. And even if you do manage to simulate certain parts of the system, the only way to get it to behave identically to the real thing is to use the real thing as the substrate. For example, sure, you can crudely simulate the propagation of light on a digital computer, but in order for it to behave identically to the real thing, you'd have to ensure e.g. that all "observers" within your simulation measure its propagation speed to be c. I don't believe you can do that given the costs of embodiment of computers.
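To give a sense of what "crudely simulate the propagation of light" might mean in the simplest case, here is a toy sketch (my own illustration: a 1D scalar wave equation rather than the full EM field). Note that the propagation speed c is just a grid parameter the programmer picks, which gestures at why reproducing all of the real field's invariances is a further, much harder demand.

```python
import numpy as np

# A crude 1D wave-equation simulation (leapfrog finite differences),
# standing in for "simulating the propagation of light" on digital
# hardware. The wave speed c is a parameter of the grid we choose.
nx, nt = 200, 150
c, dx, dt = 1.0, 1.0, 0.5               # stable since c*dt/dx <= 1 (CFL)
x = np.arange(nx)
u = np.exp(-0.1 * (x - nx // 2) ** 2)   # initial pulse in the middle
u_prev = u.copy()                       # zero initial velocity

for _ in range(nt):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next
# After nt steps the pulse has split into two wavefronts, each having
# traveled roughly c * nt * dt = 75 grid cells (periodic boundaries).
```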
Would such a brain simulation have qualia according to your view? Yes, no, or you don't think the brain operates according to known laws of physics?
It would be trivial "qualia dust," like most electromagnetic phenomena (which are not globally bound). (I do think that the brain operates according to the laws of physics.)
If (1) is answered no, what would happen if you gradually replaced a biological brain with a simulated brain bit by bit, replacing sections of the cells one at a time with a machine running a simulation of its counterpart? What would that feel like for the person? Their consciousness would slowly be disappearing but they would not outwardly behave any differently, which seems very odd.
I think "gradually replacing the biological brain with a simulated brain bit by bit" begs the question. For example, what would it mean to replace a laser beam "bit by bit"?
If (1) is answered yes, does that mean that whatever this strange property of the EM field is, it will necessarily be possessed by the inner workings of the computer as well, when this simulation is run?
Just to be clear, I'm not claiming that the EM field has some additional strange property, but rather that the EM field as it is is conscious (cf. dual-aspect monism). Also consider: when you talk about "the simulation being run," where exactly is the simulation? In the chips? In sub-elements of the chips? On the computer screen? Simulations, algorithms, etc. don't have clearly delineated boundaries, unlike our conscious experience. This is a problem.
If all the fields of physics are made of qualia, then everything is made of qualia, including the electron field, the quark fields, etc.?
I believe that to be the most parsimonious and consistent view, yes.
Thanks for the reply, this definitely helps!

The brain operating according to the known laws of physics doesn't imply we can simulate it on a modern computer (assuming you mean a digital computer). A trivial example is certain quantum phenomena. Digital hardware doesn't cut it.
Could you explain what you mean by this...? I wasn't aware that there were any quantum phenomena that could not be simulated on a digital computer. Where do the non-computable functions appear in quantum theory? (My background: I have a PhD in theoretical physics, which certainly doesn't make me an expert on this question, but I'd be very surprised if this was true and I'd never heard about it! And I'd be a bit embarrassed if it was a fact considered "trivial" and I was unaware of it!)
There are quantum processes that can't be simulated efficiently on a digital computer, but that is a different question.
Thanks, and sorry, I could have been more precise there. I guess I was thinking of the fact that, for example, some quantum systems would take, I don't know, the age of the universe to compute on a digital computer. And as I hinted in my previous response, the runtime complexity matters. I illustrated this point in a previous post, using the example of an optical setup used to compute Fourier transforms at the speed of light, which you might find interesting. Curious if you have any thoughts!
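To put a rough number on that, here is a small sketch (my own illustration, not from the linked post) of why brute-force simulation of quantum systems scales so badly: a direct statevector simulation of n qubits stores 2^n complex amplitudes, and applying even a single one-qubit gate touches all of them.

```python
import numpy as np

def apply_gate(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)                        # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))  # contract target axis
    psi = np.moveaxis(psi, 0, qubit)                    # restore axis order
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 20
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                # start in |00...0>
state = apply_gate(state, H, 0, n)            # one gate touches 2**20 amplitudes

for m in (10, 30, 50, 300):
    print(f"n={m}: 2**{m} = {2.0**m:.3g} amplitudes")
# n=30 already needs ~16 GiB just to store the state; by n=300 the number
# of amplitudes exceeds the number of atoms in the observable universe.
```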
Thanks for the link, I've just given your previous post a read. It is great! Extremely well written! Thanks for sharing!
I have a few thoughts on it I thought I'd just share. I'd be interested to read a reply, but don't worry if it would be too time-consuming.
I agree that your laser example is a good response to the "replace one neuron at a time" argument, and that at least in the context of that argument, computational complexity does matter. You can't replace components of a brain with simulated parts if the simulated parts can't keep up with the rest. If neurons are not individually replaceable, or at least not individually replaceable with something that can match the speed of a real neuron (and I accept this seems possible), then I agree that the "replace one neuron at a time" thought experiment fails.
Computational complexity still seems pretty irrelevant for the other thought experiments: whether we can simulate a whole brain on a computer, and whether we can simulate a brain with a pencil and paper. Sure, it's going to take a very long time to get results, but why does that matter? It's a thought experiment anyway.
I agree with you that the answer to the question "is this system conscious?" should be observer independent. But I didn't really follow why this belief is incompatible with functionalism?
I like the "replace one neuron at a time" thought experiment, but accept it has flaws. For me, it's the fact that we could in principle simulate a brain on a digital computer and have it behave identically that convinces me of functionalism. I can't grok how some system could behave identically but its thoughts not "exist."
I really appreciate your feedback and your questions! :)
I'd love to reply in detail but it would take me a while. :) But maybe two quick points:
On observer independence: The main challenge that computational functionalism faces (IMO) is that there's no principled way to say "THIS is the (observer-independent) system I posit to be conscious," because algorithms, simulations, etc. don't have clearly defined boundaries. It's up to us (as conscious agents) to arbitrarily determine those boundaries, so anything goes! The section "Is simulation an intrinsic property?" in this post sums it up quite neatly, I think. Field topology, as well as, say, entanglement networks, do give us observer-independent boundaries.
On the simulation behaving identically to the brain: Here I think one could reasonably ask: what if, in order for the simulation to behave identically, we had to simulate the brain even at the smallest physical scale? Many people think this isn't necessary and that the "neurons as digital switches" abstraction is enough. But say we actually had to simulate EM field phenomena, quantum phenomena, etc. Then I think runtime complexity matters, since maybe some parts of the brain can be simulated easily while others take millions of years. Can one bootstrap a coherent simulation from that? Now imagine trying to simulate multiple brains interacting with each other, running physics experiments, etc. Can one set up the simulation such that e.g. they all measure the speed of light to be the same? Or otherwise always get the same experimental results? I rather doubt it. But regardless, the previous point about observer dependence would still stand.