If the human brain operates according to the known laws of physics, then in principle your brain could be simulated with a pen and paper (at least given unlimited time, ink, and paper), and the simulation would behave identically to the real thing (it would talk and think like you and have all your opinions).
One would need infinite resources to fully reproduce the behaviour of the brain if the universe is continuous. Even if the universe is discrete, one would need an unfeasibly large amount of resources. The human brain has a volume of around 0.00120 m^3 (= (1.13 + 1.26)*10^-3/2). The Planck volume is 4.22*10^-105 m^3, so the volume of a human brain corresponds to 2.84*10^101 (= 0.00120/(4.22*10^-105)) times the Planck volume. Even assuming all the information in a volume equal to the Planck volume can be represented by a single bit, one would need 2.84*10^101 bits to fully represent the state of a human brain. This is more bits than the roughly 10^80 atoms in the universe, and a digital computer needs more than 1 atom per bit.
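As a back-of-the-envelope check of the arithmetic in that paragraph (using the same assumed values):

```python
# Brain volume as the midpoint of the two averages quoted in the text.
brain_volume = (1.13e-3 + 1.26e-3) / 2   # m^3, ~0.00120 m^3
planck_volume = 4.22e-105                # m^3

# One bit per Planck volume (a generous lower bound on the state description).
bits_needed = brain_volume / planck_volume
atoms_in_universe = 1e80

print(f"brain volume: {brain_volume:.5f} m^3")
print(f"bits needed: {bits_needed:.2e}")   # ~2.8e101
print(f"bits per atom in the universe: {bits_needed / atoms_in_universe:.2e}")
```

With the unrounded volume the ratio comes out at about 2.83*10^101, matching the 2.84*10^101 figure up to rounding.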
I don't get why the "moment of experience taking a thousand years" thing is supposed to be so weird? If we slowed down all the processes in your brain, then moments of experience would take longer in physical time. That's not an argument against your consciousness being real. And this isn't a hypothetical. We can literally do that by sending you on a spaceship close to the speed of light, and that's exactly what would happen!
This is not what would happen under special relativity. If I were sent on a spaceship close to the speed of light, I would continue ageing normally in my own frame of reference. If I travelled for N years in the frame of reference of the spaceship, I would become N years older biologically speaking (neglecting the effects of microgravity). If I then returned to Earth, more than N years would have passed on Earth, so I would effectively have time-travelled into Earth's future.
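The time-dilation arithmetic behind this can be sketched in a few lines (the function name and the 99.5%-of-c figure are my own illustrative choices, not from the original):

```python
import math

def dilated_earth_time(ship_years: float, v_over_c: float) -> float:
    """Years elapsed on Earth while `ship_years` pass on the ship,
    per special relativity (Earth frame). Hypothetical helper."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)  # Lorentz factor
    return gamma * ship_years

# At 99.5% of light speed, gamma ~= 10: 10 ship-years are ~100 Earth-years.
print(dilated_earth_time(10, 0.995))
```

So travelling for N ship-years and returning always means more than N years have passed on Earth, which is the one-way "time travel into the future" described above.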
CF predicts that some sets of AND, OR, and NOT operations are conscious even if run at an arbitrarily low speed in their local frame of reference. So, for the analogy to hold, all my brain processes would have to slow down in the frame of reference of the brain. I guess the closest one can get to this slowdown is with cryopreserved brains, and I do not think those are conscious.
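The "sets of AND, OR, and NOT operations" can be made concrete: any boolean circuit decomposes into these gates and can be evaluated one step at a time, at whatever speed, exactly as one could with pen and paper. A toy sketch (a half-adder; the example circuit is my own choice):

```python
# Evaluating a tiny circuit (a half-adder) purely via AND/OR/NOT steps.
AND = lambda a, b: a & b
OR = lambda a, b: a | b
NOT = lambda a: 1 - a

def half_adder(a: int, b: int) -> tuple[int, int]:
    # XOR built only from AND/OR/NOT: (a OR b) AND NOT(a AND b)
    s = AND(OR(a, b), NOT(AND(a, b)))  # sum bit
    c = AND(a, b)                      # carry bit
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Each gate evaluation is a discrete step, so nothing in the formalism changes if each step takes a second, a year, or a millennium.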
I don't deny that my "unlimited time, ink, and paper" caveat is doing a lot of work in my argument. But we started with a thought experiment that is impossible to implement in practice (simulating a modern digital computer with a pen and paper), so I don't see why my reply can't do the same thing (even if it might require a lot more resources).
I think it's very unlikely that the human brain requires infinite time and memory to simulate. Even if it is continuous, you could probably simulate it to arbitrary accuracy with a big enough discrete approximation. And the Bekenstein bound suggests there is a finite limit to the amount of information that can exist within a given volume.
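For a rough sense of scale, the Bekenstein bound I <= 2*pi*R*E/(hbar*c*ln 2) can be evaluated for a sphere with the brain's volume; the ~1.4 kg mass and the spherical idealisation are my own assumptions for this sketch:

```python
import math

# Assumed physical constants and brain parameters (illustrative only).
hbar = 1.0546e-34   # J*s
c = 2.998e8         # m/s
m = 1.4             # kg, approximate brain mass
V = 1.2e-3          # m^3, brain volume as above
R = (3 * V / (4 * math.pi)) ** (1 / 3)  # radius of an equivalent sphere

# Bekenstein bound in bits, with E = m*c^2.
bits = 2 * math.pi * R * (m * c**2) / (hbar * c * math.log(2))
print(f"{bits:.1e} bits")  # on the order of 10^42
```

This comes out around 2.4*10^42 bits, enormously smaller than the ~10^101 Planck-volume count, which is the sense in which the bound suggests a finite (if still huge) description suffices.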
As for whether my speed analogy works, I still think it does. Sure, if you pick a frame of reference in which you are stationary, then you continue to have experiences at the normal rate. But that wasn't the frame of reference I was using. I was working in the frame of reference of someone back on Earth, which is an equally valid frame of reference. In those coordinates, every physical process in your brain is slowed down (electrical impulses travel more slowly from one side of your brain to the other, chemical reactions slow down, etc.) and you are having experiences at a slower rate.
Suppose this was all that existed of you, and your real brain had never existed. Would that mean that you never existed as a conscious being, despite all your thoughts and utterances still being a part of the world?
I think whether my thoughts and utterances would come together with consciousness would strictly depend on how they are produced. I agree they could be reproduced at the computational (input-to-output) level with arbitrarily high precision by an infinitely powerful digital computer (see Marr's levels of analysis). However, I do not see that as sufficient (or necessary) for consciousness. An infinitely large lookup table can also reproduce human behaviour at the computational level with arbitrarily high precision, and I consider it to have the least consciousness possible (practically zero). I believe consciousness depends on algorithms and implementation, not on the input-to-output mapping. This matters to me because simple logical operations written out by hand with pen and paper can only reproduce the behaviour of humans at the input-to-output level, not at the algorithmic or implementation level. In contrast, they can reproduce the behaviour of digital computers at the computational and algorithmic levels. So my belief that they cannot be conscious makes me very sceptical about digital consciousness without causing a conflict with my belief in human consciousness.
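The computational-vs-algorithmic distinction can be made concrete with two programs whose input-to-output mappings are identical while their internal processes differ: one computes its answer step by step, the other just looks it up. (A toy illustration; the parity example and all names are my own.)

```python
# Same computational-level behaviour (input -> output), different algorithms.
def parity_algorithm(bits: str) -> int:
    """Compute parity by actually processing the input bit by bit."""
    p = 0
    for b in bits:
        p ^= int(b)
    return p

# A lookup table precomputed for every 3-bit input: identical I/O mapping,
# but no step-by-step processing happens at query time.
PARITY_TABLE = {f"{n:03b}": f"{n:03b}".count("1") % 2 for n in range(8)}

def parity_lookup(bits: str) -> int:
    return PARITY_TABLE[bits]

# Indistinguishable at Marr's computational level...
assert all(parity_algorithm(k) == parity_lookup(k) for k in PARITY_TABLE)
```

A theory on which consciousness depends only on the input-to-output mapping must treat these two as equivalent; a theory on which it depends on the algorithm or implementation need not.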
Hi Toby. Thanks for the comment.