I don't deny that my "unlimited time, ink, and paper" caveat is doing a lot of work in my argument. But we started with a thought experiment that is impossible to implement in practice (simulating a modern digital computer with a pen and paper), so I don't see why my reply can't do the same thing (even if it might require a lot more resources).
I think it's very unlikely that the human brain requires infinite time and memory to simulate. Even if its dynamics are continuous, you could probably simulate them to arbitrary accuracy with a fine enough discrete approximation. And the Bekenstein bound suggests there is a finite limit to the amount of information that can exist within a given volume.
As for whether my speed analogy works, I still think it does. Sure, if you pick a frame of reference in which you are stationary, then you continue to have experiences at the normal rate. But that wasn't the frame of reference I was using. I was working in the frame of reference of someone back on Earth, which is an equally valid frame of reference. In those coordinates, every physical process in your brain is getting slowed down (electrical impulses are travelling more slowly from one side of your brain to the other, chemical reactions are slowing down, etc.) and you are having experiences at a slower rate.
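To make the Earth-frame slowdown concrete: special relativity says every process in the traveller's brain is dilated by the Lorentz factor. A minimal sketch (the function name is mine, for illustration only):

```python
import math

def dilation_factor(v_fraction_of_c: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), with the speed v
    given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# In the Earth frame, a traveller moving at 0.8c has every physical
# process (neural impulses included) slowed by gamma:
gamma = dilation_factor(0.8)
print(round(gamma, 2))  # 1.67
```

So at 0.8c an Earth observer sees every brain process, and hence (on my view) the rate of experience, run at 1/gamma of its rest rate; nothing about that depends on which process it is.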
Suppose this was all that existed of you, and your real brain had never existed. Would that mean that you never existed as a conscious being, despite all your thoughts and utterances still being a part of the world?
I think whether my thoughts and utterances would come together with consciousness would strictly depend on how they are produced. I agree they could be reproduced at the computational (input-to-output) level to arbitrarily high precision with an infinitely powerful digital computer (see Marr's levels of analysis). However, I do not see that as sufficient (or necessary) for consciousness. An infinitely large lookup table can also reproduce human behaviour at the computational level to arbitrarily high precision, and I consider it to have the least possible consciousness (practically zero). I believe consciousness depends on algorithms and implementation, not on the input-to-output mapping. This matters to me because simple logical operations written out by hand with pen and paper can only reproduce the behaviour of humans at the input-to-output level, not at the algorithmic or implementation level. In contrast, they can reproduce the behaviour of digital computers at the computational and algorithmic levels. So my belief that they cannot be conscious makes me very sceptical about digital consciousness without causing a conflict with my belief in human consciousness.
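The distinction between Marr's computational and algorithmic levels can be made concrete with a toy example (entirely illustrative, not from the original discussion): two functions with the identical input-to-output mapping but completely different internal processes.

```python
# Same computational-level description (the mapping (a, b) -> a + b),
# two different algorithmic-level realisations.

def add_by_algorithm(a: int, b: int) -> int:
    """Computes the sum at query time."""
    return a + b

# A lookup table precomputed over a small finite domain: at query time
# nothing is computed, the answer is merely retrieved.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_lookup(a: int, b: int) -> int:
    """Retrieves the sum from a precomputed table."""
    return ADD_TABLE[(a, b)]

# Indistinguishable at the input-to-output level on the shared domain:
assert all(add_by_algorithm(a, b) == add_by_lookup(a, b)
           for a in range(10) for b in range(10))
```

The two functions agree on every input in the table's domain, yet one computes and the other only retrieves; if consciousness attaches to the algorithmic or implementation level rather than the mapping, they need not be on a par.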