This is a crosspost for the Pen & Paper Argument on Computational Functionalism Debate. This website has "A structured assembly of arguments in support of and challenging digital consciousness". It was announced on the EA Forum on 7 November 2025. "The current project lead is Chris Percy PhD".
Overview
The algorithm that is conscious in a computer can, by CF [computational functionalism] assumption, be replicated in all relevant aspects of its function by writing it out by hand with pen and paper, e.g. conducting the matrix multiplications by hand over as many years as it takes. [Digital computers are just AND, OR, and NOT operations. So, under CF, some sets of these operations would have to be conscious for consciousness to be possible in digital computers. I (Vasco) do not see how any set of such operations could itself be conscious, and therefore reject CF.]
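As a concrete illustration of the claim that digital computation reduces to AND, OR, and NOT, here is a minimal sketch (my own example, not from the debate website) of a one-bit half-adder built from only those three gates:

```python
# A one-bit half-adder built purely from AND, OR, and NOT,
# illustrating that arithmetic in a digital computer reduces
# to compositions of these three operations.
def AND(a, b): return a & b
def OR(a, b): return a | b
def NOT(a): return 1 - a

def XOR(a, b):
    # XOR expressed using only AND, OR, and NOT.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    # Returns (sum_bit, carry_bit) for the addition a + b.
    return XOR(a, b), AND(a, b)
```

Every step of such a computation could in principle be written out with pen and paper, which is the point of the thought experiment.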
Specifically we could use this method to instantiate the feeling of "being you right now in this second". Even if we wrote it down a thousand years from now and it took a thousand years to write it, a moment of experience identical to the one you are having now would materialise, and it would map to some physical spatiotemporal structure somewhere in this system of paper calculations. No matter how long the calculation took to write, the experienced moment would be no longer than the second of your current experience, i.e. there would likely be a temporal disconnect between the algorithm duration and the experience generated.
Closely related to the Chinese Room, US Economy, and Leibniz's Mill arguments.
Responses
The bullet can be bitten simply by rejecting the intuition that such a paper system being conscious is "weird", or by rejecting the claim that "weirdness" of intuitions is a guide to truthfulness (pointing perhaps to weird intuitions in modern physics, such as quantum mechanics and general relativity, or the diverse ways proposed to resolve certain logical paradoxes).
BUT: Such an approach would need to be applied consistently to alternative accounts of consciousness as well. What makes an intuition about "weird implications" credible grounds for rejecting one theory (e.g. the promiscuity of panpsychism) but not another?
Additional constraints could be put on CF to prevent this kind of outcome from occurring. For instance, the thermodynamics of calculation implementation could be drawn on to motivate a need for a spatiotemporal intensity constraint on the algorithm.
BUT: Such constraints could be hard to motivate (although might produce testable conclusions) and would move away from some of the canonical motivations for CF.
If the human brain operates according to the known laws of physics, then in principle your brain could be simulated with pen and paper (at least given unlimited time, ink, and paper), and the simulation would behave identically to the real thing (it would talk and think like you and hold all your opinions).
Suppose this simulation was all that existed of you, and your real brain had never existed. Would that mean that you never existed as a conscious being, despite all your thoughts and utterances still being part of the world? That seems like a much more counterintuitive conclusion to me than biting the bullet on pen-and-paper simulations having the potential for consciousness.
I don't get why the "moment of experience taking a thousand years" thing is supposed to be so weird. If we slowed down all the processes in your brain, then moments of experience would take longer in physical time. That's not an argument against your consciousness being real. And this isn't a hypothetical: we can literally do that by sending you on a spaceship close to the speed of light, and that's exactly what would happen!
Hi Toby. Thanks for the comment.
One would need infinite resources to fully reproduce the behaviour of the brain assuming the universe is continuous. Even if the universe is discrete, one would need an unfeasibly large amount of resources. The human brain has a volume of around 0.00120 m^3 (= (1.13 + 1.26)*10^-3/2). The Planck volume is 4.22*10^-105 m^3. So the volume of a human brain corresponds to 2.84*10^101 (= 0.00120/(4.22*10^-105)) times the Planck volume. Even assuming all the information in a volume equal to the Planck volume can be represented by a single bit, one would need 2.84*10^101 bits to fully represent the state of a human brain. This is more bits than the 10^80 or so atoms in the universe, and one needs more than 1 atom per bit in a digital computer.
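The back-of-the-envelope arithmetic above can be checked directly; this sketch just reproduces the figures quoted in the text:

```python
# Reproduce the back-of-the-envelope figures from the text.
brain_volume = (1.13e-3 + 1.26e-3) / 2      # average human brain volume, ~0.00120 m^3
planck_volume = 4.22e-105                   # Planck volume, m^3
bits_needed = brain_volume / planck_volume  # ~2.8e101 bits at one bit per Planck volume
atoms_in_universe = 1e80                    # rough order-of-magnitude estimate
# bits_needed exceeds atoms_in_universe by ~21 orders of magnitude.
```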
This is not what would happen under special relativity. If I was sent on a spaceship close to the speed of light, I would continue aging normally in my frame of reference. If I travelled for N years in the frame of reference of the spaceship, I would become N years older biologically speaking (neglecting the effects of microgravity). If I then returned to Earth, more than N years would have passed on Earth. So I would effectively have time-travelled into the future on Earth.
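For reference, the size of this effect follows from the Lorentz factor; the ship speed of 0.99c and the 10 years of proper time below are my own illustrative choices:

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 0.99 * c        # illustrative ship speed
# Lorentz factor: time elapsed on Earth per unit of proper time aboard the ship.
gamma = 1 / math.sqrt(1 - (v / c) ** 2)  # ~7.09

N = 10                   # years aged aboard the ship (proper time)
earth_years = gamma * N  # ~70.9 years elapse on Earth in the meantime
```

The traveller still ages N years by their own clock, which is the point being made: proper time, not coordinate time, governs the traveller's biology.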
CF predicts that some sets of AND, OR, and NOT operations are conscious even if run at an arbitrarily low speed in their local frame of reference. So all my brain processes would have to slow down in the frame of reference of the brain for the analogy to hold. I guess one can get closest to this slowdown with cryopreserved brains, and I do not think these are conscious.
I don't deny that my "unlimited time, ink, and paper" caveat is doing a lot of work in my argument. But we started with a thought experiment that is impossible to implement in practice (simulating a modern digital computer with a pen and paper) so I don't see why my reply can't do the same thing (even if it might require a lot more resources).
I think it's very unlikely that the human brain requires infinite time and memory to simulate. Even if continuous, you could probably simulate to arbitrary accuracy with a big enough discrete approximation. And the Bekenstein bound suggests there is a finite limit to the amount of information that can exist within a given volume.
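As a rough illustration of the Bekenstein-bound point, applying the bound I ≤ 2πRE/(ħc ln 2) with E = Mc² to a brain-sized system gives a far smaller figure than a one-bit-per-Planck-volume count; the brain mass and radius used here are my own rough assumptions:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s
R = 0.07                 # assumed brain radius, m (rough)
M = 1.4                  # assumed brain mass, kg (rough)

# Bekenstein bound: I <= 2*pi*R*E / (hbar*c*ln 2), with E = M*c^2,
# so the c's partially cancel: I <= 2*pi*R*M*c / (hbar*ln 2).
max_bits = 2 * math.pi * R * M * c / (hbar * math.log(2))  # ~2.5e42 bits
```

On these assumptions the information content of a brain is bounded at roughly 10^42 bits, nowhere near the 10^101 Planck-volume figure.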
As for whether my speed analogy works, I still think it does. Sure, if you pick a frame of reference in which you are stationary, then you continue to have experiences at the normal rate. But that wasn't the frame of reference I was using. I was working in the frame of reference of someone back on Earth, which is an equally valid frame of reference. In those coordinates, every physical process in your brain is getting slowed down (electrical impulses are travelling slower from one side of your brain to the other, chemical reactions are slowing down, etc) and you are having experiences at a slower rate.
I think whether my thoughts and utterances would come together with consciousness would strictly depend on how they are produced. I agree they could be reproduced at the computational (input-to-output) level with an arbitrarily high precision by an infinitely powerful digital computer (see Marr's levels of analysis). However, I do not see that as sufficient (or necessary) for consciousness. An infinitely large lookup table can also reproduce human behaviour at the computational level with an arbitrarily high precision, and I consider it to have the least possible consciousness (practically 0). I believe consciousness depends on algorithms and implementation, not on the input-to-output mapping. This matters to me because simple logical operations written out by hand with pen and paper can only reproduce the behaviour of humans at the input-to-output level, not at the algorithmic or implementation level. In contrast, they can reproduce the behaviour of digital computers at the computational and algorithmic level. So my belief that they cannot be conscious makes me very sceptical about digital consciousness without causing a conflict with my belief in human consciousness.
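The distinction between the computational and algorithmic levels can be made concrete with a toy example (my own illustration): two programs with identical input-to-output behaviour but different internal algorithms:

```python
# Identical input-to-output mapping, different algorithms.
def square_by_multiplication(n):
    # Computes the result via an arithmetic operation at run time.
    return n * n

# Precomputed lookup table over the same (finite) domain.
SQUARE_TABLE = {n: n * n for n in range(100)}

def square_by_lookup(n):
    # Reproduces the same mapping with no arithmetic at run time.
    return SQUARE_TABLE[n]
```

At the computational (Marr) level the two functions are indistinguishable; they differ only at the algorithmic and implementation levels, which is where the argument above locates consciousness.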