The pen and paper argument against computational functionalism
Link post
This is a crosspost of the Pen & Paper Argument from the Computational Functionalism Debate website, which offers "a structured assembly of arguments in support of and challenging digital consciousness". It was announced on the EA Forum on 7 November 2025. "The current project lead is Chris Percy PhD".
Overview
The algorithm that is conscious in a computer can, by the CF [computational functionalism] assumption, be replicated in all relevant aspects of its function by writing it out by hand with pen and paper, e.g. conducting the matrix multiplications by hand over as many years as it takes. [Digital computers are built from just AND, OR, and NOT operations. So some sets of these operations would have to be conscious for consciousness to be possible in digital computers under CF. I (Vasco) do not see how any set of such operations could itself be conscious, and therefore reject CF.]
Specifically, we could use this method to instantiate the feeling of "being you right now in this second". Even if we began writing it a thousand years from now and took a thousand years to write it, a moment of experience identical to the one you are having now would materialise, and it would map to some physical spatiotemporal structure somewhere in this system of paper calculations. No matter how long the calculation took to write, the experienced moment would be no longer than the second of your current experience; i.e., there would likely be a temporal disconnect between the duration of the algorithm and the experience it generates.
Closely related to the Chinese Room, US Economy, and Leibniz's Mill arguments.
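The reduction the argument relies on, that any digital computation, including the matrix multiplications of a neural network, bottoms out in AND, OR, and NOT operations, can be sketched concretely. Below is a minimal, illustrative Python sketch (the function names are mine, not from the original debate) that builds integer addition from those three gates alone; multiplication is repeated addition, and a matrix product is sums of products, so in principle the whole calculation could be carried out gate by gate on paper.

```python
# Illustrative sketch: arithmetic built from only AND, OR, and NOT.
# This does not argue for or against CF; it only makes the reduction
# in the argument concrete.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def XOR(a, b):
    # XOR expressed using only the three primitive gates above.
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry):
    # One-bit addition with carry, again using only the gates above.
    s = XOR(XOR(a, b), carry)
    c_out = OR(AND(a, b), AND(carry, XOR(a, b)))
    return s, c_out

def add(x, y, width=8):
    """Add two non-negative integers bit by bit via the full adder."""
    carry = 0
    result = 0
    for i in range(width):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result

assert add(13, 29) == 42
```

Each call here could in principle be replaced by a pencil mark on paper, which is exactly the substitution the argument asks CF to accept as preserving consciousness.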
Responses
The bullet can be bitten simply by rejecting the intuition that such a paper system being conscious is "weird", or by rejecting the claim that the "weirdness" of intuitions is a guide to truth (pointing perhaps to weird intuitions in modern physics, such as quantum mechanics and general relativity, or the diverse ways proposed to resolve certain logical paradoxes).
BUT: Such an approach would need to be applied consistently to alternative accounts of consciousness as well. What makes one intuition about "weird implications" credible grounds for rejecting a theory (e.g. the promiscuity of panpsychism) but not credible for another?
Additional constraints could be put on CF to prevent this kind of outcome. For instance, the thermodynamics of implementing a calculation could be drawn on to motivate a spatiotemporal intensity constraint on the algorithm.
BUT: Such constraints could be hard to motivate (although they might yield testable predictions) and would move away from some of the canonical motivations for CF.