Interesting post!

One thing that confused me was that there were sections where it felt like you suggested that X is an argument against functionalism, where my own inclination would be to just somewhat reinterpret functionalism in light of X. For example:
I’ve yet to see a satisfactory functionalist account of how binding can happen (and I’ve come to believe that it’s not even possible in principle). At the same time, possible solutions positing that binding happens, for example, at the level of the electromagnetic fields produced by neurons strike me as elegant, parsimonious, and rigorous.
If it turns out that the information in separate neurons is somehow bound together by electromagnetic fields (I’ll admit that I didn’t read the papers you linked, so I don’t understand what exactly this means), then why couldn’t we have a functionalist theory that included electromagnetic fields as its own communication channel? If we currently think that neurons communicate mostly by electric and chemical messages, then it doesn’t seem like a huge issue to revise that theory to say that the causal properties involved are achieved in part electromagnetically.
My understanding is that functionalist theories are characterized by their implicit ontological assumption that p-consciousness is an abstract entity; namely, a function. But “there are multiple ways to physically realize any (Turing-level) computation, and multiple ways to interpret a physical realization as computation, and no privileged way to choose between them” (Johnson, 2024, p.5). If a functionalist theory identifies an abstract entity that can only be implemented within a particular physical substrate (e.g., quantum theories of consciousness) then you solve the reality mapping problem (cf. Johnson, 2016, p.61). But most functionalist theories are not physically constrained in this way; a theory which identifies function p as sufficient for consciousness has to be open to p being realized within any physical system where the relevant causal mappings are preserved (both brains and silicon chips). EM field theories of consciousness are an elegant solution to the phenomenal binding problem precisely because there already exists a physical mechanism for drawing nontrivial boundaries between two conscious experiences: topological segmentation.
Right, most functionalist theories are currently not physically constrained. But if it turns out that consciousness requires causal properties implemented by EM fields, then the functions used by the theory would become ones defined in part by the math of EM fields. Which could then turn out to be physically constrained in practice, if the relevant causal properties could only be implemented by EM fields. (Though it might still allow a sufficiently good computer simulation of a brain with EM fields to be conscious.)
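To make the quoted claim about multiple interpretations concrete before responding to it, here is a deliberately toy sketch (my own construction, not taken from the post or from Johnson's papers): the very same physical state table can be read as computing AND under one labeling of its states and as computing OR under another.

```python
# Toy illustration: one "physical" state table, two incompatible but equally
# valid computational readings of it, depending only on how we label the states.

# The device's physical behaviour: output is HIGH exactly when both inputs are HIGH.
physical_table = {
    ("LOW", "LOW"): "LOW",
    ("LOW", "HIGH"): "LOW",
    ("HIGH", "LOW"): "LOW",
    ("HIGH", "HIGH"): "HIGH",
}

def computed_function(labeling):
    """Read the physical table through a labeling and return the resulting truth table."""
    return {
        (labeling[a], labeling[b]): labeling[out]
        for (a, b), out in physical_table.items()
    }

# Interpretation 1: HIGH = 1, LOW = 0  ->  the device "computes" AND.
print(computed_function({"LOW": 0, "HIGH": 1}))
# {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Interpretation 2: HIGH = 0, LOW = 1  ->  the same device "computes" OR.
print(computed_function({"LOW": 1, "HIGH": 0}))
# {(1, 1): 1, (1, 0): 1, (0, 1): 1, (0, 0): 0}
```

Nothing at the level of the state table itself privileges one labeling over the other; that is the force of the quoted claim.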
But “there are multiple ways to physically realize any (Turing-level) computation, and multiple ways to interpret a physical realization as computation, and no privileged way to choose between them”
This argument feels somewhat unconvincing to me. Of course, there are situations where you can validly interpret a physical realization as multiple different computations. But I tend to agree with e.g. Scott Aaronson’s argument (pp. 22-25) that if you want to interpret a waterfall as computing a good chess move, then you probably need to include in your interpretation a component that calculates the chess move and which could do so even without making use of the waterfall. And then you get an objective way of checking whether the waterfall actually implements the chess algorithm:
… it seems overwhelmingly likely that any reduction algorithm would just solve the chess problem itself, without using the waterfall in an essential way at all! A bit more precisely, I conjecture that, given any chess-playing algorithm A that accesses a “waterfall oracle” W, there is an equally-good chess-playing algorithm A′, with similar time and space requirements, that does not access W. If this conjecture holds, then it gives us a perfectly observer-independent way to formalize our intuition that the “semantics” of waterfalls have nothing to do with chess. [...]
Interestingly, the issue of “trivial” or “degenerate” reductions also arises within complexity theory, so it might be instructive to see how it is handled there. [...] Suppose we want to claim, for example, that a computation that plays chess is “equivalent” to some other computation that simulates a waterfall. Then our claim is only non-vacuous if it’s possible to exhibit the equivalence (i.e., give the reductions) within a model of computation that isn’t itself powerful enough to solve the chess or waterfall problems.
Likewise, if a system is claimed to implement all of the functions of consciousness (whatever exactly they are) and produce the same behavior as that of a real conscious human… then I think that there’s some real sense of “actually computing the behavior and conscious thoughts of this human” that you cannot replicate unless you actually run that specific computation. (I also agree with @MichaelStJules’s comment that the various functions performed inside an actual human seem generally much less open to interpretation than the kinds of toy functions mentioned in this post.)
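To make Aaronson's proposed check concrete, here is a minimal toy sketch (my own construction in Python; `waterfall_trace` and `best_move` are invented stand-ins, not anything from his paper). The point is that any "reduction" which reads a chess move out of an arbitrary physical trace has to compute the move itself, so deleting the waterfall leaves the answer unchanged.

```python
import random

def waterfall_trace(seed: int, length: int = 64) -> list[int]:
    """Stand-in for the waterfall: a physically determined but chess-irrelevant
    sequence of 'microstates' (here just pseudo-random bytes)."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(length)]

def best_move(position: str) -> str:
    """Stand-in for a real chess engine; for this toy example it just picks a
    move from a hard-coded table."""
    legal_moves = {"start": ["e2e4", "d2d4", "g1f3", "c2c4"]}
    return min(legal_moves[position])

def interpret_waterfall_as_chess(position: str, trace: list[int]) -> str:
    """The degenerate 'reduction': encode the position into the waterfall's states,
    then decode a move back out. All the real chess-playing work happens inside
    this interpretation; the trace only disguises the answer."""
    move = best_move(position)                                   # the actual computation
    encoded = [t ^ ord(c) for t, c in zip(trace, move)]          # 'write' it into the trace
    return "".join(chr(t ^ e) for t, e in zip(trace, encoded))   # 'read' it back out

if __name__ == "__main__":
    trace = waterfall_trace(seed=42)
    with_waterfall = interpret_waterfall_as_chess("start", trace)
    without_waterfall = best_move("start")
    # Aaronson's criterion: if an equally good move comes out with the oracle
    # removed, the waterfall was doing no essential work.
    print(with_waterfall, without_waterfall, with_waterfall == without_waterfall)
```

If removing the oracle never costs you anything, the waterfall was doing no essential computational work, which is the observer-independent sense in which its "semantics" have nothing to do with chess.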