Interesting post!
One thing that confused me was that in some sections you seemed to present X as an argument against functionalism, where my own inclination would be to simply reinterpret functionalism in light of X. For example:
> I’ve yet to see a satisfactory functionalist account of how binding can happen (and I’ve come to believe that it’s not even possible in principle). At the same time, possible solutions positing that binding happens, for example, at the level of the electromagnetic fields produced by neurons strike me as elegant, parsimonious, and rigorous.
If it turns out that the information in separate neurons is somehow bound together by electromagnetic fields (I’ll admit that I didn’t read the papers you linked, so I don’t understand what exactly this means), then why couldn’t we have a functionalist theory that included electromagnetic fields as a communication channel in their own right? If we currently think that neurons communicate mostly by electric and chemical messages, then it doesn’t seem like a huge issue to revise that theory to say that the causal properties involved are achieved in part electromagnetically.
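To make that concrete, here's a minimal toy sketch (in Python; purely illustrative, not real neurophysiology or real EM physics) of what such a two-channel functional model might look like: units exchange pairwise "synaptic" messages, and they also all couple to a single shared field value computed from the whole population, standing in for the EM-field channel.

```python
import numpy as np

# Toy sketch: a functional model whose update rule has two channels --
# pairwise "synaptic" messages and a shared global "field" value.
# All names and parameters here are illustrative assumptions.

rng = np.random.default_rng(0)
n = 50
W = rng.normal(0, 1 / np.sqrt(n), (n, n))  # pairwise "synaptic" weights
field_gain = 0.1                           # strength of the shared-field channel

state = rng.normal(size=n)
for _ in range(100):
    synaptic = W @ np.tanh(state)                 # channel 1: pairwise messages
    field = field_gain * np.tanh(state).mean()    # channel 2: one global field value
    state = 0.9 * state + synaptic + field        # both channels drive the same dynamics
```

The point is just structural: once the field term is part of the update rule, it's part of the function the theory ascribes to the system, on exactly the same footing as the synaptic term.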
Right, most functionalist theories are currently not physically constrained. But if it turns out that consciousness requires causal properties implemented by EM fields, then the functions used by the theory would become ones defined in part by the math of EM fields, which could then turn out to be physically constrained in practice, if the relevant causal properties could only be implemented by EM fields. (Though it might still allow a sufficiently good computer simulation of a brain with EM fields to be conscious.)
This argument feels somewhat unconvincing to me. Of course, there are situations where you can validly interpret a physical realization as multiple different computations. But I tend to agree with e.g. Scott Aaronson’s argument (p. 22-25) that if you want to interpret, say, a waterfall as computing a good chess move, then you probably need to include in your interpretation a component that calculates the chess move and which could do so even without making use of the waterfall. That gives you an objective way of checking whether the waterfall actually implements the chess algorithm.
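Here's a toy way to see the shape of that argument (a hypothetical sketch in Python; `waterfall_state`, `chess_engine`, and `interpret` are my own placeholder names, and the "engine" is a stub, not a real chess engine): if the interpretation that maps waterfall states to chess moves is to reliably land on good moves, the interpretation itself has to compute them, and the waterfall's state ends up doing no work.

```python
import random

def waterfall_state() -> int:
    """Stand-in for the waterfall's physical state: structureless noise."""
    return random.getrandbits(64)

def chess_engine(position: str) -> str:
    """Placeholder engine: all the real computational work lives here."""
    moves = sorted(position.split(","))  # pretend these are the legal moves
    return moves[0]                      # pick one deterministically

def interpret(noise: int, position: str) -> str:
    """The 'interpretation' mapping waterfall states to chess moves.

    For the mapping to land on a good move, it has to compute that move
    itself; the waterfall's state contributes nothing.
    """
    move = chess_engine(position)
    _ = noise  # the waterfall's state is never actually used
    return move

print(interpret(waterfall_state(), "e2e4,d2d4,g1f3"))
```

Deleting `waterfall_state` from this picture changes nothing about the output, which is the objective sense in which the chess algorithm lives in the reduction rather than in the waterfall.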
Likewise, if a system is claimed to implement all of the functions of consciousness (whatever exactly they are) and to produce the same behavior as that of a real conscious human… then I think that there’s some real sense of “actually computing the behavior and conscious thoughts of this human” that you cannot replicate unless you actually run that specific computation. (I also agree with @MichaelStJules’s comment that the various functions performed inside an actual human seem generally much less open to interpretation than the kinds of toy functions mentioned in this post.)