I only claim that most physical states/processes have only a very limited collection of computational states/processes that they can reasonably be interpreted as.

I haven’t read most of this paper, but it seems to argue for that claim.

3.4. Counterfactually stable account of implementation
To claim a computational understanding of a system, it is necessary for us to be able to map its instantaneous states and variables to those of a model. Such a mapping is, however, far from sufficient to establish that the system is actually implementing the model: without additional constraints, a large enough conglomerate of objects and events can be mapped so as to realize any arbitrary computation (Chalmers, 1994; Putnam, 1988). A careful analysis of what it means for a physical system to implement an abstract computation (Chalmers, 1994; Maudlin, 1989) suggests that, in addition to specifying a mapping between the respective instantaneous states of the system and the computational model, one needs to spell out the rules that govern the causal transitions between corresponding instantaneous states in a counterfactually resistant manner.

In the case of modeling phenomenal experience, the stakes are actually much higher: one expects a model of qualia to be not merely good (in the sense of the goodness of fit between the model and its object), but true and unique. Given that a multitude of distinct but equally good computational models may exist, why is not the system realizing a multitude of different experiences at a given time? Dodging this question amounts to conceding that computation is not nomologically related to qualia.

Construing computation in terms of causal interactions between instantaneous states and variables of a system has ramifications that may seem problematic for modeling experience. If computations and their implementations are individuated in terms of causal networks, then any given, specific experience or quale is individuated (in part) by the system’s entire space of possible instantaneous states and their causal interrelationships. In other words, the experience that is unfolding now is defined in part by the entire spectrum of possible experiences available to the system.

In subsequent sections, we will show that this explanatory problem is not in fact insurmountable, by outlining a solution for it. Meanwhile, we stress that while computation can be explicated by numbering the instantaneous states of a system and listing rules of transition between these states, it can also be formulated equivalently in dynamical terms, by defining (local) variables and the dynamics that govern their changes over time. For example, in neural-like models computation can be explicated in terms of the instantaneous state of “representational units” and the differential equations that together with present input lead to the unfolding of each unit’s activity over time. Under this description, computational structure results entirely from local physical interactions.
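The dynamical formulation at the end of that setup can be made concrete with a toy sketch (my own illustration, not the authors’ model): a pair of leaky “representational units” whose coupled differential equations, integrated by simple Euler steps, carry out a small winner-take-all computation purely through local interactions.

```python
import math

# Toy sketch (mine, not the paper's model): computation described dynamically,
# as in the quoted passage -- each unit's activity evolves under a differential
# equation driven only by local interactions:
#   dx_i/dt = -x_i + sum_j W[i][j] * tanh(x_j) + input_i

def step(x, W, inp, dt=0.01):
    """One Euler step of the leaky-integrator network."""
    n = len(x)
    return [
        x[i] + dt * (-x[i] + sum(W[i][j] * math.tanh(x[j]) for j in range(n)) + inp[i])
        for i in range(n)
    ]

def run(x, W, inp, steps=2000):
    for _ in range(steps):
        x = step(x, W, inp)
    return x

# Two mutually inhibiting units implement a winner-take-all "decision":
# the more strongly driven unit ends up active, the other suppressed.
W = [[0.0, -2.0],
     [-2.0, 0.0]]
final = run([0.0, 0.0], W, inp=[1.0, 0.5])
print(final[0] > final[1])  # prints True: the unit with the larger input wins
```

Nothing here depends on a global controller; the trajectory unfolds from the local update rule alone, which is the point of the passage.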

It’s a little bit difficult to parse precisely how they believe they solve the problem of multiple realization of computational interpretations of a system, but the key passage seems to be:

Third, because of multiple realizability of computation, one computational process or system can represent another, in that a correspondence can be drawn between certain organizational aspects of one process and those of the other. In the simplest representational scenario, correspondence holds between successive states of the two processes, as well as between their respective timings. In this case, the state-space trajectory of one system unfolds in lockstep with that of the other system, because the dynamics of the two systems are sufficiently close to one another; for example, formal neurons can be wired up into a network whose dynamics would emulate (Grush, 2004) that of the falling rock mentioned above. More interesting are cases in which the correspondence exists on a more abstract level, for instance between a certain similarity structure over some physical variables “out there” in the world (e.g., between objects that fall like a rock and those that drift down like a leaf) and a conceptual structure over certain instances of neural activity, as well as cases in which the system emulates aspects of its own dynamics. Further still, note that once representational mechanisms have been set in place, they can also be used “offline” (Grush, 2004). In all cases, the combinatorics of the world ensures that the correspondence relationship behind instances of representation is highly non-trivial, that is, unlikely to persist purely as a result of a chance configurational alignment between two randomly picked systems (Chalmers, 1994).

My attempt at paraphrasing this: if we can model the evolution of a physical system and the evolution of a computational system with the same phase space for some finite time t, then as t increases we can be increasingly confident the physical system is instantiating this computational system. In the limit (t → ∞), this may offer a method for uniquely identifying which computational system a physical system is instantiating.
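A toy version of that paraphrase (my own, much cruder than anything in the paper): treat the “physical system” as a trace of discrete states, and the candidate “computational systems” as all deterministic transition functions over those states. Watching the trace for longer rules out more candidates, and within this restricted class a long enough trace pins down a unique one. Real triviality arguments allow arbitrary re-mappings of physical to computational states, which this toy deliberately ignores.

```python
from itertools import product

# Candidate "computational systems": all transition functions f: STATES -> STATES.
STATES = "abcd"

def consistent_candidates(trace):
    """Return all transition functions that agree with the observed trace."""
    survivors = []
    for values in product(STATES, repeat=len(STATES)):
        f = dict(zip(STATES, values))
        if all(f[trace[i]] == trace[i + 1] for i in range(len(trace) - 1)):
            survivors.append(f)
    return survivors

# "Physical" dynamics to observe: a -> b -> c -> d -> a -> ...
full_trace = "abcdabcd"
for t in range(1, len(full_trace) + 1):
    n = len(consistent_candidates(full_trace[:t]))
    print(f"t={t}: {n} candidate systems remain")
# Survivor counts shrink: 256, 64, 16, 4, then 1 from t=5 onward.
```

Here uniqueness is reached at finite t only because the candidate class is tiny; with unconstrained state-mappings the survivor count never drops this far, which is the problem the quoted passage is wrestling with.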

My intuition here is that the closer they get to solving the problem of how to ‘objectively’ determine what computations a physical system is realizing, the further their framework will stray from the Turing paradigm of computation and the closer it will get to a hypercomputation paradigm (which in turn may essentially turn out to be isomorphic to physics). But, I’m sure I’m biased, too. :) Might be worth a look.

The counterfactual response is typically viewed as inadequate in the face of triviality arguments. However, when we count the number of automata permitted under that response, we find it succeeds in limiting token physical systems to realizing at most a vanishingly small fraction of the computational systems they could realize if their causal structure could be ‘repurposed’ as needed. Therefore, the counterfactual response is a prima facie promising reply to triviality arguments.

Someone might object this result nonetheless does not effectively handle the metaphysical issues raised by those arguments. Specifically, an ‘absolutist’ regarding the goals of an account of computational realization might hold that any satisfactory response to triviality arguments must reduce the number of possibly-realized computational systems to one, or to some number close to one. While the counterfactual response may eliminate the vast majority of computational systems from consideration, in comparison to any small constant, the number of remaining possibly-realized computational systems is still too high (2^n).
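A toy way to see how both halves of that can be true at once (my own construction, not the paper’s formalism): identify the computational systems a physical system counterfactually implements with the groupings of its n states that its dynamics respects, i.e., congruences, each of which yields a quotient automaton. For a typical dynamics only a tiny fraction of groupings survive the constraint, yet a sufficiently structured dynamics can still support a number of implementations exponential in n.

```python
# Toy model: "implementations under the counterfactual constraint" = partitions
# of the n physical states whose blocks the dynamics f maps consistently.

def partitions(items):
    """Yield all set partitions of a list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [first]] + p[i + 1:]
        yield [[first]] + p

def is_congruence(p, f):
    """True if f maps every block of partition p into a single block."""
    block_of = {s: i for i, b in enumerate(p) for s in b}
    return all(len({block_of[f[s]] for s in b}) == 1 for b in p)

n = 8
states = list(range(n))
cycle = {s: (s + 1) % n for s in states}   # a rigid dynamics: an 8-cycle
identity = {s: s for s in states}          # a maximally 'repurposable' dynamics

all_parts = list(partitions(states))
print(len(all_parts))                                     # 4140 groupings (Bell(8))
print(sum(is_congruence(p, cycle) for p in all_parts))    # only 4 survive (divisors of 8)
print(sum(is_congruence(p, identity) for p in all_parts)) # all 4140 survive
```

So the constraint prunes 4140 candidate groupings down to 4 for the cycle (a vanishing fraction), while the identity dynamics still realizes all 4140, which already exceeds 2^8; that is the ‘still too high’ worry in miniature.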

That seems like a useful approach; in particular:

On the other hand, the argument suggests at least some computational hypotheses regarding cognition are empirically substantive: by identifying types of computation characteristic of cognition (e.g., systematicity, perhaps), we limit potential cognitive devices to those whose causal structure includes these types of computation in the sets of possibilities they support.

This does seem to support the idea that progress can be made on this problem! On the other hand, the author’s starting assumption is that we can treat a physical system as a computational (digital) automaton, which seems like a pretty big assumption.

I think this assumption may or may not turn out to be ultimately true (Wolfram et al.), but given current theory it seems difficult to reduce actual physical systems to computational automata in practice. In particular, it seems difficult to apply this framework to (1) quantum systems (which all physical systems ultimately are) and (2) biological systems with messy levels of abstraction, such as the brain (which we’d want to be able to handle for the purposes of functionalism).
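One way to see the practical difficulty (a toy of my own, not from the text): coarse-grain even a very simple continuous system into finitely many states and check whether the induced transitions form a deterministic automaton. For a chaotic map and a generic binning, they don’t: most bins are observed to transition into several different bins.

```python
# Coarse-grain the chaotic logistic map (r=4) into k bins and inspect whether
# the binned dynamics is deterministic, i.e., whether it is an automaton at all.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def induced_transitions(k, n_samples=10000):
    """Map each of k bins to the set of bins it is observed to transition into."""
    trans = {b: set() for b in range(k)}
    x = 0.123  # an arbitrary seed in (0, 1)
    for _ in range(n_samples):
        b = min(int(x * k), k - 1)
        x = logistic(x)
        trans[b].add(min(int(x * k), k - 1))
    return trans

trans = induced_transitions(k=8)
nondet = [b for b, targets in trans.items() if len(targets) > 1]
print(len(nondet) > 0)  # prints True: several bins have multiple successors
```

The coarse-graining would need to be chosen very carefully (if it exists at all) for the binned system to behave like a digital automaton, which is roughly the gap between the author’s assumption and messy physical systems.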

From a physics perspective, I wonder if we could figure out a way to feed in a bounded wavefunction and identify some reasonable upper bound on the number of computational interpretations of the system. My instinct is that David Deutsch might be doing relevant work, but I’m not at all sure of this.

Aaronson’s “Is ‘information is physical’ contentful?” also seems relevant to this discussion (though I’m not sure exactly how to apply his arguments):

But we should’ve learned by now to doubt this sort of argument. There’s no general principle, in our universe, saying that you can hide as many bits as you want in a physical object, without those bits influencing the object’s observable properties. On the contrary, in case after case, our laws of physics seem to be intolerant of “wallflower bits,” which hide in a corner without talking to anyone. If a bit is there, the laws of physics want it to affect other nearby bits and be affected by them in turn.
…
In summary, our laws of physics are structured in such a way that even pure information often has “nowhere to hide”: if the bits are there at all in the abstract machinery of the world, then they’re forced to pipe up and have a measurable effect. And this is not a tautology, but comes about only because of nontrivial facts about special and general relativity, quantum mechanics, quantum field theory, and thermodynamics. And this is what I think people should mean when they say “information is physical.”
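Relatedly, one standard quantitative instance of information having unavoidable physical consequences is Landauer’s principle: erasing one bit of information must dissipate at least k_B · T · ln 2 of energy. (This is my addition as an illustration, not a summary of Aaronson’s argument.)

```python
import math

# Landauer's principle: erasing a bit dissipates at least k_B * T * ln(2).

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact under the 2019 SI)

def landauer_limit(temperature_kelvin):
    """Minimum energy in joules to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

e = landauer_limit(300.0)    # room temperature
print(f"{e:.3e} J per bit")  # ~2.9e-21 J
```

Tiny as that number is, it is a law-like floor: a bit cannot be disposed of without a measurable thermodynamic effect, which is one concrete reading of “information is physical.”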


You may also like Towards a computational theory of experience by Fekete and Edelman; their setup is the section 3.4 passage quoted above.


Aaronson’s post: https://www.scottaaronson.com/blog/?p=3327