The counterfactual response is typically viewed as inadequate in the face of triviality
arguments. However, when we count the number of automata permitted
under that response, we find it succeeds in limiting token physical systems to
realizing at most a vanishingly small fraction of the computational systems they
could realize if their causal structure could be ‘repurposed’ as needed. Therefore,
the counterfactual response is a prima facie promising reply to triviality
arguments.
Someone might object that this result nonetheless does not effectively handle
the metaphysical issues raised by those arguments. Specifically, an ‘absolutist’
regarding the goals of an account of computational realization might hold that
any satisfactory response to triviality arguments must reduce the number of
possibly-realized computational systems to one, or to some number close to
one. While the counterfactual response may eliminate the vast majority of
computational systems from consideration, in comparison to any small constant,
the number of remaining possibly-realized computational systems is still too high
(2^n).
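The counting claim above can be made concrete with a toy calculation. This is my own sketch, not from the paper: I take the 2^n figure from the quoted passage as the number of systems the counterfactual response leaves in play, and I assume, for illustration, that the pool of candidate computational systems is the set of input-free transition functions on n states, of which there are n^n. Under those assumptions, the surviving fraction shrinks rapidly with n, which is the sense in which the response rules out "almost all" interpretations while still leaving far more than the absolutist's target of roughly one:

```python
def remaining_fraction(n: int) -> float:
    """Toy model: fraction of candidate automata surviving the
    counterfactual response, assuming 2**n survivors (per the quoted
    passage) out of n**n total transition functions on n states."""
    remaining = 2 ** n   # possibly-realized systems left after the response
    total = n ** n       # all input-free automata on n states (my assumption)
    return remaining / total

for n in [2, 4, 8, 16]:
    print(n, remaining_fraction(n))
```

For n = 16 the surviving fraction is already 2^16 / 16^16 = 2^-48, vanishingly small in the paper's sense, even though 2^16 = 65536 survivors is nowhere near the absolutist's "one, or some number close to one".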
That seems like a useful approach. In particular, the author continues:
On the other hand, the argument suggests at least some computational hypotheses
regarding cognition are empirically substantive: by identifying types of computation characteristic of cognition (e.g., systematicity, perhaps), we limit potential cognitive devices to those whose causal structure includes these types of computation in the sets of possibilities they support.
This does seem to support the idea that progress can be made on this problem! On the other hand, the author's starting assumption is that we can treat a physical system as a computational (digital) automaton, which seems like a pretty big assumption.
I think this assumption may or may not turn out to be ultimately true (cf. Wolfram et al.), but given current theory it seems difficult to reduce actual physical systems to computational automata in practice. In particular, it seems difficult to apply this framework to (1) quantum systems (which all physical systems ultimately are), and (2) biological systems with messy levels of abstraction, such as the brain (which we'd want to be able to handle for the purposes of functionalism).
From a physics perspective, I wonder if we could figure out a way to feed in a bounded wavefunction and identify some least upper bound on the number of reasonable computational interpretations of the system. My instinct is that David Deutsch might be doing relevant work, but I'm not at all sure of this.