I lean towards functionalism and illusionism, but am quite skeptical of computationalism and computational functionalism, and I think it's important to distinguish them. Functionalism is, AFAIK, a fairly popular position among relevant experts, but computationalism much less so.
Under my favoured version of functionalism, the "functions" we should worry about are functional/causal roles with effects on things like attention and (dispositional or augmented hypothetical) externally directed behaviours, like approach, avoidance, beliefs, things we say (and how they are grounded through associations with real-world states). These seem much less up to interpretation than computed mathematical "functions" like "0001, 0001 → 0010". However, you can find simple versions of these functional/causal roles in many places if you squint, hence fuzziness.
Functionalism understood this way is still compatible with digital consciousness.
And I think we can use debunking arguments to support functionalism of some kind, but it could end up being a very fine-grained view, even the kind of view you propose here, with the necessary functional/causal roles at the level of fundamental physics. I doubt we need such fine-grained roles, though, and suspect similar debunking arguments can rule out their necessity. And I think those roles would be digitally simulatable in principle anyway.
It seems unlikely that a large share of our AI will be fine-grained simulations of biological brains like this, given the inefficiency of such simulations and the direction of AI development, but the absolute number could still be large.
Or, we could end up with a version of functionalism where nonphysical properties or nonphysical substances actually play parts in some necessary functional/causal roles. But again, I'm skeptical, and those roles may also be digitally (and purely physically) simulatable.