I concluded around the age of 20 that something more than pure physicalism was needed in order to account for consciousness. I don’t remember all the details, but I think even then I was struck by the idea that the conscious self has a kind of holistic unity that assemblages of particles don’t possess. One of my first ideas was a property dualism in which the physical substrate would be knots of electromagnetic or gravitational flux, related systematically by some psychophysical law to the intentional states which I saw as being the real “substance” of consciousness.
I mention this to convey my sympathy for Andrés’s idea that nontrivial topological structures provide the physical substrate of consciousness—that was my first idea too.
Years later, I had learned quantum mechanics and wondered if quantum entanglement could provide the complex ontological unities that consciousness seems to require. I worked for Stuart Hameroff for a year, and got to know his and Penrose’s ideas quite well. The problem with entanglement is that it potentially gives you too much unity—you need an ontology in which the parts of the conscious self are objectively tied together, but are also objectively disjoint from other selves. In principle, you can have that in a quantum theory, but it implies a dynamics or an ontology that is a little unusual compared with the standard ontological options.
In the end I took up the study of truly fundamental physics (quantum field theory, string theory), because that seemed like the surest path to the correct quantum ontology, and it looked like I would need that for the correct ontology of mind. Also by that time, AI had advanced far enough that I wanted to know the correct ontology of mind, not just from a desire to know the truth, but because it would be needed for AI alignment. An AI might have the right values but the wrong ontology of personhood.
What would I say these days? First of all, the nature of the structures that hypothetically bridge fundamental physics and conscious states is still wide open, because the mathematics of fundamental physics is still wide open. Topological objects, Hilbert space objects, they are definitely contenders to be involved, but so are many other kinds of structure. One really needs to look for a convergence between the mathematical ontology of fundamental physics, the phenomenological ontology of consciousness, and the biology and biophysics of the brain.
Of these, I think the second is somewhat neglected by scientifically minded philosophers. Thanks to David Chalmers, qualia are taken seriously, but philosophers seem inhibited about going beyond that, e.g. to talk of the self as something real. I suppose that sounds too much like a soul-substance; and their habits of thought also reduce everything to particles or to bits in atomistic interaction. Simple qualia, like points of color, are OK from this perspective, but larger wholes or Gestalts or complex unities run against their reductionist instincts. (Philosophical schools that explore consciousness without a materialistic prior, like Husserl’s transcendental phenomenology, are much less inhibited about noticing the complex ontology of mind and taking it seriously.)
On the other hand, many people think they can get ontological wholes through systems or bound structures, made of parts that interact persistently. To give a contemporary example, many people in the schools of thought associated with Michael Levin and Karl Friston seem to think this way. Given the absence of clear evidence of e.g. quantum biology playing a role in consciousness (more on this in a moment), a critique of the systems approach to consciousness would be useful for people like Andrés and myself, who want to argue strongly for a “substance” theory of mind (in which fundamental substrate matters), rather than an “information” theory of mind (in which consciousness has to supervene on coarse-grained state machines).
For me, the core arguments against substrate-indifferent, information-based theories of consciousness revolve around vagueness. There is a kind of ontological exactness that states of consciousness must possess, whereas coarse-grained informational or computational states inherently have some vagueness from a microphysical perspective (or else must be made exact in arbitrary ways). But there are a number of challenges to this argument—aren’t states of mind vague too? doesn’t functional output provide an exact criterion for categorizing a state?—which require precision to be countered. Perhaps it’s a shame that I never set out this argument as forcefully and successfully as Chalmers made the case for the importance of the “hard problem”.
This is relevant to the present article, regarding the hidden complexities of the neuronal state, which make biological neurons so much more complicated than their virtual artificial counterparts. If we’re talking about replacing neurons in a living organism with digital emulators, then all those processes that take place alongside the action potential may be pragmatically relevant—they may also need to be represented in your emulator—but they do not actually challenge the computational theory of mind. They only require that your simulation be a little more fine-grained than we used to believe necessary.
In any case, at least for quantum theories of mind to become widely convincing, there needs to be some evidence that quantum biology is playing a role in conscious cognition, evidence which I believe is still quite lacking. Hameroff’s microtubules are still by far the best candidate I have, for a biological locus of persistent coherent quantum states, but it seems difficult to get decisive evidence of coherence. The length of the debate about whether quantum coherence occurs in photosynthesis shows how difficult it can be.
Thanks so much for your thoughtful and detailed comment, Mitchell! It seems like we’re roughly on the same page regarding the various constraints that a successful theory of consciousness should meet, as well as the class of approaches that seem most promising. Let me just share some immediate reactions I had while reading your comment. :)
The problem with entanglement is that it potentially gives you too much unity
Potentially, yes (though my understanding of entanglement is limited). On the other hand, as Atai has pointed out, “most binding-appreciators strongly, strongly underestimate just how ‘insane’ it is that we can have any candidate solution to the binding problem *at all* [entanglement] in a universe that remotely resembles the universe described by classical physics.” (Here’s his full writeup, which I find very compelling.) This makes me think that maybe we will find that entanglement gives us just the right amount of unity (though the specific mechanism might turn out to be pretty elaborate). Do you have any resources on the point about “too much unity”? I’d love to learn more.
First of all, the nature of the structures that hypothetically bridge fundamental physics and conscious states is still wide open, because the mathematics of fundamental physics is still wide open.
Agree, and this is part of what motivates the argument outlined in the last paragraph of the section “Sufficiently detailed replicas/simulations” above.
For me, the core arguments against substrate-indifferent, information-based theories of consciousness revolve around vagueness.

Same for me. The paper “Are algorithms always arbitrary?” makes this case nicely.

But there are a number of challenges to this argument—aren’t states of mind vague too?
Maybe, yeah, depending on how we define a state of mind. But as you pointed out, “there is a kind of ontological exactness that states of consciousness must possess,” which I also agree with—namely, that at least some moments of experience seem to exhibit some amount of fundamentally integrated information / binding. So if an ontology can’t accommodate that, it’s doomed. I believe that’s the case for information-based theories, since any unity is interpreted by us arbitrarily, i.e. it’s epiphenomenal.
They only require that your simulation is a little more fine-grained than we used to believe necessary.
I think “a little more” is doing a lot of work here. If consciousness is a thing/substrate, then any emulation that abstracts away finer levels of granularity will, by definition, not be that substrate, and therefore not be conscious (unless maybe one commits to the claim that the deepest layer of reality is binary/bits, as pointed out above).
In any case, at least for quantum theories of mind to become widely convincing, there needs to be some evidence that quantum biology is playing a role in conscious cognition, evidence which I believe is still quite lacking. Hameroff’s microtubules are still by far the best candidate I have, for a biological locus of persistent coherent quantum states, but it seems difficult to get decisive evidence of coherence. The length of the debate about whether quantum coherence occurs in photosynthesis shows how difficult it can be.
I confess I still don’t fully understand why we need to definitively prove that coherence has to be sustained. QM plays a causal role in the brain because it plays a causal role in everything, as I was hoping to convey with my xenon example. But I’ll keep thinking!
I’ll add another candidate for quantum biology into the mix: the Posner molecule (also mentioned by Atai here).
Thanks again! :)