That the brain operates according to the known laws of physics doesn’t imply we can simulate it on a modern computer (assuming you mean a digital computer). A trivial example: certain quantum phenomena. Digital hardware doesn’t cut it.
Could you explain what you mean by this? I wasn’t aware that there were any quantum phenomena that could not be simulated on a digital computer. Where do the non-computable functions appear in quantum theory? (My background: I have a PhD in theoretical physics, which certainly doesn’t make me an expert on this question, but I’d be very surprised if this were true and I’d never heard about it! And I’d be a bit embarrassed if it were a fact considered ‘trivial’ and I was unaware of it!)
There are quantum processes that can’t be simulated efficiently on a digital computer, but that is a different question.
Thanks, and sorry, I could have been more precise there. I was thinking of the fact that some quantum systems would take something like the age of the universe to simulate on a digital computer. And as I hinted in my previous response, runtime complexity matters. I illustrated this point in a previous post, using the example of an optical setup that computes Fourier transforms at the speed of light, which you might find interesting. Curious if you have any thoughts!
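To make the complexity contrast concrete, here’s a small sketch (my own illustration, not from the post): an idealized lens computes a 2D Fourier transform in roughly the light-travel time through the optics, independent of resolution, while a digital FFT of an N × N image costs on the order of N² log N operations and slows down accordingly.

```python
# Rough scaling demo (illustrative only): digital FFT cost grows with input
# size, whereas an ideal optical Fourier transform takes constant
# (light-propagation) time regardless of how detailed the input pattern is.
import time
import numpy as np

for n in (256, 1024, 4096):
    img = np.random.rand(n, n)
    t0 = time.perf_counter()
    np.fft.fft2(img)  # O(n^2 log n) work on digital hardware
    dt = time.perf_counter() - t0
    print(f"N = {n:4d}: FFT took ~{dt * 1e3:8.2f} ms")
```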
Thanks for the link! I’ve just given your previous post a read. It’s great, and extremely well written. Thanks for sharing!
I have a few thoughts on it that I thought I’d share. I’d be interested to read a reply, but don’t worry if that would be too time-consuming.
I agree that your laser example is a good response to the “replace one neuron at a time” argument, and that at least in the context of that argument, computational complexity does matter. You can’t replace components of a brain with simulated parts if the simulated parts can’t keep up with the rest. If neurons are not individually replaceable, or at least not individually replaceable with something that can match the speed of a real neuron (and I accept this seems possible), then I agree that the ‘replace one neuron at a time’ thought experiment fails.
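Just to spell out the constraint I mean, with made-up numbers of my own:

```python
# Toy illustration (all figures are rough assumptions): a drop-in replacement
# neuron has a hard real-time budget set by the biology around it.
bio_latency_s = 1e-3          # a real neuron responds on the order of 1 ms
sim_cost_per_spike_s = 5e-2   # hypothetical cost of simulating one spike

ratio = sim_cost_per_spike_s / bio_latency_s
print(f"Replacement runs {ratio:.0f}x too slow to keep up with the tissue around it")
# If this ratio exceeds 1, the hybrid brain's timing diverges from the
# original, and the gradual-replacement argument loses its force.
```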
Computational complexity still seems pretty irrelevant for the other thought experiments: whether we can simulate a whole brain on a computer, and whether we can simulate a brain with a pencil and paper. Sure, it’s going to take a very long time to get results, but why does that matter? It’s a thought experiment anyway.
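To put a number on ‘a very long time’, here’s my own back-of-envelope, with all figures being rough order-of-magnitude assumptions:

```python
# Back-of-envelope: how long would pencil-and-paper brain simulation take?
# All numbers are rough order-of-magnitude assumptions.
synapses = 1e14               # commonly cited count for a human brain
steps_per_sim_second = 1e3    # one update per millisecond of simulated time
human_ops_per_second = 0.1    # one pencil-and-paper operation per 10 seconds

ops_per_sim_second = synapses * steps_per_sim_second
wall_seconds = ops_per_sim_second / human_ops_per_second
years = wall_seconds / (3600 * 24 * 365)
print(f"~{years:.0e} years of scribbling per simulated second")
# ~3e10 years per simulated second, a couple of ages of the universe,
# which is exactly why this only ever works as a thought experiment.
```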
I agree with you that the answer to the question “is this system conscious?” should be observer-independent. But I didn’t really follow why this belief is incompatible with functionalism.
I like the ‘replace one neuron at a time’ thought experiment, but accept it has flaws. For me, it’s the fact that we could in principle simulate a brain on a digital computer and have it behave identically that convinces me of functionalism. I can’t grok how some system could behave identically but its thoughts not ‘exist’.
I really appreciate your feedback and your questions! 🙏
I’d love to reply in detail, but it would take me a while. 😅 Maybe two quick points, though:
On observer independence: The main challenge that computational functionalism faces (IMO) is that there’s no principled way to say “THIS is the (observer-independent) system I posit to be conscious”, because algorithms, simulations, etc. don’t have clearly defined boundaries. It’s up to us (as conscious agents) to arbitrarily determine those boundaries, so anything goes! The section “Is simulation an intrinsic property?” in this post sums it up quite neatly, I think. Field topology, as well as, say, entanglement networks, does give us observer-independent boundaries.
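To make the worry concrete, here’s a toy sketch of my own (the mapping trick is in the spirit of Putnam-style implementation arguments): whether a physical trajectory ‘implements’ a given computation depends on an interpretation map, and we observers are free to pick that map after the fact.

```python
# Toy version (my own) of the "anything goes" worry about simulation boundaries.
physical_trajectory = [0, 1, 2, 3, 4, 5]  # a clock ticking; about as dumb as physics gets

# Target computation: repeatedly applying NOT to input 1 gives 1, 0, 1, 0, ...
target_run = [1, 0, 1, 0, 1, 0]

# Choose the interpretation map *after the fact* so the clock "computes" it.
interpretation = dict(zip(physical_trajectory, target_run))
decoded = [interpretation[state] for state in physical_trajectory]
assert decoded == target_run

print("Under this observer-chosen map, the clock 'implements' the NOT-gate run.")
# Nothing intrinsic to the clock singled out this computation; the map did.
# Field topology or entanglement structure is appealing precisely because it
# would fix system boundaries without an observer-supplied map.
```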
On the simulation behaving identically to the brain: Here I think one could reasonably ask: What if, in order for the simulation to behave identically, we had to simulate the brain even at the smallest physical scale? Many people think this isn’t necessary and that the “neurons as digital switches” abstraction is enough. But say we actually had to simulate EM field phenomena, quantum phenomena, etc. Then I think runtime complexity matters, since maybe some parts of the brain can be simulated easily while others take millions of years. Can one bootstrap a coherent simulation from that? Now imagine trying to simulate multiple brains interacting with each other, running physics experiments, etc. Can one set up the simulation such that, e.g., they all measure the speed of light to be the same? Or otherwise always get the same experimental results? I kind of doubt it. But regardless, the previous point about observer dependence would still stand.
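Here’s a toy model (mine, with made-up per-step costs) of the pacing problem: if the simulation is to stay coherent, simulated time can only advance once every module has finished its step, so the whole thing crawls at the pace of the hardest-to-simulate physics.

```python
# Toy lockstep scheduler (illustrative costs only): coherence requires a
# barrier, so global simulated time advances at the pace of the slowest module.
step_cost = {
    "neurons_as_switches": 1.0,   # cheap abstraction level
    "EM_field_dynamics": 1e6,     # far more expensive
    "quantum_effects": 1e12,      # astronomically expensive
}

sim_steps = 10
wall_clock = 0.0
for _ in range(sim_steps):
    # No module's clock may advance until all have computed this timestep.
    wall_clock += max(step_cost.values())

print(f"{sim_steps} coherent timesteps cost ~{wall_clock:.1e} wall-clock units,")
print("dominated entirely by the hardest-to-simulate physics.")
```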
Thanks for the reply, this definitely helps!