Some other arguments that push in favour of functionalism and the consciousness of simulated brains (including the China brain and digital minds) and of brains with artificial neurons:
We may ourselves be simulated in a similar way without knowing it, if our entire reality is also simulated. We wouldn’t necessarily have access to what the simulation is run on.
In a simulated brain and the conscious biological brain it simulates, introspection would give the brains the same beliefs about phenomenal properties and qualia, because it’s only sensitive to the causal/functional structure at a given level of detail, and those details are by design/assumption preserved under simulation. If the biological brain is phenomenally conscious but the simulated brain is not, then it’s a surprising coincidence that the resulting beliefs about phenomenal consciousness are accurate in the biological brain but not in the simulated brain. Introspection doesn’t seem to give the biological brain any more reason to believe in its own phenomenal consciousness than it gives the simulated brain to believe in its own, because introspection is only sensitive to causal/functional details common to both.[1]
It’s hard for me to imagine a compelling explanation of our consciousness that doesn’t extend to simulated brains, including the China brain and digital minds. Theories out there now don’t seem on track to address the hard problem, and this and other reasons (like above) incline me to dissolve it and accept illusionism about phenomenal properties/consciousness. Illusionism is generally functionalist, and I don’t see how an illusionist theory would deny the consciousness of the China brain and digital simulations of brains.
We may ourselves be simulated in a similar way without knowing it, if our entire reality is also simulated. We wouldn’t necessarily have access to what the simulation is run on.
It seems weird to meaningfully update in favour of some concrete view on the basis that something might be true but that:
1. we have no evidence for it, and
2. if it is true then everything we know about the universe is equally undermined.
I agree there is something a bit weird about it, but I’m not sure I endorse that reaction. This doesn’t seem so different from p-zombies, and probably some moral thought experiments.
I don’t think it’s true that everything we know about the universe would be equally undermined. Most things wouldn’t be undermined at all or at worst would need to be slightly reinterpreted. Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so would anything that follows from it. There’s just more stuff outside our universe.
I guess you can imagine short simulations where all our understanding of physics is actually just implanted memories and fabricated records. But in doing so, you’re throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable. Longer simulations can preserve that causal structure.
This doesn’t seem so different from p-zombies, and probably some moral thought experiments.
I’m not sure what you mean here. That the simulation argument doesn’t seem different from those? Or that the argument that ‘we have no evidence of their existence and therefore shouldn’t update on speculation about them’ is comparable to what I’m saying about the simulation hypothesis?
If the latter, fwiw, I feel the same way about p-zombies and (other) thought experiments. They are a terrible methodology for reasoning about anything, very occasionally the only option we can think of, but philosophers don’t feel nearly enough urgency about finding alternatives to move to.
Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so would anything that follows from it. There’s just more stuff outside our universe.
I don’t see how this would allow us to update on anything based on speculation about the ‘more stuff’. Yeah, we might choose to presume our pocket simulation will continue to behave as it has, but we don’t get to then say ‘there’s some class of matter other than our own simulated matter which generates consciousness, therefore consciousness is substrate independent’.
As you say in your other comment, there’s probably some minimal level of substrate independence that non-solipsists have to accept, but that turns it into an empirical question (as it should be) - so an imagined metaverse gives us no reason to change our view on how substrate-independent consciousness is.
in doing so, you’re throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable
This seems like an argument from sadness. What we would lose by imagining some outcomes shouldn’t affect our overall epistemics.
[1] This is essentially the coincidence argument for illusionism in Chalmers, 2018.