Yes, it’s literally a physical difference, but, by hypothesis, it had no influence on anything else in the brain at the time, and your behaviour and reports would be the same. Empty space (or a disconnected or differently connected neuron) could play the same non-firing neuron role in the actual sequence of events. Of course, empty space couldn’t also play the firing neuron role in counterfactuals (and a differently connected neuron wouldn’t play identical roles across counterfactuals), but why would what didn’t happen matter?
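To make this concrete, here is a minimal toy sketch (my own illustration, not a model of real neurons): a tiny boolean network in which removing a never-firing neuron leaves the actual sequence of events untouched, but changes what would have happened under counterfactual inputs.

```python
# Toy boolean "neural network": a neuron fires at step t+1 iff any of its
# presynaptic neurons fired at step t. All names here are hypothetical.

def run(connections, initially_firing, steps):
    """Return the sequence of sets of firing neurons over time."""
    trace = [set(initially_firing)]
    for _ in range(steps):
        fired = trace[-1]
        trace.append({post for pre in fired for post in connections.get(pre, [])})
    return trace

# A feeds B, B feeds D; C would also feed B, but never fires in the actual run.
connections = {"A": ["B"], "B": ["D"], "C": ["B"]}
actual = run(connections, {"A"}, steps=2)

# Replace the never-firing neuron C with "empty space".
pruned = {k: v for k, v in connections.items() if k != "C"}

# The actual sequence of events is identical without C...
assert run(pruned, {"A"}, steps=2) == actual
# ...but the counterfactual in which C had fired now unfolds differently.
assert run(pruned, {"C"}, steps=2) != run(connections, {"C"}, steps=2)
```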
I can get the intuition in your case. Here is another case with the same logic in which I don’t have the corresponding intuition:
Suppose that instead of just removing all non-firing neurons, we also remove all neurons both before they are triggered and after they trigger the next neurons in the sequence. E.g. your brain consists of neurons that magically pop into existence just in time to have the right effect on the next neurons that pop into existence in the sequence, and then they disappear back into nothing. We could also go a level down and have your brain consist only of atoms that briefly pop into existence in time to interact with the next atoms.
Your behavior and introspective reports wouldn’t change—do you think you’d still be conscious?
If the signals are still there to ensure causal influence, I think I would still be conscious like normal. The argument is exactly the same: whenever something is inactive and not affecting other things, it doesn’t need to be there at all.
We could also go a level down and have your brain consist only in atoms that briefly pop into existence in time to interact with the next atoms.
This is getting close to the problem I’m grappling with, once we step away from neurons and look at individual particles (or atoms). First, I could imagine individual atoms acting like neurons to implement a human-like neural network in a counterfactually robust way, too, and that would very likely be conscious. The atoms could literally pass photons or electrons to one another. Or maybe the signals would be (changes in) the elementary forces (or gravity?) they exert. If, during a particular sequence of events, whatever happened to be inactive also happened to disappear, then this shouldn’t make a difference.
But if you start from something that was never counterfactually robust in the first place, which I think is your intention, and its events just happen to match a conscious sequence of activity in a human brain, then it seems like it probably wouldn’t be conscious (although this is less counterintuitive to me than accepting that counterfactual robustness matters in a system that is usually counterfactually robust). Rejecting counterfactual robustness (together with my other views, and assuming things are arranged and mapped correctly) seems to imply that this should be conscious, and the consequences seem crazy if this turns out to be morally relevant.
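To illustrate what “never counterfactually robust in the first place” means here, a minimal hypothetical contrast (my own toy labels, nobody’s actual proposal): a system that computes its responses versus one that merely replays a recorded trace. On the one actual input their event sequences match exactly; only the computing system would have behaved correctly on any other input.

```python
# Hypothetical toy contrast between a computing system and a pure replay.

def computing_system(stimulus):
    """Responds appropriately to whatever input it actually receives."""
    return ["fire" if s else "rest" for s in stimulus]

recorded_trace = computing_system([1, 0, 1])  # the one actual run

def replay_system(stimulus):
    """Ignores its input and plays back the recorded trace."""
    return recorded_trace

# Indistinguishable on the actual sequence of events...
assert replay_system([1, 0, 1]) == computing_system([1, 0, 1])
# ...but the replay was never counterfactually robust.
assert replay_system([0, 1, 1]) != computing_system([0, 1, 1])
```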
It seems like counterfactual robustness might matter for consciousness in systems that aren’t normally conscious, but very likely doesn’t matter in systems that are normally conscious, and that asymmetry doesn’t make much sense to me.