I need to think about this in more detail, but here are some rough ideas, mostly thinking out loud (and perhaps not worth your time to go through these):
1. One possibility is that, because we only care about when the neurons are firing anyway if we reject counterfactual robustness, we don’t even need to represent when they’re not firing with particle properties. Then the signal from one neuron to the next can just be represented by the force the corresponding particle exerts on the next corresponding particle. However, this way, the force doesn’t seem responsible for the “firing” state (i.e. that Y exerts a force on Z is not because some earlier particle X exerted a force on Y before that), so this probably doesn’t work.
2. We can just pick any specific property, and pick a threshold between firing and non-firing that puts every particle well above the threshold, into firing. But again, the force wouldn’t be responsible for the state being above the threshold.
3. We can use a particle’s position, velocity, acceleration, energy, net force, or whatever to encode whether or not a neuron is firing, but then we only care about when the neurons are firing anyway, and we could give each individual particle independent freedom in which quantity or vector to use, which threshold to use, which side of the threshold counts as a neuron firing, etc. If we use all of those independent degrees of freedom, or even just one independent degree of freedom per particle, then this does seem pretty arbitrary and gerrymandered. But we can also imagine replacing each individual neuron in a full typical human brain with a different kind of artificial neuron (or particle) whose firing is encoded by a different kind of degree of freedom, while still preserving counterfactual robustness, and it could (I’m not sure) look the same once we get rid of all of the inactive neurons — so is it really gerrymandered?
4. If we only have a number of degrees of freedom much smaller than the number of times neurons fired, so that we need to pick things for all the particles at once (a quantity or vector, a uniform threshold to separate firing from non-firing, the same side of the threshold for all) rather than independently, then it doesn’t seem very gerrymandered. But you can still get a huge number of degrees of freedom from choices that should probably be allowed in our interpreting the particles’ activity as conscious:
    - which particles to use (given n particles, there are (n choose k) subsets of k particles to choose from)
    - which moment each particle counts as “firing”, and exactly which neuron firing event it gets mapped to.

But for 4, I still don’t know what “signals” to use, so that they are “responsible” for the states. Maybe any reasonable signal that relates to states in the right way will make it incredibly unlikely for walls to be conscious.
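To get a feel for the size of that first degree of freedom under point 4, here is a quick sketch; the particle and neuron counts are made-up illustrative values, not figures from the discussion:

```python
# Rough illustration: the number of ways to choose k particles out of n
# grows combinatorially, so even a modest object offers an astronomical
# number of candidate particle subsets to map onto firing neurons.
# n and k below are assumed illustrative values.
from math import comb

n = 10**6  # particles available (assumed)
k = 10**3  # particles needed to stand in for the firing neurons (assumed)

# (n choose k) = n! / (k! * (n - k)!)
subsets = comb(n, k)
print(f"(n choose k) has {len(str(subsets))} decimal digits")
```

Even with these modest numbers the count has thousands of digits, which is why how much of this interpretive freedom we allow matters so much for whether a wall’s particles can be mapped onto a brain’s firing pattern.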