For n≫c, the number of directed graphs with n vertices labelled 1,...,n and at most c directed edges from any vertex (and no multiple edges going the same way between the same pair of vertices) has an upper bound of
$$\left(\sum_{k=0}^{c} \binom{n}{k}\right)^n \leq \left(c\binom{n}{c} + 1\right)^n$$
The number of directed acyclic graphs, assuming the vertices are topologically sorted by their labels, is smaller, though, with an upper bound like the following, since each vertex i back-connects to at most c of the previous i−1 vertices:

$$\prod_{i=1}^{n} \sum_{k=0}^{c} \binom{i-1}{k}$$

But even 1 million choose 1000 is a huge number, about 10^3432, while the number of atoms in the observable universe is only within a few orders of magnitude of 10^80, far smaller. A very loose upper bound is 10^202681, for at most 100 trillion neuron firings (1,000 firings per second per neuron × 100 billion neurons in the human brain) and at most 20,000 connections per neuron (the average in the human brain is 1,000–7,000 according to this page, and up to 15,000 for a given neuron here).
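These magnitudes are easy to sanity-check numerically. Here is a minimal Python sketch using the log-gamma function to handle numbers far too large to represent directly (the specific n and c are just the figures quoted above):

```python
from math import lgamma, log

LOG10_E = 1 / log(10)

def log10_factorial(n: int) -> float:
    """log10(n!), using lgamma(n + 1) = ln(n!)."""
    return lgamma(n + 1) * LOG10_E

def log10_comb(n: int, k: int) -> float:
    """log10 of (n choose k), without materializing the huge integer."""
    return log10_factorial(n) - log10_factorial(k) - log10_factorial(n - k)

# 1 million choose 1000:
print(log10_comb(10**6, 10**3))  # ~3432, i.e. about 10^3432

# The dominant term of the loose bound, c * C(n, c) + 1, with
# n = 10^14 firings and c = 20,000 connections per neuron:
n, c = 10**14, 20_000
print(log(c) * LOG10_E + log10_comb(n, c))  # ~2.0e5, same ballpark as 10^202681
```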
I think the question is whether or not we can find huge numbers of degrees of freedom and events in mundane places, given the flexibility we have to interpret different events, like particle movements, as neurons firing and sending signals (via elementary forces between particles).
For example, if all n particles in a group continuously move and continuously exert force on one another, there are n! ways to order those particles; we can use one movement per particle to represent a neuron firing, and use (the changes in) the forces exerted between particles to represent signals between neurons. 1 million! is about 10^5565709. Maybe these numbers don’t actually matter much, and we can pick any chronological ordering of particle movements, for at most 100 trillion mutually interacting particles, to represent each time a neuron fired, and map each signal from one neuron to the next to (a change in) the force exerted between the corresponding particles.
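As a quick check on that factorial (a minimal sketch; lgamma(n + 1) gives ln(n!) without computing the factorial itself):

```python
from math import lgamma, log

# log10 of 1 million factorial, via lgamma(n + 1) = ln(n!).
print(lgamma(10**6 + 1) / log(10))  # ~5565708.9, i.e. 1 million! ~ 10^5565709
```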
However, this ignores all but one force exerted on each particle at a time (and there are at least n−1 such forces, by hypothesis), and so the net forces and net movements of the particles aren’t explained this way. And maybe this is too classical (non-quantum) a picture, anyway.
But we can find many ordered subsets of merely trillions of interacting particles, effectively signaling each other with forces and small changes to their positions.
In brains, patterns of neural activity stimulate further patterns of neural activity. We can abstract this out into a system of state changes and treat conscious episodes as patterns of state changes. Then if we can find similar causal networks of state changes in the wall, we might have reason to think they are conscious as well. Is this the idea? If so, what sort of states are you imagining to change in the wall? Is it the precise configurations of particles? I expect a lot of the states you’ll identify to fulfill the relevant patterns will be arbitrary or gerrymandered. That might be an important difference that should make us hesitate before ascribing conscious experiences to walls.
I need to think about this in more detail, but here are some rough ideas, mostly thinking out loud (and perhaps not worth your time to go through these):
1. One possibility is that, because we only care about when the neurons are firing if we reject counterfactual robustness anyway, we don’t even need to represent when they’re not firing with particle properties. Then the signals from one neuron to the next can just be represented by the force exerted by the corresponding particle on the next corresponding particle. However, this way, the force doesn’t seem responsible for the “firing” state (i.e. that Y exerts a force on Z is not because some X exerted a force on Y before that), so this probably doesn’t work.
2. We can just pick any specific property, and pick a threshold between firing and non-firing that puts every particle well above the threshold, into firing. But again, the force wouldn’t be responsible for the state being above the threshold.
3. We can use a particle’s position, velocity, acceleration, energy, net force, or whatever else as encoding whether or not a neuron is firing; but then we only care about when the neurons are firing anyway, and we could have independent freedom, for each individual particle, to decide which quantity or vector to use, which threshold to use, which side of the threshold counts as a neuron firing, etc. If we use all of those independent degrees of freedom, or even just one independent degree of freedom per particle, then this does seem pretty arbitrary and gerrymandered. But we can also imagine replacing each individual neuron in a full, typical human brain with a different kind of artificial neuron (or particle) whose firing is realized by a different kind of degree of freedom, while still preserving counterfactual robustness, and it could (I’m not sure) look the same once we get rid of all of the inactive neurons, so is it really gerrymandered?
4. If we only have a number of degrees of freedom much smaller than the number of times neurons fired, so that we need to pick things for all the particles at once (a quantity or vector, a uniform threshold to separate firing from non-firing, the same side of the threshold for all), and not independently, then it doesn’t seem very gerrymandered, but you can still get a huge number of degrees of freedom from choices we should probably allow anyway when interpreting neural activity as conscious (see the sketch after this list):
- which particles to use (given n particles, we have $\binom{n}{k}$ subsets of k particles to choose from)
- which moment each particle counts as “firing”, and exactly which neuron firing event it gets mapped to.
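To get a feel for how large the interpretation space from those last two choices alone can be, here is a minimal sketch; the particle and firing counts are hypothetical, purely for illustration:

```python
from math import lgamma, log

def log10_assignments(n: int, k: int) -> float:
    """log10 of C(n, k) * k! = n! / (n - k)!: choose k of n particles,
    then map each chosen particle to one of k neuron-firing events."""
    return (lgamma(n + 1) - lgamma(n - k + 1)) / log(10)

# Hypothetical: 10^25 particles in a wall-sized object, mapped onto the
# 10^14 neuron firings used earlier.
print(log10_assignments(10**25, 10**14))  # ~2.5e15, i.e. about 10^(2.5 quadrillion)
```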
But for 4, I still don’t know what “signals” to use, so that they are “responsible” for the states. Maybe any reasonable signal that relates to states in the right way will make it incredibly unlikely for walls to be conscious.
Yes, it’s literally a physical difference, but, by hypothesis, it had no influence on anything else in the brain at the time, and your behaviour and reports would be the same. Empty space (or a disconnected or differently connected neuron) could play the same non-firing neuron role in the actual sequence of events. Of course, empty space couldn’t also play the firing neuron role in counterfactuals (and a differently connected neuron wouldn’t play identical roles across counterfactuals), but why would what didn’t happen matter?
I can get your intuition about your case. Here is another case with the same logic, in which I don’t have the corresponding intuition:
Suppose that instead of just removing all non-firing neurons, we also remove all neurons both before they are triggered and after they trigger the next neurons in the sequence. E.g. your brain consists of neurons that magically pop into existence just in time to have the right effect on the next neurons that pop into existence in the sequence, and then they disappear back into nothing. We could also go a level down and have your brain consist only of atoms that briefly pop into existence in time to interact with the next atoms.
Your behavior and introspective reports wouldn’t change—do you think you’d still be conscious?
If the signals are still there to ensure causal influence, I think I would still be conscious like normal. The argument is exactly the same: whenever something is inactive and not affecting other things, it doesn’t need to be there at all.
We could also go a level down and have your brain consist only of atoms that briefly pop into existence in time to interact with the next atoms.
This is getting close to the problem I’m grappling with, once we step away from neurons and look at individual particles (or atoms). First, I could imagine individual atoms acting like neurons to implement a human-like neural network in a counterfactually robust way, too, and that would very likely be conscious. The atoms could literally pass photons or electrons to one another. Or maybe the signals would be their (changes in the) exertion of elementary forces (or gravity?). If during a particular sequence of events, whenever something happened to be inactive, it happened to disappear, then this shouldn’t make a difference.
But if you start from something that was never counterfactually robust in the first place, which I think is your intention, and its events just happen to match a conscious sequence of activity in a human brain, then it seems like it probably wouldn’t be conscious (although this is less unintuitive to me than accepting that counterfactual robustness matters in a system that is usually counterfactually robust). Rejecting counterfactual robustness (together with my other views, and assuming things are arranged and mapped correctly) seems to imply that it should be conscious, and the consequences seem crazy if this turns out to be morally relevant.
It seems like counterfactual robustness might matter for consciousness in systems that aren’t normally conscious but very likely doesn’t matter in systems that are normally conscious, which doesn’t make much sense to me.