The bomb trap example is very interesting! You can’t be counterfactually robust if you’re dead. Instead of bombs, we could also use sudden overwhelming sensory inputs, in whichever modality is fastest, to interrupt other processing. However, one objection could be that there still exist some counterfactuals (for the same unmodified brain) where the person does what they’re supposed to. Objects we normally think of as unconscious lack even this weaker kind of counterfactual robustness: they would need to be altered into different systems to do what they’re supposed to in order to be conscious.
But pointing out that path takes a lot of information which might only be present inside the pointer, so I think it’s possible that we’re effectively “sneaking in” the person via our pointer.
Interesting. Do you think that if someone kept the mapping between the states and “firing” and “non-firing” neurons, and translated the events as they happened (on paper, automatically on a computer, or in their own head), this would generate (further) consciousness?
I often also use Conway’s Game of Life when I think about this issue. In the Game of Life, bits are often encoded as the presence or absence of a glider. This means that causality has to be able to travel through the void of dead cells, so that the absence of a glider can be causal. This gives a pretty good argument that every cell, even a dead one, has some causal effect on its neighbours.
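For concreteness, here is a minimal sketch of the update rule (plain Python with toroidal edges; a toy implementation of my own, not anything from the OP). The point is visible right in the neighbour count: a dead neighbour contributes its state (0) to the sum just as a live one contributes 1, so every neighbour formally enters every update.

```python
def step(grid):
    """One Game of Life step on a toroidal grid of 0s (dead) and 1s (alive)."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbours(r, c):
        # The sum ranges over all eight neighbours, dead or alive: a dead
        # neighbour contributes its state (0) just as a live one contributes 1.
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    return [
        [1 if live_neighbours(r, c) == 3
             or (grid[r][c] == 1 and live_neighbours(r, c) == 2)
         else 0
         for c in range(cols)]
        for r in range(rows)
    ]

# A fully dead board maps to a fully dead board: nothing "happens", even
# though every cell's (dead) state formally entered every cell's update.
dead = [[0] * 5 for _ in range(5)]
assert step(dead) == dead
```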
But if we allow that, we can suddenly draw effectively arbitrary causal arrows inside a completely dead board! So I don’t think that can be right, either.
Doesn’t counting the causal effects of dead cells on dead cells, especially on a totally dead board, bring back counterfactual robustness, though?
To expand a bit on the OP, the way I’ve tentatively been thinking about causality as the basis for consciousness is more like active physical signalling than like full counterfactuals, to avoid counterfactual robustness (and to avoid counting static objects as conscious, though there are probably plenty of other ways to avoid that). On this view, dead cells don’t send signals to other cells, and there’s no signalling in a dead board or a dead brain, so there’s no consciousness in them (at the cell level) either. What I care about for a neuron (and I’m not sure how well this translates to Conway’s Game of Life) is whether it actually just received a signal, whether it actually just “fired”, and whether removing/killing it would have prevented a signal it did actually send to another neuron. In this way, its presence had to make a non-trivial actual difference compared to the counterfactual where it’s manipulated to be gone/dead.
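One way to operationalize that criterion is a toy sketch, assuming simple threshold units (all names, weights, and thresholds below are hypothetical illustrations, not a model of real neurons): a unit passes the test at a given moment only if (a) it actually fired and (b) deleting it would flip the next-step state of something downstream.

```python
def next_state(fired, weights, thresholds):
    """fired: set of unit ids that just fired.
       weights[i][j]: connection strength from i to j (toy values).
       thresholds[j]: input needed for j to fire next step.
       Returns the set of units that fire on the next step."""
    out = set()
    for j in thresholds:
        total = sum(weights.get(i, {}).get(j, 0) for i in fired)
        if total >= thresholds[j]:
            out.add(j)
    return out

def makes_actual_difference(i, fired, weights, thresholds):
    if i not in fired:
        return False  # condition (a): a silent unit fails outright
    # Condition (b): intervene by deleting i and see if anything downstream
    # changes relative to what actually happened.
    with_i = next_state(fired, weights, thresholds)
    without_i = next_state(fired - {i}, weights, thresholds)
    return with_i != without_i

# Both a and b fired; c needs total input 2, so each sender is needed.
weights = {"a": {"c": 1}, "b": {"c": 1}}
assert makes_actual_difference("a", {"a", "b"}, weights, {"c": 2})
# If c's threshold is only 1, a's firing is redundant (b alone suffices),
# so a fails the test even though it actually fired.
assert not makes_actual_difference("a", {"a", "b"}, weights, {"c": 1})
```

The second case shows a limitation worth flagging: under overdetermination (two redundant senders), neither sender individually passes the difference-making test, even though both actually signalled.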
On your approach and examples, it certainly seems like information is correlated with the stuff that matters in some way. It would be interesting to see this explored further. Have you found any similar theories in the literature?
Another related point is that while shadows (voids) and light spots can “move” faster than light, no actual particles are moving faster than light, and information can still only travel at most at the speed of light.