The bomb trap example is very interesting! Can't be counterfactually robust if you're dead. Instead of bombs, we could also just use sudden overwhelming sensory inputs in whichever modality they're fastest in to interrupt other processing. However, one objection could be that there exist some counterfactuals (for the same unmodified brain) where the person does what they're supposed to. Objects we normally think of as unconscious don't even have this weaker kind of counterfactual robustness: they would need to be altered into different systems to do what they're supposed to in order to be conscious.
But pointing out that path takes a lot of information, which might only be present inside the pointer, so I think it's possible that we're effectively "sneaking in" the person via our pointer.
Interesting. Do you think that if someone kept the mapping between the states and "firing" and "non-firing" neurons, and translated the events as they were happening (on paper, automatically on a computer, or in their own heads), this would generate (further) consciousness?
I often also use Conway's Game of Life when I think about this issue. In the Game of Life, bits are often encoded as the presence or absence of a glider. This means that causality has to be able to travel across the void of dead cells, so that the absence of a glider can be causal. This gives a pretty good argument that every cell has some causal effect on its neighbours, even dead ones.
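For concreteness, here's a minimal sketch of that encoding (the coordinates and function names are my own illustrative choices, not from any particular library): a "1" bit is a glider that propagates across dead cells, while a "0" is a fully dead region that "does" nothing at all, yet still carries information to whatever reads it.

```python
from collections import Counter

def step(live):
    """One Game of Life update; `live` is a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 live neighbours,
    # or 2 live neighbours and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider; after 4 steps it reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

board = glider
for _ in range(4):
    board = step(board)
shifted = {(x + 1, y + 1) for (x, y) in glider}
assert board == shifted  # the "1" bit travelled through dead cells

# A fully dead region stays dead: the "0" bit consists of nothing
# happening, which is exactly what makes its causal status puzzling.
assert step(set()) == set()
```

The puzzle in the comment is visible here: the "0" case involves no activity at all, so any causal story about it has to run through what the dead cells *didn't* do.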
But if we allow that, we can suddenly draw effectively arbitrary causal arrows inside a completely dead board! So I don't think that can be right, either.
Doesn't counting the causal effects of dead cells on dead cells, especially on a totally dead board, bring us back to counterfactual robustness, though?
To expand a bit on the OP, the way I've tentatively been thinking about causality as the basis for consciousness is more like active physical signalling than like full counterfactuals, to avoid counterfactual robustness (and counting static objects as conscious, though there are probably plenty of other ways to avoid that). On this view, dead cells don't send signals to other cells, and there's no signalling in a dead board or a dead brain, so there's no consciousness in them (at the cell level) either. What I care about for a neuron (and I'm not sure how well this translates to Conway's Game of Life) is whether it actually just received a signal, whether it actually just "fired", and whether removing/killing it would have prevented a signal it actually did send to another neuron. In this way, its presence had to actually make a non-trivial difference compared to the counterfactual where it's manipulated to be gone/dead.
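A toy sketch of that criterion, under my own simplifying assumptions (a discrete-time threshold network; all names are hypothetical): a node counts as "actively signalling" only if it actually fired in the real run, and the run with it removed differs from the actual run. Comparing whole histories is a coarse proxy for "a signal it actually sent would have been prevented", but it captures the asymmetry with silent nodes:

```python
def run(edges, fired0, steps, threshold=1, removed=frozenset()):
    """Simulate; `edges` maps node -> list of targets. A node fires when
    it received at least `threshold` signals on the previous step.
    Returns the list of fired-sets, one per time step."""
    history = [set(fired0) - removed]
    for _ in range(steps):
        inputs = {}
        for n in history[-1]:
            for tgt in edges.get(n, ()):
                if tgt not in removed:
                    inputs[tgt] = inputs.get(tgt, 0) + 1
        history.append({n for n, k in inputs.items() if k >= threshold})
    return history

def made_a_difference(edges, fired0, steps, neuron):
    """Did `neuron` actually fire, and did its presence change what
    actually happened, versus the counterfactual where it's removed?"""
    actual = run(edges, fired0, steps)
    if not any(neuron in fired for fired in actual):
        return False  # never actually fired: no active signalling
    ablated = run(edges, fired0, steps, removed={neuron})
    return actual != ablated

# A hypothetical three-neuron chain: A -> B -> C.
edges = {"A": ["B"], "B": ["C"], "C": []}
print(made_a_difference(edges, {"A"}, 3, "B"))  # True: B relayed A's signal
print(made_a_difference(edges, {"C"}, 3, "B"))  # False: B stayed silent
```

On this test, a neuron that merely *would have* fired under other inputs (counterfactual robustness) contributes nothing, because only signals actually sent in the real run are compared.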
On your approach and examples, it does certainly seem like information is correlated with the stuff that matters in some way. It would be interesting to see this explored further. Have you found any similar theories in the literature?
Another related point is that while shadows (voids) and light spots can "move" faster than light, no actual particles are moving faster than light, and information can still only travel at most at the speed of light.