I see a huge gap between the optimized and organized rhythm of 302 neurons acting in concert with the rest of the body, on the one hand, and roughly random particle movements on the other. I think there’s even a big gap between the optimized behavior of a bacterium versus the unoptimized behavior of individual particles (except insofar as we see particles themselves as optimizing for a lowest-energy configuration, etc.).
If it’s true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousands (or more?) of artificial neurons. Perhaps we could build a neural-network RL agent to mimic the learning abilities of C. elegans, but that would likely leave out lots of other cool stuff that those 302 neurons are doing that we haven’t discovered yet. Our RL neural network might be like trying to replace the complex nutrition of real foods with synthetic calories and a multivitamin.
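To get a feel for the numbers, here’s a rough back-of-envelope sketch in Python. The per-neuron multipliers are illustrative assumptions about how many artificial units it might take to match one biological neuron, not measured values from any particular study.

```python
# Rough back-of-envelope sketch: if each biological neuron is roughly as
# expressive as a small multi-unit artificial network, how many artificial
# units would C. elegans' 302 neurons correspond to? The per-neuron
# multipliers below are illustrative assumptions, not measured values.

biological_neurons = 302

for units_per_neuron in (10, 30, 100):  # assumed artificial units per biological neuron
    equivalent_units = biological_neurons * units_per_neuron
    print(f"{units_per_neuron:>3} units per neuron -> ~{equivalent_units:,} artificial units")
```

Even the low-end assumption lands in the thousands of artificial units, which is the rough scale the comparison above has in mind.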
Even if we had an artificial neural network that could mimic all the cognitive abilities of C. elegans, I think the biological organism would still seem more sentient because it would have a body and would interact with a real, complex environment, which would make the abstract symbol manipulations of its brain feel more grounded and meaningful. Hooking up the artificial brain to a small robot body would feel closer to matching C. elegans in terms of sentience, but by that point, it’s plausible to me that the robot itself would warrant nontrivial moral concern.
What I have in mind is specifically that these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second. I discussed this more here, and in the comments.
My impression across various animal species (mostly mammals, birds, and a few insect species) is that 10-30% of neurons are in the sensory-associative structures (based on data here), and even fewer could be used to generate conscious valence (on the right inputs, say), maybe only a fraction even of the neurons that ever generate conscious valence. So it seems that around 50 out of the 302 neurons would be enough to simulate, and maybe even a few times fewer. Maybe this would be overgeneralizing to nematodes, though.
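As a sanity check on that estimate, here is a minimal sketch of the arithmetic, assuming the 10-30% range carries over to C. elegans’ 302 neurons (which, as noted, may be overgeneralizing to nematodes):

```python
# Illustrative arithmetic for the estimate above. The 10-30% range for
# sensory-associative neurons is an assumption carried over from other taxa.

total_neurons = 302

for fraction in (0.10, 0.30):
    print(f"{fraction:.0%} of {total_neurons} neurons -> ~{round(total_neurons * fraction)}")

# Roughly 30-91 neurons, bracketing the ~50-neuron figure above;
# "a few times fewer" would put it somewhere around 10-25 neurons.
```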
If it’s true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousands (or more?) of artificial neurons.
I did have something like this in mind, but I was probably thinking biological neurons are something like 10x more expressive than artificial ones, based on the comments here. Even if that’s not more likely than not, a non-tiny chance that the factor is at most around 10x could be enough, and even a tiny chance could get us a wager for panpsychism.
I suppose an artificial neuron could also be much more complex than a few particles, but I can also imagine that it might not be. And invertebrate neuron potentials are often graded rather than spiking, which could make a difference in how many particles are needed.
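Putting the two rough numbers together (the ~50 valence-relevant neurons guessed at above and the assumed ~10x expressivity factor), a purely hypothetical sketch:

```python
# Purely hypothetical combination of the two rough estimates above; both
# inputs are assumptions from this discussion, not measured quantities.

valence_neurons = 50          # rough subset of the 302 neurons estimated earlier
expressivity_multiplier = 10  # assumed biological-vs-artificial expressivity factor

print(f"~{valence_neurons * expressivity_multiplier} artificial-neuron equivalents")  # prints ~500
```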
Even if we had an artificial neural network that could mimic all the cognitive abilities of C. elegans, I think the biological organism would still seem more sentient because it would have a body and would interact with a real, complex environment, which would make the abstract symbol manipulations of its brain feel more grounded and meaningful. Hooking up the artificial brain to a small robot body would feel closer to matching C. elegans in terms of sentience, but by that point, it’s plausible to me that the robot itself would warrant nontrivial moral concern.
I’d be willing to buy something like this. In my view, a real C. elegans brain separated from the body and receiving misleading inputs should, given the right kinds of inputs, have valence as intense as a C. elegans with a body. On views other than hedonism, maybe a body makes an important difference, and all else equal, I’d expect having a body and interacting with the real world to just mean greater (more positive and less negative) welfare overall, basically for experience-machine reasons.
these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second
I see. :) I think counterfactual robustness is important, so maybe I’m less worried about that than you? Apart from gerrymandered interpretations, I assume that chance configurations matching even 50 nematode neurons are vanishingly rare among random particle movements?
In your post on counterfactual robustness, you mention as an example that if we eliminated the neural pathways that go unused while you’re being tortured, you would still scream out in pain, so it seems like the unused pathways shouldn’t matter for valenced experience. But I would say that whether those unused pathways are present determines how much we should see a “you” as being there to begin with. There might still be sound waves coming from your mouth, but if they’re created just by some particles knocking into each other in random ways rather than as part of a robust, organized system, I don’t think there’s much of a “you” who is actually screaming.
For the same reason, I’m wary of trying to eliminate too much context as unimportant to valence and whittling the neurons down to just a small set. I think the larger context is what turns some seemingly meaningless signal transmission into something that we can see holistically as more than the sum of its parts.
As an analogy, suppose we’re trying to find the mountain in a drawing. I could draw just a triangle shape like ^ and say that’s the mountain, and everything else is non-mountain stuff. But just seeing a ^ shape in isolation doesn’t mean much. We have to add some foreground objects, the sky, etc., before it starts to actually look like a mountain. I think a similar thing applies to valence generation in brains. The surrounding neural machinery is what makes a series of neural firings meaningful rather than just being some seemingly arbitrary signals being passed along.
This point about context mattering is also why I have an intuition that a body and real environment contribute something to the total sentience of a brain, although I’m not sure how much they matter, especially if the brain is complex and already creates a lot of the important context within itself based on the relations between the different brain parts. One way to see why a body and environment could matter a little bit is if we think of them as the “extended mind” of the nervous system, doing extra computations that aren’t being done by the neurons themselves.