Interview with Jon Mallatt about invertebrate consciousness
Jon Mallatt is a biologist who, along with his colleague Todd Feinberg, has recently published several books and articles on the evolution of consciousness, on the question of which animals are conscious, and on their general position on consciousness. These include The Ancient Origins of Consciousness and Consciousness Demystified.
Their books combine biology, neuroscience, and philosophy. I consider them to be among the leading experts on invertebrate consciousness and their publications to be among the best on the subject. I am conducting these interviews to try to advance knowledge on the question of which (if any) invertebrate animals are conscious, in order to help extend due moral consideration to those groups of invertebrates that are conscious. For more background on why I’m focusing on this question, see my post here.
Questions:
1. In The Ancient Origins of Consciousness you wrote that you think that having multiple orders of sensory processing indicates or is evidence of consciousness. Would you mind elaborating on why you think that having multiple orders of sensory processing indicates consciousness in some way?
I should start by saying that I and my colleague Todd Feinberg, like most other scientists, say consciousness is a strictly natural phenomenon produced in living organisms by neurons, and not by a fundamental or exotic mind force. We are not dualists.
It is also important to state that we study only the most basic type of consciousness, called phenomenal (primary) consciousness, defined as the ability to experience (feel) anything at all. Unlike higher types of consciousness, it need not involve any reflection, higher thought, self-consciousness, or the ability to report the feelings being felt. The difficult problem is finding how phenomenal consciousness appeared; it is less difficult to discover how the higher levels evolved from there. We see phenomenal consciousness as having two main aspects: 1) building and experiencing a mapped, mental simulation of the world (and of one’s own body) from the extensive sensory information one receives; and 2) feeling affects, which are emotions and moods, and which we boil down to either positive (good) or negative (bad) feelings.
Your question refers to aspect 1, building a mental image from sensory input. The answer is that if there were just one level of sensory processing, in which a sensory nerve cell (neuron) receives a stimulus and sends it directly to a motor neuron (which signals a behavioral response), then this would be just a reflex, and we know reflexes are not consciousness. The additional levels of neurons are needed to process the bits of sensory information, received from many different senses (sight, hearing, smell, touch), and to assemble all these into a sensory image of the world: a mapped, conscious image to guide one’s movements and behaviors in the environment.
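To make the architectural contrast concrete, here is a toy schematic in Python (my own illustration, not the authors’; all function names and values are invented for the example). A one-level reflex arc maps a stimulus directly to a response, whereas multi-level processing first integrates several senses into a single spatial map and only then selects a behavior:

```python
# Toy schematic: reflex arc vs. multi-level sensory processing.
# All names and values here are illustrative, not from Feinberg & Mallatt.

def reflex(stimulus: float) -> str:
    """One level: a sensory neuron drives a motor neuron directly.
    No map is built, so on this view there is nothing to experience."""
    return "withdraw" if stimulus > 0.5 else "stay"

def multilevel(sight: dict, smell: dict, touch: dict) -> str:
    """Several levels: many senses are integrated into one mapped
    representation of the world before a behavior is chosen."""
    # Level 1: each sense reports intensities keyed by location.
    # Level 2: merge all the senses into a single spatial map.
    world_map: dict = {}
    for sense in (sight, smell, touch):
        for location, intensity in sense.items():
            world_map[location] = world_map.get(location, 0.0) + intensity
    # Level 3: choose a behavior with respect to the whole mapped scene.
    target = max(world_map, key=world_map.get)
    return f"move toward {target}"

print(reflex(0.9))                                    # -> withdraw
print(multilevel({"left": 0.2}, {"ahead": 0.7}, {}))  # -> move toward ahead
```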
2. One of your main doubts about the possibility of insect consciousness is the relatively small number of neurons that insects have. Would you mind fleshing out why you find this objection to be compelling?
The point was that consciousness is an extremely complex neural process, so one wonders whether something as tiny as an insect brain has enough neurons to bring it about. The brains of the only other animals that met our criteria for consciousness—all the vertebrates and the cephalopod molluscs like octopuses and squids—have millions of neurons. However, since our Ancient Origins book came out in 2016, we have put aside our doubts and accepted that insects and other arthropods are conscious. The evidence for this is that their compact brains do build mapped images of the world from many different senses and that arthropods can learn new, complex behaviors from rewards and punishments. This learning implies that they feel affects and also remember the affects. More on our change of opinion can be found in our newer book, Consciousness Demystified (MIT Press, 2018).
3. The neuron count that you cite for crabs and lobsters in your book, The Ancient Origins of Consciousness, is surprisingly low: only around 100,000. In contrast, you cite ants as having 250,000 neurons. Lobsters are vastly larger animals. Do you have any ideas as to why lobsters would have so few neurons for their size?
Actually, the answer comes from restating the question the opposite way: why are the neuron numbers in ant brains so high? Ants have relatively complex brains among the insects, with relatively many neurons, perhaps reflecting their many advanced social behaviors in ant colonies. Lobsters, crabs, and their shrimp relatives may have the ancestral number of brain neurons because they more closely resemble the first, ocean-dwelling, fossil arthropods from over half a billion years ago, many of which were roughly shrimp-like in appearance and size. With this body type, it is clear that the ancestral arthropods moved deftly through complex environmental spaces near the sea floor, as they found, tracked, captured, and handled fast prey. All this implies the first arthropods had phenomenal consciousness based on a world map. Thus, it seems the 100,000 neurons in modern lobster brains match the number in the first arthropods, and that this number is sufficient for consciousness. Fossil brains of the first arthropods are known, and they have the same structural plan as the brains of lobsters (and of all the other living arthropods). This is documented in Chapter 9 of the Ancient Origins book.
I should mention that crabs, lobsters, and crayfish pass all our tests for consciousness, especially the behavioral tests in which they reveal a good sense of space; and crabs tend their wounds as if they feel pain. This was shown especially in the studies of Robert W. Elwood and his colleagues at Queen’s University Belfast. See https://animalstudiesrepository.org/cgi/viewcontent.cgi?referer=https://scholar.google.com/&httpsredir=1&article=1156&context=animsent
All the above assumes that the number of brain neurons reflects whether or not a group of animals has consciousness, which may be only roughly true. Still, it is fun to see if we can identify a lower limit: the fewest neurons that can support consciousness. Gastropod molluscs (the snails, slugs, etc.) have about a tenth as many brain neurons as lobsters, and garden snails show behaviors that suggest to us they are at the cusp of consciousness. Mostly, these snails only show simple learning of the nonconscious type, but they also seem to know how to operate in space, mapping out a home range, for example. This is described in a recent paper by Eric Schwitzgebel at http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Snails-181025.pdf
Once more, we think that the only conscious animals are the vertebrates, cephalopods, and arthropods, to which we might add some gastropods. These are the groups that fit our criteria of having complex sensory organs, building mapped images, and learning new behaviors from experience. The ocean-dwelling polychaete worms are quite diverse, and some of them are actively swimming predators with big eyes, such as the alciopids. They might be worth investigating as possible members of the consciousness club. Interestingly, of the five candidate animal groups (vertebrates, arthropods, cephalopods, and some gastropods and polychaetes), none is closely related to another, and all evolved from less-active, nonconscious animals with much simpler senses many millions of years ago. By this interpretation, consciousness arose independently multiple times, by parallel evolution. This implies that the best way to compete in an open habitat containing conscious competitors that are alert, sensitive, spatially oriented, and highly mobile is to evolve consciousness oneself. This is emphasized in our Ancient Origins and Consciousness Demystified books.
4. Do you think that nematode worms such as C. elegans are phenomenally conscious?
I agree with the deduction of Colin Klein and Andrew B. Barron that nematodes lack consciousness: https://animalstudiesrepository.org/cgi/viewcontent.cgi?referer=https://scholar.google.com/&httpsredir=1&article=1113&context=animsent. Klein and Barron point out that although nematodes have a centralized nervous system that can integrate sensory information (such as touch) to influence body movements, they lack all the distance senses that could map the surrounding environment: no image-forming eyes, no sound-detecting ears, and a sense of smell that does not detect odors from a distance. Thus, they lack the sensory map of the world that we say is essential to consciousness and to navigating through complex environments. Instead, nematodes can only follow sensory trails, such as a trail of food chemicals or smells, and if they lose the trail, they can only search randomly, hoping to find it again. The same is true of other types of worms and of most other invertebrates with simple or no brains. By contrast, the vertebrates, arthropods, and cephalopods can move right to the place where they found food before, by following a mental map to that place.
Though we deduced that nematodes lack image-based consciousness, it is harder to decide if they lack affective consciousness (meaning their behaviors would be automatic rather than driven by felt, emotional attractions and avoidances). Nematodes’ learning ability is simple, and seems not to include learning new, complex behaviors from experience. That is, they don’t show our behavioral marker for affective consciousness. However, some of their brain neurons contain chemicals that contribute to affects in the conscious animals, such as dopamine. In these ways, nematodes match the other worms and invertebrates we have identified as nonconscious.
5. Do you think that consciousness is something that can come in degrees? If invertebrate species such as insects are conscious, do you think it makes sense to talk about them being in some sense less conscious than other animals? What I have in mind here is something like the claim that if invertebrates do feel pain, they may feel only 1/100 as much as a human in similar circumstances.
I think consciousness can come in degrees, but not in a way related to the invertebrate-versus-vertebrate status. Rather, it relates to the number of sensory receptors possessed. For example, some people have few pain-sensing neurons and are insensitive to pain, whereas other people with more pain neurons are more sensitive to pain. Less conscious versus more conscious: these are your degrees. Likewise, moles and hagfishes, both of which live burrowed in sediment, have small eyes that cannot see edges or details very well, so these animals are less conscious of vision than are other vertebrates.
But those invertebrates that have elaborate sensory organs (bee eyes, the “smell” antennae of carrion flies) are exquisitely conscious of what they see, smell, etc. Whatever they can sense, their consciousness uses to influence behavior; otherwise the receptors would not be there in the first place. Again, receptor sophistication reflects the degree of consciousness of what is received.
The above is about the strictly sensory part of primary consciousness. What about the affective part that feels the good and the bad? There, I would say that all conscious animals experience very strong affects and use these to choose the correct responses. This is a matter of self-preservation that determines survival. Animals that do not “like” food or mating enough, or that don’t “dislike” danger enough, don’t survive or propagate as much as those that do, and so are eliminated from the evolutionary lineage. The particular affect of pain is more complicated, however (see the next question).
Here is another perspective on degrees of consciousness. Very young members of conscious species must have some form of partial and progressively increasing consciousness, such as a human fetus developing in the womb. This must be the case because nothing appears in its full-blown state. Similarly, partial consciousness must have characterized the very first conscious animals in the evolutionary lineages, those that had not yet evolved the most complex sensory equipment (e.g., eyes that did not yet see the sharpest images, early noses that smelled only a few kinds of odors). I am not sure how this partial consciousness was experienced by these founder animals. Maybe it involved a lot of dormant or sleeping periods of unconsciousness, with consciousness flashing on only when needed to help find food and mates or to avoid danger.
6. Do you have any thoughts on whether, if invertebrates do feel pain and pleasure, they might feel these in response to different stimuli than do other animals? For example, some have argued (such as Wigglesworth 1980) that insects do not feel pain from bodily injury, but they likely feel pain in response to other stimuli.
Concerning pleasure, I would think that almost everything that benefits the conscious arthropod or cephalopod induces the pleasure affect (good foods, a desirable mate, etc.).
Pain is more difficult to judge. Pain is a complex problem because it is not just one thing, and it is also contingent (some types exist only under certain circumstances). I think insects can feel pain because they have the sensory receptors that respond to tissue damage (nociceptors) and because insects can learn from experience to avoid all kinds of noxious stimuli. However, there are stories of bugs trying to eat their own guts while being dissected, still trying to use a leg after most of its segments have been torn off, and continuing to eat prey even while being eaten themselves from the back! This apparent insensitivity to pain is a puzzle. (See Adamo, 2016, Do insects feel pain? Animal Behaviour, 118, 75-79.)
I approach the puzzle by first recognizing there are two types of pain: 1) fast and sharp, and 2) slow and burning. The latter is the basis of suffering. Animal suffering is the source of animal welfare concerns, but I do not think all animals have it. That is, suffering only benefits animals that can hide and then tend their wound for a long time as they heal. Animals that can do these things—such as most mammals, octopuses, and crabs—have the suffering type of pain, as that pain nicely tells them the progress of their healing. By contrast, animals that live in the open in view of predators cannot show any signs of pain or weakness, because such signs are a red flag that invites predators to attack. In theory, such vulnerable animals could suffer pain and just ignore it, but the most successful members are those that don’t suffer the pain at all: then they don’t have to fake a lack of pain. Animals in this group include most of the fishes and insects. Whenever the bad aspect of suffering (vulnerability to predators) outweighs its benefits (time for better healing), there will be natural selection for less ability to feel suffering. This is discussed in The Ancient Origins of Consciousness.
But I should make one thing clear. I affirm that all conscious animals can feel the sharp type of pain. Exposing animals to repeated or continuous sharp pain will cause a lot of misery even if these animals cannot “suffer” long from single wounds. It is therefore still unethical to harm conscious animals in this way.
I am not sure what to make of the insects that eat their guts or keep eating prey while they themselves are being consumed. Again, it seems to indicate they feel no pain at all, contrary to my interpretations. Maybe insects are adapted to live fast and eat continually in order to produce as many offspring as possible, so they let nothing interfere with a meal. It’s strange, though. Recall that not all arthropods are like this: crabs tend their wounds and show the signs of suffering pain. Some of the above ideas on insect pain come from Edgar T. Walters (see Walters, Nociceptive biology of molluscs and arthropods. Frontiers in Physiology, August 2018, Volume 9, Article 1049).
8. Some people think that even if invertebrates are phenomenally conscious, they still may be unable to feel pain (they may still lack affective consciousness). Do you view this as a likely scenario?
Please see my answer to the previous question: I view all conscious invertebrates as experiencing the sharp type of pain.
9. Are there any experiments relating to invertebrate consciousness that you would particularly like to see done?
Certain kinds of electromagnetic waves run back and forth between the participating brain regions of conscious humans, and also of other mammals and birds, who are thought likely to be conscious. These waves include the familiar “brain waves” and some of them are said to be real markers of consciousness. I would like to see more studies that use electrodes to record in the brains of reptiles, amphibians, fish, arthropods and cephalopods, seeking the distinctive waves.
10. In your books you make some comments that suggest that you might not think that digital minds would be conscious. Is that the case? Do you think that future AI would likely become conscious if it became functionally similar to the minds of conscious animals?
Todd Feinberg and I are very skeptical about human-designed computers and AI gaining consciousness. Todd says it never can and I say that it can “if it became functionally similar to the minds of conscious animals” but that all today’s AI systems are so far away from this that it can’t happen in any foreseeable future.
The problem is that consciousness is enormously complex, far beyond what known or conceivable computers can mimic. Paul L. Nunez makes this point in his 2018 book, The New Science of Consciousness, Prometheus Books. He illustrates it by an analogy. He points out that no current or foreseeable computer can master the physical phenomenon of fluid turbulence (that is, describing how water moves when it boils in a pot, or predicting winds and weather into the future in weather forecasting). He continues: “Life and especially consciousness appear to depend on systems that are far more complex than fluid turbulence, yet claims that true artificial consciousness is just around the corner are commonplace. Does this really make sense?”
Todd Feinberg believes consciousness absolutely needs real life (and all its complexities), whereas I allow it might come from artificial life, but only if that artificial life is as complex as real life is. Either way, artificial consciousness is a long way off.
Here is another reason why I feel predictions of artificial consciousness in our lifetimes are way off target. According to our theory, consciousness evolved to take in lots of sensory information about the world, to rank all these incoming stimuli by their importance through assigning each stimulus an affect (good, bad, and everything in between), and then to use the ranking to signal the best behaviors for survival and reproduction in a dangerous and complex world. An argument can be made that an AI system must be able to do all these things to be conscious (after all, that path is the only way we know consciousness was ever achieved). Free-living, self-defending, self-repairing, and reproducing computers that find their own power source and adapt to their environment? Not on the horizon. Today’s computer systems that are claimed to be approaching human intelligence are so fragile that they can’t even survive pulling the plug or letting their battery run down.
Acknowledgements:
Many thanks to Caleb Ontiveros for funding me to perform this interview, and to the EA Hotel (where I live) for supporting all of my work.
This is super interesting...thanks Max!
I haven’t read the books so I assume they deal with this there, but what about cases of blindsight, where people self-report that they don’t see objects in certain parts of their visual field but nevertheless are able to respond above chance on forced choice tasks and even make appropriate grasping motions for objects in that area of the visual field? Wouldn’t those, if true, be cases where we have maps of our surrounding environment that guide behaviour but nevertheless are not phenomenally conscious?
Also, a couple of the things he said about pain seem incorrect to me. He says “I think consciousness can come in degrees, but not in a way related to the invertebrate-versus-vertebrate status. Rather, it relates to the number of sensory receptors possessed. For example, some people have few pain-sensing neurons and are insensitive to pain, whereas other people with more pain neurons are more sensitive to pain.” But in all the studies I’ve seen, individual differences in how sensitive people are to pain have to do with neurons *in the brain*, not with how many nociceptors someone has in the peripheral nervous system. After all, if all of the information from the peripheral nervous system is funnelled through the spinal cord to specific tracts in the brain, what ultimately matters is the coded information coming from that tract. In other words, if a given spinal neuron passes on the information [50 action potentials in time span X] to a particular brain region, it doesn’t really matter whether that signal was caused by 1 nociceptor or 100 nociceptors.
Finally, he seems to fall into an inexplicably common mistake of assuming that the difference between “fast pain” (transmitted through A-delta fibres) and “slow pain” (via C fibres) maps onto the difference between sensory and affective pain. I’m not sure why that is such a common move, but it’s not really true...fast pain can lead to pain affect and suffering. I guess since he says that fast pain can lead to “misery,” this suggests that he’s not ruling out all affect, but it seems like an odd definition of suffering to me to restrict it to slow pain.
I think he may be answering the question in terms of sensory pain rather than affective pain. I was mainly interested in affective pain; I probably should have specified that in the question. In terms of sensory pain, his answer seems right to me, because it makes sense that more nociceptors would give you a richer and more complex sensory pain. But it doesn’t make sense in terms of affective pain.
I agree with Siebe that he is using ‘suffering’ in a nonstandard way. He seems to be using ‘pain’ to refer to ‘acute pain’ and ‘suffering’ to refer to ‘long-lasting, non-acute pain.’
He seems to use the term suffering differently as well: the standard way is to define suffering as negative experience (perhaps above a certain threshold, so as not to include dust specks in eyes). Pain that is experienced as bad, and thus morally wrong to create, is suffering.
Thanks for posting this fascinating interview! I was impressed by how clearly the author explained how he’d changed his mind on insect consciousness.
If anyone enjoyed this article and wants to hear more about the neuron counts of small animals, they might also like this post on that topic, which won a monthly EA Forum Prize.
Thank you!
Great, I hadn’t noticed that article. Reading it now
Wow, this is incredible. Such a great write-up and so much here. Two questions:
1) Do you think there is a consensus that Jon Mallatt and Todd Feinberg are among the leading experts and their books among the best on the subject? Just trying to figure out how much to update based on this.
2) Jon Mallatt seems, from the way he talks, like the type of biologist who could be amenable to wild-animal welfare research. Has anyone reached out to him?
Re: 1) I’m not sure. I would say that the number of people who might be considered experts on the subject of invertebrate consciousness is very low.
I can’t remember reading anything by these experts about who they consider to be the leading experts on the subject.
Re: 2) I haven’t talked to him about it yet, but may do so at some point in the future. I doubt that anyone else has.
I’m puzzled by Mallatt’s response to the last question about consciousness in computer systems. It appears to me that he and Feinberg are applying a double standard when judging the consciousness of computer programs. I don’t know what he has in mind when he talks about the enormous complexity of consciousness, but based on other parts of the interview we can see some of the diagnostic criteria Mallatt uses to judge consciousness in practice. These include behavioral tests, such as going back to places an animal saw food before, tending wounds, and hiding when injured, as well as structural tests, such as multiple levels of intermediate processing from sensory input to motor output. Existing AIs already pass the structural test I listed, and I believe they could pass the behavioral tests with a simple virtual environment and reward function (see the sketch below). I don’t see a principled way of including the simplest types of animal consciousness while excluding any form of computer consciousness.
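As a concrete (and purely illustrative) example of what I mean, here is a minimal sketch of a tabular Q-learning agent in a toy gridworld; every name and number in it is my own invention, not anything Mallatt or Feinberg propose. After training, the agent reliably navigates back to the cell where it previously found “food”, which is the kind of behavioral test described above:

```python
# A minimal, purely illustrative sketch: a tabular Q-learning agent in a
# 5x5 gridworld learns to return to the cell where it found "food".
# All names and values here are invented for the example.
import random

SIZE = 5
FOOD = (4, 4)                                   # fixed "food" location
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

q = {}  # Q-table: (state, action_index) -> estimated value

def step(state, action):
    """Move within the grid; reward 1.0 for reaching the food, else 0."""
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    new_state = (x, y)
    return new_state, (1.0 if new_state == FOOD else 0.0), new_state == FOOD

def choose(state, eps=0.2):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    values = [q.get((state, i), 0.0) for i in range(len(ACTIONS))]
    return values.index(max(values))

for episode in range(300):
    state, done = (0, 0), False
    while not done:
        a = choose(state)
        new_state, reward, done = step(state, ACTIONS[a])
        best_next = max(q.get((new_state, i), 0.0) for i in range(len(ACTIONS)))
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + 0.5 * (reward + 0.9 * best_next - old)
        state = new_state

# The greedy policy now heads straight back to FOOD from anywhere,
# i.e. the agent "returns to where it found food before".
```

If this kind of behavior counts as evidence of a mental map in a crab, it is unclear to me why it wouldn’t in a program.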
Yeah, I think this is a worry for his view. I do also personally assign a somewhat higher likelihood to invertebrate consciousness than to modern AI consciousness, because of evolutionary relatedness, greater structural homology, and because invertebrates probably satisfy more of the criteria for consciousness that I would use.
You might be interested in my next interview on this subject which will be with someone who discusses modern AI and robotics findings in the context of invertebrate consciousness, and comes to a more sceptical conclusion based on that.
This post was awarded an EA Forum Prize; see the prize announcement for more details.
My notes on what I liked about the post, from the announcement: