Interview with Jon Mallatt about invertebrate consciousness

Jon Mallatt is a biologist who, along with his colleague Todd Feinberg, has recently published several books and articles on the evolution of consciousness, on the question of which animals are conscious, and on their general position on consciousness. These include The Ancient Origins of Consciousness and Consciousness Demystified.

Their books combine biology, neuroscience, and philosophy, and I would consider them to be among the leading experts on invertebrate consciousness and their publications to be among the best on the subject. I am conducting these interviews to try to advance knowledge on the question of which (if any) invertebrate animals are conscious, in order to help with efforts to extend due moral consideration to groups of invertebrates that are conscious. For more background on why I’m focusing on this question, see my post here.


1. In The Ancient Origins of Consciousness you wrote that you think that having multiple orders of sensory processing indicates or is evidence of consciousness. Would you mind elaborating on why you think that having multiple orders of sensory processing indicates consciousness in some way?

I should start by saying that I and my colleague Todd Feinberg, like most other scientists, say consciousness is a strictly natural phenomenon produced in living organisms by neurons, and not by a fundamental or exotic mind force. We are not dualists.

It is also important to state that we study only the most basic type of consciousness, called phenomenal (primary) consciousness, defined as the ability to experience (feel) anything at all. Unlike higher types of consciousness, it need not involve any reflection, higher thought, self-consciousness, or the ability to report the feelings being felt. The difficult problem is finding how phenomenal consciousness appeared; it is less difficult to discover how the higher levels evolved from there. We see phenomenal consciousness as having two main aspects: 1) building and experiencing a mapped, mental simulation of the world (and of one’s own body) from the extensive sensory information one receives; and 2) feeling affects, which are emotions and moods, and which we boil down to either positive (good) or negative (bad) feelings.

Your question refers to aspect 1, building a mental image from sensory input. The answer is that if there were just one level of sensory processing—where a sensory nerve cell (neuron) receives a stimulus and sends it directly to a motor neuron (which signals a behavioral response)—then this would be just a reflex, and we know reflexes are not conscious. The additional levels of neurons are needed to process the bits of sensory information, received from many different senses (seen, heard, smelled, touched), and to assemble all these into a sensory image of the world, a mapped, conscious image to guide one’s movements and behaviors in the environment.

2. One of your main doubts about the possibility of insect consciousness is the relatively small number of neurons that insects have. Would you mind fleshing out why you find this objection compelling?

The point was that consciousness is an extremely complex neural process, so one wonders whether something as tiny as an insect brain has enough neurons to bring it about. The brains of the only other animals that met our criteria for consciousness—all the vertebrates and the cephalopod molluscs like octopuses and squids—have millions of neurons. However, since our Ancient Origins book came out in 2016, we have put aside our doubts and accepted that insects and other arthropods are conscious. The evidence for this is that their compact brains do build mapped images of the world from many different senses and that arthropods can learn new, complex behaviors from rewards and punishments. This learning implies that they feel affects and also remember the affects. More on our change of opinion can be found in our newer book, Consciousness Demystified (MIT Press, 2018).

3. The neuron count that you cite for crabs and lobsters in your book, The Ancient Origins of Consciousness, is surprisingly low, only around 100,000. In contrast, you cite ants as having 250,000 neurons. Lobsters are vastly larger animals. Do you have any ideas as to why lobsters would have so few neurons for their size?

Actually, the answer comes from restating the question in the opposite way: why are the neuron numbers in ant brains so high? Ants have relatively complex brains among the insects, with relatively many neurons, perhaps reflecting their many advanced social behaviors in ant colonies. Lobsters, crabs, and their shrimp relatives may have the ancestral number of brain neurons because they more closely resemble the first, ocean-dwelling, fossil arthropods from over half a billion years ago, many of which were roughly shrimp-like in appearance and size. With this body type, it is clear that the ancestral arthropods moved deftly through complex environmental spaces near the sea floor, as they found, tracked, captured, and handled fast prey. All this implies the first arthropods had phenomenal consciousness based on a world map. Thus, it seems the 100,000 neurons in modern lobster brains match the number in the first arthropods, and that this number is sufficient for consciousness. Fossil brains of the first arthropods are known, and they have the same structural plan as the brains of lobsters (and of all the other living arthropods). This is documented in Chapter 9 of the Ancient Origins book.

I should mention that crabs, lobsters, and crayfish pass all our tests for consciousness, especially the behavioral tests in which they reveal a good sense of space; and crabs tend their wounds as if they feel pain. This was shown especially in the studies of Robert W. Elwood and his colleagues at Queen’s University Belfast. See https://animalstudiesrepository.org/cgi/viewcontent.cgi?article=1156&context=animsent

All the above assumes that the number of brain neurons reflects whether or not a group of animals has consciousness, which may be only roughly true. Still, it is fun to see if we can identify a lower limit, the fewest neurons that can support consciousness. Gastropod molluscs—the snails, slugs, etc.—have about a tenth as many brain neurons as lobsters, and garden snails show behaviors that suggest to us they are at the cusp of consciousness. Mostly, these snails show only simple learning of the nonconscious type, but they also seem to know how to operate in space, mapping out a home range, for example. This is described in a recent paper by Eric Schwitzgebel at http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Snails-181025.pdf

Once more, we think that the only conscious animals are the vertebrates, cephalopods, and arthropods, to which we might add some gastropods. These are the groups that fit our criteria of having complex sensory organs, building mapped images, and learning new behaviors from experience. The ocean-dwelling polychaete worms are quite diverse, and some of them are actively swimming predators with big eyes, such as the alciopids. They might be worth investigating as possible members of the consciousness club. Interestingly, of the five candidate animal groups (vertebrates, arthropods, cephalopods, and some gastropods and polychaetes), none is closely related to another, and all evolved from less-active, nonconscious animals with much simpler senses many millions of years ago. By this interpretation, consciousness arose independently multiple times, by parallel evolution. This implies that the best way to compete in an open habitat containing conscious competitors that are alert, sensitive, spatially oriented, and highly mobile is to evolve consciousness oneself. This is emphasized in our Ancient Origins and Consciousness Demystified books.

4. Do you think that nematode worms such as C. elegans are phenomenally conscious?

I agree with the deduction of Colin Klein and Andrew B. Barron that nematodes lack consciousness: https://animalstudiesrepository.org/cgi/viewcontent.cgi?article=1113&context=animsent. Klein and Barron point out that although nematodes have a centralized nervous system that can integrate sensory information (such as touch) to influence body movements, they lack all the distance senses that could map the surrounding environment: no image-forming eyes, no sound-detecting ears, and a sense of smell that does not detect odors from a distance. Thus, they lack the sensory map of the world that we say is essential to consciousness, for navigating through complex environments. Instead, nematodes can only follow sensory trails, such as a trail of food chemicals or smells, and if they lose the trail, they can only search randomly, hoping to find it again. The same is true for other types of worms, and for most other invertebrates with simple or no brains. By contrast, the vertebrates, arthropods, and cephalopods can move right to the place where they found food before, by following a mental map to that place.

Though we deduced that nematodes lack image-based consciousness, it is harder to decide if they lack affective consciousness (meaning their behaviors would be automatic rather than driven by felt emotional attractions and avoidances). Nematodes’ learning ability is simple, and seems not to include learning new, complex behaviors from experience. That is, they don’t show our behavioral marker for affective consciousness. However, some of their brain neurons contain chemicals that contribute to affects in the conscious animals, such as dopamine. In these ways, nematodes match the other worms and invertebrates we have identified as nonconscious.

5. Do you think that consciousness is something that can come in degrees? If invertebrate species such as insects are conscious, do you think it makes sense to talk about them being in some sense less conscious than other animals? What I have in mind here is something like the claim that if invertebrates do feel pain, they may feel only 1/100 as much pain as a human would in similar circumstances.

I think consciousness can come in degrees, but not in a way related to invertebrate-versus-vertebrate status. Rather, it relates to the number of sensory receptors possessed. For example, some people have few pain-sensing neurons and are insensitive to pain, whereas other people with more pain neurons are more sensitive to pain. Less conscious versus more conscious: these are your degrees. Likewise, moles and hagfishes, both of which live burrowed in sediment, have small eyes that cannot see edges or details very well, so these animals are less conscious of vision than are other vertebrates.

But those invertebrates that have elaborate sensory organs (bee eyes, the “smell” antennae of carrion flies) are exquisitely conscious of what they see, smell, etc. Everything they can sense, their consciousness uses to influence behavior; otherwise the receptors would not be there in the first place. Again, receptor sophistication reflects the degree of consciousness of what is received.

The above is about the strictly sensory part of primary consciousness. What about the affective part that feels the good and the bad? There, I would say that all conscious animals experience very strong affects and use these to choose the correct responses. This is a matter of self-preservation that determines survival. Animals that do not “like” food or mating enough, or that don’t “dislike” danger enough, don’t survive or propagate as much as those that do, and they are eliminated from the evolutionary lineage. The particular affect of pain is more complicated, however (next question).

Here is another perspective on degrees of consciousness. Very young members of conscious species must have some form of partial and progressively increasing consciousness, such as a human fetus developing in the womb. This must be the case because nothing appears in its full-blown state. Similarly, partial consciousness must have characterized the very first conscious animals in the evolutionary lineages, those that had not yet evolved the most complex sensory equipment (e.g., eyes that did not yet see the sharpest images, early noses that smelled only a few kinds of odors). I am not sure how this partial consciousness was experienced by these founder animals. Maybe it involved a lot of dormant or sleeping periods of unconsciousness, with consciousness flashing on only when needed to help find food and mates or to avoid danger.

6. Do you have any thoughts on whether, if invertebrates do feel pain and pleasure, they might feel these in response to different stimuli than do other animals? For example, some have argued (such as Wigglesworth 1980) that insects do not feel pain from bodily injury, but that they likely feel pain in response to other stimuli.

Concerning pleasure, I would think that almost everything that benefits the conscious arthropod or cephalopod induces the pleasure affect (good foods, a desirable mate, etc.).

Pain is more difficult to judge. Pain is a complex problem because it is not just one thing, and it is also contingent (some types exist only under certain circumstances). I think insects can feel pain because they have the sensory receptors that respond to tissue damage (nociceptors) and because insects can learn from experience to avoid all kinds of noxious stimuli. However, there are stories of bugs trying to eat their own guts while being dissected, still trying to use a leg after most of its segments have been torn off, and continuing to eat prey even while being eaten themselves from the back! This apparent insensitivity to pain is a puzzle. (See Adamo, 2016, Do insects feel pain? Animal Behaviour, 118, 75–79.)

I approach the puzzle by first recognizing there are two types of pain: 1) fast and sharp, and 2) slow and burning. The latter is the basis of suffering. Animal suffering is the source of animal welfare concerns, but I do not think all animals have it. That is, suffering only benefits animals that can hide and then tend their wound for a long time as they heal. Animals that can do these things—such as most mammals, octopuses, and crabs—have the suffering type of pain, as that pain nicely tells them the progress of their healing. By contrast, animals that live in the open in view of predators cannot show any signs of pain or weakness, because that is a red flag signaling predators to attack. In theory, such vulnerable animals could suffer pain and just ignore it, but the most successful members are those that don’t suffer the pain at all: then, they don’t have to fake a lack of pain. Animals in this group are most of the fishes and insects. Whenever the bad aspect of suffering (vulnerability to predators) outweighs its benefits (time for better healing), there will be natural selection for less ability to feel suffering. This is discussed in the Ancient Origins of Consciousness book.

But I should make one thing clear. I affirm that all conscious animals can feel the sharp type of pain. Exposing animals to repeated or continuous sharp pain will cause a lot of misery, even if these animals cannot “suffer” long from single wounds. This still makes it unethical to harm conscious animals in this way.

I am not sure what to make of the insects that eat their guts or keep eating prey while they themselves are being consumed. Again, it seems to indicate they feel no pain at all, contrary to my interpretations. Maybe insects are adapted to live fast and to eat continually, in order to produce so many offspring that they let nothing interfere with a meal. It’s strange, though. Recall that not all arthropods are like this, as crabs tend their wounds and show the signs of suffering pain. Some of the above ideas on insect pain come from Edgar T. Walters (see Walters, Nociceptive biology of molluscs and arthropods. Frontiers in Physiology, August 2018, Volume 9, Article 1049).

8. Some people think that even if invertebrates are phenomenally conscious, they still may be unable to feel pain (they may still lack affective consciousness). Do you view this as a likely scenario?

Please see my answer to the previous question: I view all conscious invertebrates as experiencing the sharp type of pain.

9. Are there any experiments relating to invertebrate consciousness that you would particularly like to see done?

Certain kinds of electromagnetic waves run back and forth between the participating brain regions of conscious humans, and also of other mammals and birds, who are thought likely to be conscious. These waves include the familiar “brain waves,” and some of them are said to be real markers of consciousness. I would like to see more studies that use electrodes to record in the brains of reptiles, amphibians, fish, arthropods, and cephalopods, seeking the distinctive waves.

10. In your books you make some comments that suggest that you might not think that digital minds would be conscious. Is that the case? Do you think that future AI would likely become conscious if it became functionally similar to the minds of conscious animals?

Todd Feinberg and I are very skeptical about human-designed computers and AI gaining consciousness. Todd says it never can, and I say that it can “if it became functionally similar to the minds of conscious animals,” but that all today’s AI systems are so far from this that it can’t happen in any foreseeable future.

The problem is that consciousness is enormously complex, far beyond what known or conceivable computers can mimic. Paul L. Nunez makes this point in his 2018 book, The New Science of Consciousness (Prometheus Books). He illustrates it with an analogy: no current or foreseeable computer can master the physical phenomenon of fluid turbulence (that is, describing how water moves when it boils in a pot, or predicting winds and weather into the future in weather forecasting). He continues: “Life and especially consciousness appear to depend on systems that are far more complex than fluid turbulence, yet claims that true artificial consciousness is just around the corner are commonplace. Does this really make sense?”

Todd Feinberg believes consciousness absolutely needs real life (and all its complexities), whereas I allow that it might come from artificial life, but only if that artificial life is as complex as real life. Either way, artificial consciousness is a long way off.

Here is another reason why I feel predictions of artificial consciousness in our lifetimes are way off target. According to our theory, consciousness evolved to take in lots of sensory information about the world, to rank all these incoming stimuli by their importance by assigning each stimulus an affect (good, bad, and everything in between), and then to use the ranking to signal the best behaviors for survival and reproduction in a dangerous and complex world. An argument can be made that an AI system must be able to do all these things to be conscious (after all, that path is the only way we know consciousness was ever achieved). Free-living, self-defending, self-repairing, and reproducing computers that find their own power source and adapt to their environment? Not on the horizon. Today’s computer systems that are claimed to be approaching human intelligence are so fragile that they can’t even survive pulling the plug or letting their battery run down.


Many thanks to Caleb Ontiveros for funding me to perform this interview, and to the EA Hotel (where I live) for their support of all of my work.