Should ChatGPT make us downweight our belief in the consciousness of non-human animals?

The remarkable capabilities of ChatGPT and other tools based on large language models (LLMs) have generated a fair amount of idle speculation over whether such programs might in some sense be considered sentient. The conventional wisdom could be summarized as: of course not, but they are way spookier than anticipated, in a way that is waking people up to just how weird it might be to interact with a truly intelligent machine.

(It is also worth noting that some very knowledgeable people are open to granting LLMs at least a smidgen of consciousness.)

Given that LLMs are not, in my view, in any way conscious, they raise another question: should the human-like behavior of non-sentient computer programs cause me to re-evaluate my opinions on the consciousness of other species?

My beliefs about the consciousness of other species are held lightly. Because there is no general scientific understanding of the material basis of consciousness, all I have to go on is intuition based on my sense of the complexity of other animals and their similarity to the only animals I know to be conscious (i.e., humans).

Over time, my opinions have shifted in the direction of allowing more animals into the “consciousness club.” At one point, my beliefs were roughly thus:

  • Humans: conscious

  • Non-human primates: almost certainly conscious

  • Dogs: overwhelmingly likely to be conscious (just look at that face)

  • Mice: probably conscious, but getting trickier to litigate

  • Fish and amphibians: maybe conscious in some limited way but probably not

  • Insects: almost certainly not conscious

  • Single-celled organisms: not conscious

I certainly may be guilty of chauvinism towards non-mammals, but, again, these opinions are lightly held.

These days, based on a greater awareness of the complexity of many animal behaviors, I’m more likely to let fish and amphibians into the club and admit to greater uncertainty regarding insects. (Sorry, protozoa.)

LLMs, however, raise a challenging counterexample to the idea that complexity of behavior serves as evidence of consciousness. The internet abounds with examples of these programs engaging in conversations that are not only shockingly sophisticated but also deeply unsettling in the way they seem to convey personality, desire, and intent.

I don’t think many people seriously entertain the notion that these programs are conscious. Consciousness aside, are LLMs, in some sense, smarter than a bee? A trout? A squirrel? They clearly have capabilities that these other animals don’t, and just as clearly have deficits these other animals don’t. If LLMs are an existence proof of extremely complex behavior in the absence of consciousness, should we revise our beliefs about the likelihood of consciousness in other animals?

One obvious objection is that complexity might be a correlate of consciousness in biological organisms but not in machines. For example, the U.S. electrical grid is extremely complex, but no one suspects it of being sentient, because it lacks the basic features and organization of known conscious systems.

We know fairly well how LLMs work. We know that they are organized in a way that is not biologically plausible. Brains have features such as memory, attention, sensory awareness, and self-reflexivity that may be necessary to support consciousness, and that LLMs largely lack.

In this view, LLMs can churn out human-like output via an extremely complex mathematical function while revealing little about animal minds. LLMs don’t have any interiority. They don’t even have a representation of the world. (This is evident in the way they are happy to spout reasonable-sounding untruths.) We might therefore conclude that LLMs teach us nothing about consciousness, even if they may hold lessons about certain functions of brains (such as language construction).

I think this is a powerful objection, but also maybe a little too quick. Insects such as the Sphex wasp are famous for displaying behavior that is both fairly complex and extremely stereotyped. (Move the wasp’s paralyzed prey a few inches while she inspects her burrow, and she will drag it back to the threshold and inspect the burrow all over again, apparently indefinitely.) And it’s worth underscoring just how deeply spooky conversing with LLMs can be. It’s easy enough for me to write off ChatGPT as a machine. It feels somewhat dissonant, however, to write ChatGPT off as a machine while also allowing that the architecture of a spider’s brain makes it conscious of the world. Both things can be true. But are they?

It strikes me as more plausible than it once did that simpler organisms—including, yes, fish and amphibians—might be “mere automatons” displaying behavior that, like ChatGPT’s, seems to carry intentionality but is really “just” a sophisticated algorithm.

As before, I hold this opinion lightly.