Only mammals and birds are sentient, according to neuroscientist Nick Humphrey’s theory of consciousness, recently explained in “Sentience: The invention of consciousness”
In 2023, Nick Humphrey published his book Sentience: The invention of consciousness (S:TIOC). In this book he proposed a theory of consciousness that implies, he says, that only mammals and birds have any kind of internal awareness.
[EDIT: This post aims to summarize that book. Nick Humphrey has written a précis (a short summary) of his own book on Aeon here, and has done a much better job summarizing it than I have. Consider reading that instead of this post, then coming back to comment here.]
His theory of consciousness has a lot in common with the picture of consciousness described in recent books by two other authors, neuroscientist Antonio Damasio and consciousness researcher Anil Seth. All three agree on the importance of feelings, or proprioception, as the evolutionary and experiential base of sentience. Damasio and Seth, if I recall correctly, each put a lot of emphasis on homeostasis as a driving evolutionary force. All three agree sentience evolved as an extension of our senses–touch, sight, hearing, and so on. But S:TIOC is a bolder book: it not only describes what we know about the evolutionary base of consciousness but proposes a plausible theory that comes as close as one can to describing what consciousness is without actually solving Chalmers’ Hard Problem.
The purpose of this post is to describe Humphrey’s theory of sentience, as laid out in S:TIOC, and to explain why Humphrey is strongly convinced that only mammals and birds–not octopuses, fish, or shrimp–have any kind of internal experience. Right up front I want to acknowledge that cause areas focused on animals like fish and shrimp seem impactful in expectation even if there’s only a fairly small chance those animals have the capacity for suffering or other internal experiences, simply because of the huge absolute numbers of fish and shrimp who are suffering if they have any internal experience at all. Nevertheless, a theory with reasonable odds of being true that can identify which animals have conscious experience should update our relative priorities. Furthermore, if there is substantial uncertainty, which I think there is, such a theory should motivate hypothesis testing to help us reduce that uncertainty.
Blindsight
To understand this story, you should hear about three fascinating personal encounters which led Humphrey to some intuitions about consciousness. Humphrey describes blindsight in a monkey and a couple of people. Blindsight is the ability of an organism to see without any conscious awareness of seeing. Humphrey tells the story of a monkey named Helen whose visual cortex had been removed. After the removal of her visual cortex, Helen was miserable and unmotivated to move about in the indoor world she lived in. After a year of this misery, her handlers allowed her to get out into the outside world and explore it. Over time she learned to navigate the world with an unmistakable ability to see, avoid obstacles, and quickly locate food. But Humphrey, who knew Helen quite well, thought she seemed to lack confidence in her own sight, as though she had no awareness of the visual ability she clearly possessed. This was a clue that perhaps Helen was using her midbrain system, the superior colliculus, which processes visual information in parallel with the visual cortex, and that she was unaware of the visual information her brain could nevertheless use to steer her body around obstacles and locate food. Of course this is somewhat wild speculation, given that Helen couldn’t report her own experience back to Humphrey.
The second observation was of a man known to the scientific community as D.B. In an attempt to relieve D.B. of terribly painful headaches, doctors had removed his right visual cortex. D.B. reported not being able to see anything presented in his left visual field (each visual field projects to the visual cortex on the opposite side of the brain). But strangely, when doctors encouraged him to guess what was in his left visual field, he could correctly identify the shape, color, and position of objects presented to him, even though he had no conscious awareness of seeing them.
I’d like to add two caveats to this story about D.B. First, I am a little sceptical that this story is really evidence for unconscious sight. Split-brain patients–patients whose corpus callosum has been severed to reduce seizures, disconnecting the two hemispheres–can only describe objects presented in the visual field that projects to the hemisphere that produces speech. Present the object in the other visual field, and the information will go to the other hemisphere, and these patients will verbally report not being able to see the object. Nevertheless, they will be able to correctly write the name of the object down on a piece of paper. The fascinating possibility here is that split-brain patients might have split consciousness: potentially parallel tracks of conscious experience that are to some degree uncoordinated and independent. In the context of D.B., the patient whose visual cortex was removed, we might likewise infer that perhaps D.B. was still conscious of the objects he was seeing with his superior colliculus, but was merely unable to describe them because his remaining visual awareness was disconnected from his phonological systems.
Second, @mako yass suggested an interesting empirical test of Humphrey’s observations about D.B. If we were to help D.B. train to use his unconscious sight by giving continuous feedback on his guesses as to what he was seeing, would he learn to recognize–and become conscious of–whatever intuitions he is drawing on to make those guesses? If he did, then perhaps he had some kind of conscious experience of the visual stimuli after all–just not the sort of qualia you get with a visual cortex–and in that case, perhaps a theory of conscious vision that places conscious visual sensation entirely in the visual cortex is misplaced.
Having added those caveats, I’ll move on to discussing the final compelling case study that Humphrey uses to set up his theory of sentience in S:TIOC.
H.D. is a woman who tragically lost her eyesight at the age of 3 due to scarring of her corneas. Her corneas weren’t repaired until she was 27, and following the operation she was convinced her vision hadn’t improved. Without any training from visual stimuli between the ages of 3 and 27, her visual cortex had perhaps atrophied and was unable to make sense of the signals coming in from her eyes. Yet like Helen the monkey, H.D. was able to identify obstacles and point to objects in the world. But she reported a lack of any subjective sensory quality of visual experience.
The common thread running through the experiences of Helen, D.B., and H.D. is that, although in each case the evidence was not entirely complete, it seems fairly likely each was able to see but unable to experience the qualia of seeing. Perhaps their sight existed as a sort of sixth sense, imperceptible except as a kind of intuition. Somewhat like a Jedi learning to swing a light sabre by feel, without conscious awareness, these three seemed to be able to sense visual stimuli without consciously experiencing them.
The implication is that visual sensation and perception are separable in important ways. I’m sure I’m oversimplifying the story somewhat, but Humphrey’s rough sketch is that visual sensations are conscious experiences generated in the visual cortex, while perceptions are unconscious signals existing in the midbrain’s superior colliculus. In the normal operation of a human brain, sensation and perception might become intermingled, but take away one and something of the other will remain; an animal with only a midbrain might have the perception without the sensation.
Sensation, sentition, and the ipsundrum
In early animals, reflexive neural circuitry generates direct responses to perceived stimuli. If an aversive stimulus is perceived on the left, the organism reflexively moves right. In S:TIOC, Humphrey calls that sort of reflex response “sentition”: a meaningful but automatic response to stimuli. Humphrey proposes a four-step evolutionary development from those automatic sensory responses to conscious sensation.
In the first step, an additional copy of that motor command–an efference copy–is generated and sent to additional neurons internal to the brain, which simply represent and store information about the motor response itself. The animal monitors its own response (a story remarkably similar to Damasio’s “somatic marker hypothesis”) so that it can do things with that response, such as learn new associations with it.
In a second step, the animal reaches a level of evolutionary sophistication where some reflexive responses are no longer appropriate. At that point, the reflexive responses are privatized, so that only the internal model of the motor response remains–there’s no longer an automatic command going back out to the body. In this sense, the brain now for the first time has a privatized record of the response. This forms a proprioceptive map of the body internal to the brain.
In Step 3, because motor signals formerly sent to the body now travel from one place in the brain to another, a feedback loop can form. A sensory feedback loop can be initially triggered by an incoming sensory signal, but that signal can now reverberate in the brain as a continuous, lasting neural response. This ‘thickens up’ (Humphrey’s term) the response, giving the signal some persistence over time.
In Step 4 of our development towards consciousness, evolution shapes the brain to push that recursive activity into a stable attractor state which can repeat the same pattern at different times. Humphrey calls that complex system the “ipsundrum”, and he says it is those stable, recursive patterns that are phenomenal sensations. I’m still not entirely sure why he thinks these patterns in particular are phenomenal, but let’s say that only they have the persistence and complexity to reach the threshold of conscious feeling. Because it’s a recursive feedback state shaped by evolution into a stable attractor, the ipsundrum is “all-or-nothing”—you have a particular phenomenal consciousness, or you don’t. Animals without this complex recursive attractor system do not have conscious sensations at all.
I apologize if you are feeling a little lost at this point. There are gaps for me here too, and I’m not sure I’ve entirely faithfully reproduced the argument. In particular, the distinction between Step 3 (thickened-up recursive sensory feedback loops) and Step 4 (attractor states for those loops) seems not clearly defined. But I hope I have communicated the gist!
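To make that gist a bit more concrete, here is a toy sketch in Python of how I picture the four steps. This is entirely my own illustration, not anything from S:TIOC: the function names, the decay constant, and the tanh self-excitation are invented stand-ins for “reverberation” and “attractor”.

```python
import math

# Toy illustration (mine, not Humphrey's) of the four-step story.
# All names and constants are invented for illustration only.

def reflex(stimulus):
    """Step 0 ('sentition'): an automatic motor response to a stimulus."""
    return -stimulus  # e.g. move away from an aversive stimulus on one side

def step1_efference_copy(stimulus):
    """Step 1: the motor command is also copied to internal monitoring neurons."""
    command = reflex(stimulus)
    efference_copy = command  # an internal record of what the body was told to do
    return command, efference_copy

def step2_privatized(stimulus):
    """Step 2: the overt reflex is suppressed; only the internal record remains."""
    _, efference_copy = step1_efference_copy(stimulus)
    return efference_copy  # nothing is sent back out to the body

def step3_reverberation(stimulus, steps=50, decay=0.7):
    """Step 3: the internal signal feeds back on itself, 'thickening up' over time."""
    x = step2_privatized(stimulus)
    trace = [x]
    for _ in range(steps):
        x = decay * x  # a leaky echo: it persists for a while, then fades away
        trace.append(x)
    return trace

def step4_attractor(stimulus, steps=50, gain=2.5):
    """Step 4: evolution tunes the loop so that it settles into a stable attractor,
    repeating roughly the same pattern every time it is triggered."""
    x = step2_privatized(stimulus)
    trace = [x]
    for _ in range(steps):
        x = math.tanh(gain * x)  # saturating self-excitation with a nonzero fixed point
        trace.append(x)
    return trace

print(step3_reverberation(1.0)[-1])  # ~0: the echo has died away
print(step4_attractor(1.0)[-1])      # a stable, repeatable nonzero state remains
```

The only point of the toy is the qualitative difference between Step 3 and Step 4: a leaky echo eventually fades back to nothing, while a tuned self-exciting loop settles into the same stable state every time it is triggered.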
Animals and sentience
Humphrey discusses two behavioral patterns that I found compelling for his argument that birds and mammals, and no other animals, are sentient. First, Humphrey claims that all sentient animals–animals that experience internal qualia–are motivated to engage in sensory play in order to experiment with and learn about those qualia. When judging whether a species is sentient, then, a conclusive lack of sensory play is strong evidence that the species lacks sentience. In this evidential sense, play is necessary but not sufficient for sentience: sensory play is an inevitable consequence of sentience, so its absence counts strongly against it, while its presence alone does not prove it. Second, Humphrey says sensation seeking is strong evidence an animal is sentient, because non-sentient animals have no reason to seek out sensations. So sensation seeking is sufficient to indicate sentience, albeit not strictly necessary.
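One way I keep that evidential logic straight is to treat Humphrey’s two claims as likelihoods in a toy Bayesian update. The probabilities below are placeholders I made up purely to show the direction of the updates; nothing like them appears in the book.

```python
# Toy Bayesian bookkeeping for the two evidential claims above.
# Every probability here is a made-up placeholder, not a value from the book.

def posterior(prior, p_obs_if_sentient, p_obs_if_not):
    """P(sentient | observation) via Bayes' rule."""
    numerator = p_obs_if_sentient * prior
    return numerator / (numerator + p_obs_if_not * (1 - prior))

prior = 0.5  # hypothetical starting credence that some species is sentient

# Claim 1: sentience (nearly) always produces sensory play, so a conclusive
# *absence* of play is strong evidence against sentience.
print(posterior(prior, p_obs_if_sentient=0.05, p_obs_if_not=0.80))  # ~0.06

# Claim 2: non-sentient animals have no reason to seek sensations, so observed
# sensation-seeking is strong evidence *for* sentience.
print(posterior(prior, p_obs_if_sentient=0.70, p_obs_if_not=0.05))  # ~0.93
```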
I go back and forth on Humphrey’s behavioral argument. Brian Christian gives a strong argument for the utility of reinforcement learning agents having intrinsic motivation to explore and learn about the world around them. That might imply a kind of intrinsic attraction to novelty that seems not too different from sensation-seeking. You might also imagine it is evolutionarily adaptive for some specific kinds of perceptions, like the warmth of a close companion, to be intrinsically rewarding, irrespective of whether there are sensory qualia associated with them.
But if I understand Humphrey right, he’s clear that these behavioral patterns could in theory be replicated in non-conscious, non-sentient machines. It’s just that in humans, the solution evolution has hit upon is to create sensory-feedback-loop attractor states that Humphrey calls ipsundrums (ipsundra?) which happen to generate conscious experiences. Humans engage in play in order to learn about those conscious experiences, and engage in sensation-seeking because some of those experiences are inherently pleasurable. Other mammals and birds exhibiting the same behavior, having much the same neural circuitry, are probably engaging in that behavior for the same reason humans are–because they have internal conscious experiences. Other animals like fish, reptiles, and octopuses do not engage in sensation-seeking or play, and so, on Humphrey’s account, do not have those internal conscious experiences.
Implications
Humphrey’s theory of consciousness in S:TIOC implies that machines could, in principle, be conscious, if they have the same kind of reflective systems that sentient animals like humans do; but also that there’s probably no particular function or kind of intelligence that would require consciousness to operate. But it does seem possible that, if we were to try to emulate the human proprioceptive, learning, and decision-making system, we might (accidentally or otherwise) produce machine consciousness.
Humphrey’s ipsundrum theory of consciousness suggests that efforts to improve the living conditions of fish and shrimp may not actually decrease suffering of sentient creatures, because those animals are not sentient. This will be quite a controversial implication.
My hope is that if we can all agree the ipsundrum theory is really just a hypothesis at this point, we can also agree that, in expectation, given the current state of knowledge, fish and shrimp welfare is morally relevant–but that future evidence in favor of the ipsundrum hypothesis could change that expectation and suggest fish and shrimp welfare is no longer morally relevant. It may also turn out that, given the vast numbers of fish and shrimp, even a small probability that the ipsundrum hypothesis is wrong keeps fish and shrimp morally relevant relative to mammals and birds, who are more clearly sentient but far fewer in number.
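To make the expected-value point concrete, here is a back-of-envelope sketch. Every number in it is a placeholder I invented for illustration, not an estimate from any source.

```python
# Back-of-envelope expected-value sketch. Every number below is a placeholder
# invented for illustration, not an estimate from any source.

def expected_welfare_stake(population, p_sentient, welfare_weight):
    """Expected number of welfare-weighted sentient individuals at stake."""
    return population * p_sentient * welfare_weight

# Hypothetical figures:
shrimp  = expected_welfare_stake(population=4e11, p_sentient=0.05, welfare_weight=0.05)
chicken = expected_welfare_stake(population=7e10, p_sentient=0.95, welfare_weight=0.30)

print(f"shrimp (tiny probability, vast numbers): {shrimp:.1e}")   # 1.0e+09
print(f"chickens (high probability, fewer):      {chicken:.1e}")  # 2.0e+10
# Even with a small p_sentient the shrimp total stays enormous in absolute terms,
# which is why evidence that shifts p_sentient matters so much for prioritization.
```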
Research should investigate the hypotheses Nick Humphrey describes in order to reduce our uncertainty about them. Unfortunately, Humphrey doesn’t spend much time outlining hypothesis tests for his theories. Several parts of the theory could use additional testing:
At the neuroscientific level, how exactly should we identify the ipsundrum in humans?
Can we identify feedback-loop attractor states that correlate with the presence of conscious experience, appearing (for instance) during wakefulness and REM sleep but disappearing during deep sleep?
Might we simply look for bidirectional connectivity patterns between relevant brain areas; if so, which are the relevant brain areas?
How do we distinguish the presence of a sensory-loop “attractor state” from merely accidental feedback loops? Should we look to structural connectivity? (A toy sketch of one possible measure follows this list.)
Within animal ethology, is it really true that fish, shrimp, octopuses, and other animals of particular concern do not engage in sensation seeking or play?
Additional research into blindsight is also likely relevant.
There’s probably a much longer and more precise list of hypothesis tests we could come up with.
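As promised above, here is a toy sketch of one way the “attractor state versus accidental feedback loop” question might be operationalized on recorded activity. It is my own illustration, not a protocol from Humphrey: the simulated dynamics, the gain values, and the “persistent response” measure are all invented stand-ins.

```python
import numpy as np

# Toy operationalization (mine, not Humphrey's): distinguish recurrent activity
# that settles into a stable attractor from activity that merely echoes and fades,
# using only the recorded time series. Gains and thresholds are invented.

rng = np.random.default_rng(0)

def simulate(gain, steps=500, noise=0.05):
    """A one-unit recurrent system: x[t+1] = tanh(gain * x[t]) + noise.
    gain > 1 yields a stable nonzero attractor; gain < 1 just decays to baseline."""
    x = np.empty(steps)
    x[0] = 1.0  # a transient 'sensory' kick at t = 0
    for t in range(steps - 1):
        x[t + 1] = np.tanh(gain * x[t]) + noise * rng.standard_normal()
    return x

def persistent_response(x, threshold=0.3):
    """Does the evoked activity stay far from baseline long after the transient,
    rather than fading back? A crude proxy for an attractor-like state."""
    settled = x[len(x) // 2:]
    return bool(np.mean(np.abs(settled)) > threshold)

attractor_like = simulate(gain=2.5)  # candidate 'ipsundrum-like' dynamics
decaying_echo = simulate(gain=0.6)   # feedback without a nonzero attractor

print(persistent_response(attractor_like))  # True
print(persistent_response(decaying_echo))   # False
# A real test might compare measures like this across wakefulness, REM, and deep
# sleep, and across brain areas with and without the relevant bidirectional links.
```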
Probably most hypothesis-testing should concern biological organisms. But perhaps there is computational consciousness work to experiment with too. Could a reinforcement-learning-based, embodied artificial intelligence, endowed with sensory feedback loops that track its own body, use play to learn about its own sensory processes? Is it possible that, without building in explicit rewards for sensation seeking, sensation-seeking behavior might emerge simply from the reward structure of the sensory system?
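Here is a sketch of how the measurement side of such an experiment might look. Again, this is my own framing, not a worked-out protocol: the agents, states, and trajectories below are hypothetical placeholders.

```python
from collections import Counter

# Sketch of the measurement side of such an experiment (my framing, not a
# worked-out protocol). The idea: train two otherwise-identical agents, one with
# an internal sensory-feedback loop and one without, reward them only for the
# task, then check whether the looped agent spends disproportionate time in
# richly stimulating but task-irrelevant states. All names and trajectories
# below are hypothetical placeholders.

def sensation_seeking_score(trajectory, stimulating_states):
    """Fraction of visited states that are high-stimulation but carry no task reward."""
    visits = Counter(trajectory)
    total = sum(visits.values())
    if total == 0:
        return 0.0
    return sum(count for state, count in visits.items() if state in stimulating_states) / total

stimulating = {"waterfall", "mirror", "novel_object"}           # no task reward here
looped_agent_traj  = ["corridor", "mirror", "mirror", "goal"]   # placeholder data
ablated_agent_traj = ["corridor", "corridor", "goal", "goal"]   # placeholder data

print(sensation_seeking_score(looped_agent_traj, stimulating))   # 0.5
print(sensation_seeking_score(ablated_agent_traj, stimulating))  # 0.0
# A persistent gap between the two, with no explicit sensation reward anywhere in
# training, would be weak behavioral evidence for Humphrey's story, though it
# would say nothing directly about phenomenal experience.
```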
Such work, run with appropriate agents, would not necessarily be more unethical than research on animals, and might be much more ethical if the agents were deliberately designed without, for example, a desire for self-preservation–although if Humphrey is right, such a drive does seem to be intrinsic to sentience. That sort of experimentation is also not in itself dangerous from an existential-risk perspective, provided it is performed on systems with fairly limited intelligence, limited awareness of the wider world, and no ability to modify their own basic reward systems. If Humphrey is right, sentience could arise in a machine of fairly limited intelligence.