I find this post interesting, because I think it’s important to be conceptually clear about animal minds, but I strongly disagree with its conclusions.
It’s true that animals (and AIs) might be automatons: they might simulate qualia without really experiencing them. And it’s true that humans might anthropomorphise by seeing qualia in animals, or AIs, or arbitrary shapes that don’t really have them. (You might enjoy John Bradshaw’s The Animals Among Us, which has a chapter on just this topic).
But I don’t see why an ability to talk about your qualia would be a suitable test for your qualia’s realness. I can imagine talking automatons, and I can imagine non-talking non-automatons. If I prod an LLM with the right prompts, it might describe ‘its’ experiences to me; this is surreal and freaky, but it doesn’t yet persuade me that the LLM has qualia, that there is something it is like to be an LLM. And, likewise, I can imagine a mute person, or a person afflicted with locked-in syndrome, who experiences qualia but can’t talk about it. You write: “We expect that even if someone can’t (e.g., they can’t talk at all) but we ask them in writing or restore their ability to respond, they’d talk about qualia”. But I don’t see how “restor[ing] their ability to respond” is different to ‘granting animals the ability to respond’; just as you expect humans granted voice to talk about their qualia, I expect many animals granted voice to talk about their qualia. (It seems quixotic, but some researchers are really exploring this right now, using AI to try to translate animal languages). Your test would treat the “very human-like” screaming of pigs at slaughter as no evidence at all for their qualia. The boundary between screams and words is fuzzy, the distinction arbitrary. I think it’s a speciesist way to draw the line: the question is not, Can they talk?
I would be a little out of my depth talking about better tests for animal consciousness, but as far as I know the canonical book on fish consciousness is Do Fish Feel Pain? by Victoria Braithwaite. If you haven’t read it, I think you’d find it interesting. I also second Angelina and Constance’s comments, which share valuable information about our evidence base on invertebrate sentience.
Some evidence on animal consciousness is more convincing than other evidence. Braithwaite makes a stronger case than this post. But the questions definitely aren’t answered, and they might be fundamentally unanswerable! So: what do we do? I don’t think we can say, ‘I believe fish and shrimp don’t experience qualia, and therefore there are no ethical issues with eating them.’ We should adopt the Precautionary Principle: ‘I think there’s some chance, even if it’s a low chance, that fish and shrimp experience qualia, so there could be ethical issues with eating them’. In a world with uncertainty about whether fish and shrimp experience qualia, one scenario is the torture and exploitation of trillions, and another scenario is a slightly narrower diet. Why risk an ethically catastrophic mistake?
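To make that asymmetry concrete, here’s a toy expected-value sketch; every number in it is an invented placeholder, not an estimate of anything:

```python
# Toy expected-value sketch of the Precautionary Principle argument.
# Every number here is an invented placeholder, not an estimate.

p_fish_qualia = 0.10          # assumed low chance that fish experience qualia
n_animals = 2e12              # order of trillions of fish/shrimp killed
harm_per_animal = 1.0         # arbitrary units of suffering, if they have qualia
cost_narrower_diet = 1e6      # arbitrary units: the cost of a narrower diet

expected_harm_eating = p_fish_qualia * n_animals * harm_per_animal
print(f"Expected harm if we keep eating:  {expected_harm_eating:.2e}")
print(f"Cost of a slightly narrower diet: {cost_narrower_diet:.2e}")
# Even at a low probability, the expected harm dwarfs the cost, which is
# the intuition behind 'why risk an ethically catastrophic mistake?'
```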
(writing in a personal capacity)
Thanks for the comment!
As I mentioned in the post, if an LLM talks about qualia, either it has qualia, or qualia somewhere else caused some texts to exist and the LLM read those texts:
If the LLM describes “its experience” to you, and the experience matches your own subjective experience, you can be pretty sure there’s subjective experience somewhere in the causal structure behind the LLM’s outputs. If the LLM doesn’t have subjective experience but talks about it, that means someone had subjective experience, which made them write a text about it, which the LLM then read. You shouldn’t expect an LLM to talk about subjective experience if it was never trained on anything caused by subjective experience and doesn’t have subjective experience itself.
This means that the ability to talk about qualia is extremely strong evidence for either having qualia or having learned about qualia from something that has qualia talking about them.
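To put the same point in Bayesian terms, here’s a minimal sketch; the probabilities are illustrative assumptions, not measurements:

```python
# Toy Bayesian reading of 'talking about qualia is strong evidence'.
# All probabilities are illustrative assumptions, not measurements.

p_talk_given_link = 0.95     # P(talks about qualia | has qualia, or was
                             # trained on text caused by qualia)
p_talk_given_no_link = 1e-6  # P(talks about qualia | no causal link at all)
prior_link = 0.5             # prior that some causal link to qualia exists

likelihood_ratio = p_talk_given_link / p_talk_given_no_link

posterior = (p_talk_given_link * prior_link) / (
    p_talk_given_link * prior_link
    + p_talk_given_no_link * (1 - prior_link)
)
print(f"Likelihood ratio: {likelihood_ratio:.0e}")  # ~1e6: 'extremely strong'
print(f"Posterior P(causal link | talks about qualia): {posterior:.6f}")
```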
I don’t think fish simulate qualia; I think they’re just automatons, with nothing resembling experience. They perform adaptations that include efficient reinforcement learning but don’t include experience of the information being processed.
How do you know whether you scream because of the subjective experience of pain, or because of instinctive mechanisms for avoiding death? How do you know the scream is caused by the outputs of the neural circuits running qualia, and not just by the same signals that feed the inputs of those circuits, the signals you experience as extremely unpleasant?
It’s not about whether they can talk; parrots and LLMs can be trained to produce words in reaction to stimuli. If you can talk about having subjective experience, it is valid to assume there’s subjective experience somewhere down the line. If you can’t talk about subjective experience, other, indirect evidence is needed. Assuming something has subjective experience because it reacts to external stimuli the way beings with subjective experience do is pattern-matching: it works on humans for the reasons above, but it’s invalid for everything else without independent evidence of qualia. Neural networks trained with RL will react to pain-like signals for whatever incentive you build in; given incentives like the evolutionary ones behind screaming, RL agents would scream at pain, and that provides no evidence about whether there’s also experience of anything in them.
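To illustrate that last point, here’s a minimal reinforcement-learning toy (a one-state bandit with a hypothetical incentive structure) in which an agent learns to ‘scream’ purely because screaming is reinforced; there is no experience anywhere in the loop:

```python
import random

# Minimal RL toy: an agent learns to 'scream' when damaged, purely because
# screaming summons help (less negative reward). Nothing in this loop
# experiences anything; the behaviour is just reinforced.

ACTIONS = ["stay_silent", "scream"]
q = {a: 0.0 for a in ACTIONS}  # single 'in pain' state, so one row of values
alpha, epsilon = 0.1, 0.1

def reward(action: str) -> float:
    # Hypothetical incentive structure: screaming brings help, cutting the damage.
    return -1.0 if action == "scream" else -5.0

for _ in range(1000):
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])

print(q)                  # q['scream'] ~ -1, q['stay_silent'] ~ -5
print(max(q, key=q.get))  # -> 'scream': pain-behaviour without experience
```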
I’m certain enough that fish don’t have qualia to be OK with eating fish; if we solve the more critical short-term problems, then hopefully, in the future, we’ll figure out how subjective experience actually works and will know for sure.