splinter—if we restrict attention to sentience (capacity to feel pleasure/pain, or to flourish/suffer) rather than consciousness, then it would be very difficult for any AI findings or capabilities to challenge my conviction that most non-human, mobile animals are sentient.
The reasons are evolutionary and functional. Almost every animal nervous system evolves to be capable of adjusting its behavior based on feedback from the environment, in the form of positive and negative reinforcers, which basically boil down to pleasure and pain signals. My hunch is that any animal capable of operant conditioning is sentient in a legitimate sense—and that would include basically all vertebrates with a central nervous system (inc. mammals, birds, reptiles), and also most invertebrates that evolved to move around to find food and avoid predators.
So, consciousness is a red herring. If we’re interested in the question of whether non-human animals can suffer, we need to ask whether they can be operantly conditioned by any negative reinforcers. The answer, almost always, is ‘yes’.
I am using conscious and sentient as synonyms. Apologies if this is confusing.
I don’t doubt at all that all animals are sentient in the sense that you mean. But I am referring to the question of whether they have subjective experience—not just pleasure and pain signals but also a subjective experience of pleasure and pain.
This doesn’t feel like a red herring to me. Suffering only takes on a moral valence if it describes a conscious experience.
splinter—I strongly disagree on that. I think consciousness is built up out of valenced reactions to things (e.g. pleasure/pain signals); it’s not some qualitatively special overlay on top of those signals.
And I don’t agree that suffering is only morally relevant if it’s ‘consciously experienced’.
Not to rehash everyone’s well-rehearsed position on the hard problem, but surely in the above sentience is the red herring? If non-human animals are not conscious, i.e. “there are no lights on inside” not just “the lights are on but dimmer”, then there is actually no suffering?
Edit: A good intuition pump on this crux is David Chalmers' 'Vulcan' thought experiment (see the 80k podcast transcript). My intuition tells me we care about the Vulcans, but maybe the dominant philosophy of mind position in EA is to not care about them (I might be confounding the overlap between illusionism and negative utilitarianism, though)? That seems like a pretty big crux to me.
I don’t see, at the evolutionary-functional level, why human-type ‘consciousness’ (whatever that means) would be required for sentience (adaptive responsiveness to positive/negative reinforcers, i.e. pleasure/pain). Sentience seems much more foundational, operationalizable, testable, functional, and clear.
But then, 99% of philosophical writing about consciousness strikes me as wildly misguided, speculative, vague, and irrelevant.
Psychology has been studying ‘consciousness’ ever since the 1850s, and has made a lot of progress. Philosophy, not so much, IMHO.
Follow-up: I’ve never found Chalmers’ zombie or Vulcan thought experiments at all compelling. They sound plausible at first glance as interesting edge cases, but I think they’re not at all plausible or illuminating if one asks how such a hypothetical being could have evolved, and whether their cognitive/affective architecture really makes sense. The notion of a mind that doesn’t have any valences regarding external objects, beings, or situations would boil down to a mind that can’t make any decisions, can’t learn anything (through operant conditioning), and can’t pursue any goals—i.e. not a ‘mind’ at all.
I critiqued the Chalmers zombie thought experiment in this essay from c. 1999. Also see this shorter essay about the possible functions of human consciousness, which I think center around ‘public relations’ functions in our hypersocial tribal context, more than anything else.