I believe the “consciousness requires having a self-model” claim is the only coherent basis for rejecting animals’ moral patienthood, but I don’t understand the argument for why the claim is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone else attempt to defend this position.
I’ve seen Dan Dennett (in effect) argue for it as follows: if a human adult subject reports NOT experiencing something in a lab experiment, and we’re sure they’re sincere and that they were paying attention to what they were experiencing, that is immediately pretty much 100% proof that they are not having a conscious experience of that thing, no matter what is going on in the purely perceptual (functional) regions of their brains and how much it resembles typical cases of a conscious experience of that thing. The best explanation for this is that it’s just part of our concept of “conscious” that a conscious experience is one that you’re (at least potentially) introspectively aware that you’re having. Indeed (my point, not Dennett’s), this is how we found out that there is such a thing as “unconscious perception”: we found out that information about external things can get into the brain through the eye without the person being aware that that information is there. If we didn’t think that conscious experiences are ones you’re (at least potentially) introspectively aware of having, it’s not clear why this would count as evidence for the existence of unconscious perception. But almost all consciousness scientists and philosophers of mind accept that unconscious perception can happen.
Here’s Dennett (from a paper co-authored with someone else) in his own words on this, critiquing a particular neuroscientific theory of consciousness:
“It is easy to imagine what a conversation would sound like between F&L and a patient (P) whose access to the locally recurrent activity for color was somehow surgically removed. F&L: ‘You are conscious of the redness of the apple.’ P: ‘I am? I don’t see any color. It just looks grey. Why do you think I’m consciously experiencing red?’ F&L: ‘Because we can detect recurrent processing in color areas in your visual cortex.’ P: ‘But I really don’t see any color. I see the apple, but nothing colored. Yet you still insist that I am conscious of the color red?’ F&L: ‘Yes, because local recurrency correlates with conscious awareness.’ P: ‘Doesn’t it mean something that I am telling you I’m not experiencing red at all? Doesn’t that suggest local recurrency itself isn’t sufficient for conscious awareness?’”
I don’t personally endorse Dennett’s view on this: I give to animal causes, I think it is a big mistake to be so sure of the view that you ignore the risk of animal suffering entirely, and I don’t think we can just assume that animals can’t be introspectively aware of their own experiences. But I don’t think the view itself is crazy or inexplicable, and I have moderate credence (maybe 25%?) that it is correct.
FWIW, Dennett ended up believing chickens, octopuses and bees are conscious, anyway. He was an illusionist, but I think his view, like Keith Frankish’s, was not that an animal literally needs to have an illusion of phenomenal consciousness or be able to introspect to be conscious in a way that matters. The illusions and introspection just explain why we humans believe in phenomenal consciousness, but first-order consciousness still matters without them.
And he was a gradualist. He thought introspection and higher-order thoughts made for important differences and was skeptical of them in other animals (Dennett, 2018, pp. 168–169). I don’t know how morally important he found these differences to be, though.
I think there’s a difference between access and phenomenal consciousness. You can have bits of your visual field, for instance, that you’re not introspectively aware of but are part of your consciousness. You also can have access consciousness that you can’t talk about (e.g. if you can’t speak). Not sure why we’d deny that animals have access consciousness.
“You can have bits of your visual field, for instance, that you’re not introspectively aware of but are part of your consciousness” Maybe, but in the current context this is basically begging the question, whereas I’ve at least sketched an argument (albeit one you can probably resist without catastrophic cost).
EDIT: Strictly speaking, I don’t think people with the Dennettian view have to or should deny that there is phenomenally conscious content that isn’t in fact introspectively accessed. What they do/should deny is that there is p-conscious content that you couldn’t access even if you tried.
But to elaborate, the answer is illusionism about phenomenal consciousness, the only (physicalist) account of consciousness that seems to me to be on track to address the hard problem (by dissolving it and saying there are no phenomenal properties) and the meta-problem of consciousness. EDIT: To have an illusion of phenomenal properties, you have to model those phenomenal properties. The illusion is just the model, aspects of it, or certain things that depend on it. That model is (probably) some kind of model of yourself, or aspects of your own internal processing, e.g. an attention schema.
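To make the "illusion is just the model" idea concrete, here is a toy sketch (my own construction, purely illustrative, not from any paper, and not an implementation of any actual attention-schema model): a system whose first-order processing is just a numeric channel, but whose introspective self-model re-describes that processing in richer, "phenomenal" terms than anything the mechanism actually instantiates.

```python
# Toy illustration of the illusionist claim: an introspective self-model
# can attribute "phenomenal" properties that nothing in the underlying
# mechanism actually has. All names here are hypothetical.

class Agent:
    def __init__(self):
        # Actual first-order processing: just a number on a channel.
        self.red_channel = 0.0

    def perceive(self, red_intensity):
        self.red_channel = red_intensity

    def introspect(self):
        # The self-model misdescribes the channel in "phenomenal" terms
        # (ineffable, intrinsic) that the mechanism doesn't instantiate.
        if self.red_channel > 0.5:
            return "I am having an ineffable, intrinsic experience of redness"
        return "I experience nothing red"

agent = Agent()
agent.perceive(0.9)
print(agent.introspect())
```

The point of the sketch is only that the report comes from the model, and the model can be wrong about what the mechanism is like; it isn't meant to show that such a system is conscious.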
To prevent any misunderstanding: illusionism doesn’t deny that consciousness exists in some form, it just denies that consciousness is phenomenal, or that there are phenomenal properties. It also denies the classical account of qualia, i.e. as ineffable and so on.
I think illusionism is extremely crazy, but even if you adopt it, I don’t know why it dissolves the problem any more to say “what we think of as consciousness is really just the brain modelling itself” rather than “what we think of as consciousness is really the brain integrating information.”
The brain modelling itself as having phenomenal properties would (partly) explain why people believe consciousness has phenomenal properties, i.e. that consciousness is phenomenal. In fact, you model yourself as having phenomenal properties whether or not illusionism is true, if it seems to you that you have phenomenal consciousness. That seeming, or appearance, has to have some basis in your brain, and that is a model.
Illusionism just says there aren’t actually any phenomenal properties, so their appearance, i.e. their seeming to exist, is an illusion, and your model is wrong.
Illusionism dissolves the hard problem because, under illusionism, phenomenal consciousness doesn’t exist: consciousness has no phenomenal properties. And we have a guide to solving the meta-problem under illusionism and verifying our dissolution of the hard problem:
1. we find the models of phenomenal properties in our brains (which exist whether or not illusionism is true), and check that they don’t depend (causally or constitutively) on any actual phenomenal properties existing, or
2. we otherwise give a persuasive argument that consciousness doesn’t have any phenomenal properties, or that phenomenal properties don’t exist.
On the other hand, saying that consciousness just is information integration, while also denying phenomenal properties, would indeed also dissolve the hard problem. Saying that phenomenal consciousness just is information integration would instead solve the hard problem.
But both information integration accounts are poorly motivated, and I don’t think anyone should give much credence to either. A good (dis)solution should be accompanied by an explanation of why many people believe consciousness has phenomenal properties, and so should solve the meta-problem, or at least give us a path to solving it. I don’t think this would happen with (phenomenal) consciousness as mere information integration. Why would information integration, generically, lead to beliefs in phenomenal consciousness?
There doesn’t seem to be much logical connection here. Of course, beliefs in phenomenal consciousness depend on information integration, but very few instances of information integration seem to have any connection to such beliefs at all. Information integration is nowhere close to a sufficient explanation.
And this seems to me to be the case for every attempted solution to the hard problem I’ve seen: they never give a good explanation for the causes of our beliefs in phenomenal consciousness.
I think that Dennett probably said inconsistent things about this over time.
Yeah, it’s very bizarre. It seems to be just vibes.