though unlike Eliezer, I don’t come to my conclusions about animal consciousness from the armchair without reviewing any evidence
A bit of a nitpick, but I think Eliezer has a very high bar for attributing consciousness and is aware of relevant evidence for that bar, e.g. evidence for theory of mind or a robust self-model.
And this gets into the kind of views to which I’m sympathetic.
I am quite sympathetic to the kind of view Eliezer seems to endorse and the importance of something like a self-model, but my bar for self-models is probably much lower: I think many animals have at least modest self-models, probably including all mammals and birds, though I’m not confident about others. More on this kind of view here and here.
On the other hand, I am also sympathetic to counting anything that looks like pleasure, unpleasantness, desire as motivational salience (an attentional mechanism) or beliefs about betterness/worseness/good/bad, basically any kind of evaluative attitude about anything, or any way of caring about anything. If some system cares about something, I want to empathize and try to care about that in the same way on their behalf.[1] I discuss such attitudes more here and this view more in a draft (which I’m happy to share).
And I’m inclined to count these attitudes whether they’re “conscious” or not, however we characterize consciousness. Or, these processes just ground something worth recognizing as conscious, anyway.
Under this view, I probably end up basically agreeing with you about which animals count, on the basis of evidence about desire as motivational salience and/or pleasure/unpleasantness-like states.
However, there could still be important differences in degree if and because they meet different bars, and I have some sympathy for some neuron count-related arguments that favour brains with more neurons (point 2 here). I also give substantial weight to the possibilities that:
maximum intensities for desires as motivational salience (and maybe hedonic states like pleasure and unpleasantness) are similar,
there’s (often) no fact of the matter about how to compare them.
[1] Focusing on what they care about intrinsically or terminally, not instrumental or derived concerns. And, of course, I have to deal with intrapersonal and interpersonal trade-offs.
I believe the “consciousness requires having a self-model” view is the only coherent basis for rejecting animals’ moral patienthood, but I don’t understand the argument for why it’s supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone else attempt to defend this position.
I’ve seen Dan Dennett (in effect) argue for it as follows: if a human adult subject reports NOT experiencing something in a lab experiment, and we’re sure they’re sincere and were paying attention to what they were experiencing, that is immediately pretty much 100% proof that they are not having a conscious experience of that thing, no matter what is going on in the purely perceptual (functional) regions of their brains and how much it resembles typical cases of a conscious experience of that thing. The best explanation for this is that it’s just part of our concept of “conscious” that a conscious experience is one that you’re (at least potentially) introspectively aware that you’re having. Indeed (my point, not Dennett’s), this is how we found out that there is such a thing as “unconscious perception”: we found out that information about external things can get into the brain through the eye without the person being aware that that information is there. If we don’t think that conscious experiences are ones you’re (at least potentially) introspectively aware of having, it’s not clear why this would be evidence for the existence of unconscious perception. But almost all consciousness scientists and philosophers of mind accept that unconscious perception can happen.
Here’s Dennett (from a paper co-authored with someone else) in his own words on this, critiquing a particular neuroscientific theory of consciousness:
“It is easy to imagine what a conversation would sound like between F&L and a patient (P) whose access to the locally recurrent activity for color was somehow surgically removed. F&L: ‘You are conscious of the redness of the apple.’ P: ‘I am? I don’t see any color. It just looks grey. Why do you think I’m consciously experiencing red?’ F&L: ‘Because we can detect recurrent processing in color areas in your visual cortex.’ P: ‘But I really don’t see any color. I see the apple, but nothing colored. Yet you still insist that I am conscious of the color red?’ F&L: ‘Yes, because local recurrency correlates with conscious awareness.’ P: ‘Doesn’t it mean something that I am telling you I’m not experiencing red at all? Doesn’t that suggest local recurrency itself isn’t sufficient for conscious awareness?’”
I don’t personally endorse Dennett’s view on this, I give to animal causes, and I think it is a big mistake to be so sure of it that you ignore the risk of animal suffering entirely, plus I don’t think we can just assume that animals can’t be introspectively aware of their own experiences. But I don’t think the view itself is crazy or inexplicable, and I have moderate credence (25% maybe?) that it is correct.
I don’t personally endorse Dennett’s view on this, I give to animal causes, and I think it is a big mistake to be so sure of it that you ignore the risk of animal suffering entirely, plus I don’t think we can just assume that animals can’t be introspectively aware of their own experiences.
FWIW, Dennett ended up believing chickens, octopuses and bees are conscious, anyway. He was an illusionist, but I think his view, like Keith Frankish’s, was not that an animal literally needs to have an illusion of phenomenal consciousness or be able to introspect to be conscious in a way that matters. The illusions and introspection just explain why we humans believe in phenomenal consciousness, but first-order consciousness still matters without them.
And he was a gradualist. He thought introspection and higher-order thoughts made for important differences and was skeptical of them in other animals (Dennett, 2018, pp. 168-169). I don’t know how morally important he found these differences to be, though.
I think that Dennett probably said inconsistent things about this over time.
I think there’s a difference between access and phenomenal consciousness. You can have bits of your visual field, for instance, that you’re not introspectively aware of but are part of your consciousness. You also can have access consciousness that you can’t talk about (e.g. if you can’t speak). Not sure why we’d deny that animals have access consciousness.
“You can have bits of your visual field, for instance, that you’re not introspectively aware of but are part of your consciousness”
Maybe, but in the current context this is basically begging the question, whereas I’ve at least sketched an argument (albeit one you can probably resist without catastrophic cost).
EDIT: Strictly speaking, I don’t think people with the Dennettian view have to or should deny that there is phenomenally conscious content that isn’t in fact introspectively accessed. What they do/should deny is that there is p-conscious content that you couldn’t access even if you tried.
From my comment above:
But to elaborate, the answer is illusionism about phenomenal consciousness, the only (physicalist) account of consciousness that seems to me to be on track to address the hard problem (by dissolving it and saying there are no phenomenal properties) and the meta-problem of consciousness. EDIT: To have an illusion of phenomenal properties, you have to model those phenomenal properties. The illusion is just the model, aspects of it, or certain things that depend on it. That model is (probably) some kind of model of yourself, or aspects of your own internal processing, e.g. an attention schema.
To prevent any misunderstanding, illusionism doesn’t deny that consciousness exists in some form; it just denies that consciousness is phenomenal, or that there are phenomenal properties. It also denies the classical account of qualia, i.e. that they are ineffable and so on.
I think illusionism is extremely crazy, but even if you adopt it, I don’t know why it dissolves the problem more to say “what we think of as consciousness is really just the brain modelling itself,” rather than “what we think of as consciousness is really the brain integrating information.”
The brain modelling itself as having phenomenal properties would (partly) explain why people believe consciousness has phenomenal properties, i.e. that consciousness is phenomenal. In fact, you model yourself as having phenomenal properties whether or not illusionism is true, if it seems to you that you have phenomenal consciousness. That seeming, or appearance, has to have some basis in your brain, and that is a model.
Illusionism just says there aren’t actually any phenomenal properties, so their appearance, i.e. their seeming to exist, is an illusion, and your model is wrong.
Illusionism dissolves the hard problem because, under illusionism, phenomenal consciousness doesn’t exist: consciousness has no phenomenal properties. And we have a guide to solving the meta-problem and verifying this dissolution of the hard problem:
we find the models of phenomenal properties in our brains (which exist whether or not illusionism is true), and check that they don’t depend (causally or constitutively) on any actual phenomenal properties existing, or
we otherwise give a persuasive argument that consciousness doesn’t have any phenomenal properties, or that phenomenal properties don’t exist.
On the other hand, saying consciousness just is information integration and denying phenomenal properties together would indeed also dissolve the hard problem. Saying phenomenal consciousness just is information integration would solve the hard problem.
But both information integration accounts are poorly motivated, and I don’t think anyone should give much credence to either. A good (dis)solution should be accompanied by an explanation of why many people believe consciousness has phenomenal properties, and so should solve the meta-problem, or at least give us a path to solving it. I don’t think this would happen with (phenomenal) consciousness as mere information integration. Why would information integration, generically, lead to beliefs in phenomenal consciousness?
There doesn’t seem to be much logical connection here. Of course, beliefs in phenomenal consciousness depend on information integration, but very few instances of information integration seem to have any connection to such beliefs at all. Information integration is nowhere close to a sufficient explanation.
And this seems to me to be the case for every attempted solution to the hard problem I’ve seen: they never give a good explanation for the causes of our beliefs in phenomenal consciousness.
Yeah it’s very bizarre. Seems just to be vibes.
Interesting! I intended the post largely as a response to someone with views like yours. In short, I think the considerations I provided based on how animals behave are very well explained by the supposition that they’re conscious. I also find RP’s arguments against neuron counts completely devastating.
I worked on some of them with RP myself here.
FWIW, I found Adam’s arguments convincing against the kinds of views he argued against, but I don’t think they covered the cases in point 2 here.
RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing.
In regards to your first point, I don’t see either why we’d think that degree of attention correlates with neuron counts or determines the intensity of consciousness
RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing.
I might have written some of them! I still have some sympathy for the hypothesis, and I think it can matter when you reason using expected values, taking the arguments into account, even if you assign the hypothesis only something like a 1% probability. The probabilities can matter here.
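To illustrate why the probabilities can matter, here’s a toy expected-value calculation with numbers I’m making up purely for illustration (they aren’t RP’s or anyone’s estimates): suppose the conscious-subsystems hypothesis, if true, would multiply the moral weight at stake by a factor N, and you give it a 1% credence. Then, relative to a baseline weight w:
E[weight] = 0.99 × w + 0.01 × N × w
With N = 1,000, for example, this is about 11 × w, so the low-probability hypothesis contributes most of the expected value whenever N is much larger than 100.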
In regards to your first point, I don’t see either why we’d think that degree of attention correlates with neuron counts or determines the intensity of consciousness
I believe the intensity of suffering consists largely (maybe not exclusively) in how much it pulls your attention, specifically its motivational salience. Intense suffering that’s easy to ignore seems like an oxymoron. I discuss this a bit more here.
Welfare Footprint Project’s pain definitions also refer to attention as one of the criteria (along with other behaviours):
Annoying pain:
(...) Sufferers can ignore this sensation most of the time. Performance of cognitive tasks demanding attention are either not affected or only mildly affected. (...)
Hurtful pain:
(...) Different from Annoying pain, the ability to draw attention away from the sensation of pain is reduced: awareness of pain is likely to be present most of the time, interspersed by brief periods during which pain can be ignored depending on the level of distraction provided by other activities. (...)
Disabling pain:
(...) Inattention and unresponsiveness to milder forms of pain or other ongoing stimuli and surroundings is likely to be observed. (...)
Excruciating pain seems entirely behaviourally defined, but I would assume effects on attention like those of disabling pain, or (much) stronger.
Then, we can ask “how much attention can be pulled?” And we might think (see the sketch after this list):
having more things you’re aware of simultaneously (e.g. more details in your visual field) means you have more attention to pull, and
more neurons allows you to be aware of more things simultaneously,
so brains with more neurons can have more attention to pull.
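To make that hypothesis a bit more concrete, here is a rough sketch in notation of my own (an illustrative assumption, not anything established): let A(n) be a brain’s total attention capacity, taken to increase with its neuron count n, and let a be the amount of attention actually pulled by the pain. Then:
intensity ≈ a, with 0 ≤ a ≤ A(n)
so the maximum possible intensity is bounded by A(n), and brains with more neurons have a higher ceiling. This is only meant to spell out the shape of the argument above, not to claim any particular functional form for A(n).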
I don’t think this is right. We could imagine a very simple creature that experiences very little pain but is totally focused on it. It’s true that normally for creatures like us, we tend to focus more on more intense pain, but this doesn’t mean that’s the relevant benchmark for intensity. My claim is that the causal arrow goes the other way.
But if I did, I think this would make me think animal consciousness is even more serious. For simple creatures, pain takes up their whole world.
Maybe it’ll help for me to rephrase: if a being has more things it can attend to (be aware of, have in its attention) simultaneously, then it has more attention to pull. It can attend to more, all else equal, for example, if it has a richer/more detailed visual field, similar to more pixels in a computer screen.
We could imagine a very simple creature that experiences very little pain but is totally focused on it.
If it’s very simple, it would probably have very little attention to pull (relatively), so the pain would not be intense under the hypothesis I’m putting forward.
But if I did, I think this would make me think animal consciousness is even more serious. For simple creatures, pain takes up their whole world.
I also give some weight to this possibility, i.e. that we should measure attention in individual-relative terms, and it’s something more like the proportion of attention pulled that matters.
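In the same rough notation as the earlier sketch (again, just an illustrative assumption), this individual-relative version would be something like:
intensity ≈ a / A(n)
so a very simple creature whose pain captures essentially all of its (small) attention capacity could still register at or near maximal intensity, which is closer to the “pain takes up their whole world” picture.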
EDIT: And, as you may have had in mind, this seems more consistent with Welfare Footprint Project’s definitions of pain intensity, assuming each category (annoying, hurtful, disabling and excruciating) falls in the same intensity range across animals.