though unlike Eliezer, I don't come to my conclusions about animal consciousness from the armchair without reviewing any evidence
A bit of a nitpick, but I think Eliezer has a very high bar for attributing consciousness and is aware of relevant evidence for that bar, e.g. evidence for theory of mind or a robust self-model.
And this gets into the kind of views to which I'm sympathetic.
I am quite sympathetic to the kind of view Eliezer seems to endorse and the importance of something like a self-model, but my bar for self-models is probably much lower and I think many animals have at least modest self-models, including probably all mammals and birds, but I'm not confident about others. More on this kind of view here and here.
On the other hand, I am also sympathetic to counting anything that looks like pleasure, unpleasantness, desire as motivational salience (an attentional mechanism) or beliefs about betterness/worseness/good/bad: basically, any kind of evaluative attitude about anything, or any way of caring about anything. If some system cares about something, I want to empathize and try to care about that in the same way on their behalf.[1] I discuss such attitudes more here and this view more in a draft (which I'm happy to share).
And I'm inclined to count these attitudes whether they're "conscious" or not, however we characterize consciousness. Or, these processes just ground something worth recognizing as conscious, anyway.
Under this view, I probably end up basically agreeing with you about which animals count, on the basis of evidence about desire as motivational salience and/or pleasure/unpleasantness-like states.
Focusing on what they care about intrinsically or terminally, not instrumental or derived concerns. And, of course, I have to deal with intrapersonal and interpersonal trade-offs.
I believe "consciousness requires having a self-model" is the only coherent model for rejecting animals' moral patienthood, but I don't understand the argument for why the model is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone else attempt to defend this position.
I've seen Dan Dennett (in effect) argue for it as follows: if a human adult subject reports NOT experiencing something in a lab experiment, and we're sure they're sincere and that they were paying attention to what they were experiencing, that is immediately pretty much 100% proof that they are not having a conscious experience of that thing, no matter what is going on in the purely perceptual (functional) regions of their brains and how much it resembles typical cases of a conscious experience of that thing. The best explanation for this is that it's just part of our concept of "conscious" that a conscious experience is one that you're (at least potentially) introspectively aware that you're having. Indeed (my point, not Dennett's), this is how we found out that there is such a thing as "unconscious perception": we found out that information about external things can get into the brain through the eye without the person being aware that that information is there. If we don't think that conscious experiences are ones you're (at least potentially) introspectively aware of having, it's not clear why this would be evidence for the existence of unconscious perception. But almost all consciousness scientists and philosophers of mind accept that unconscious perception can happen.
Here's Dennett (from a paper co-authored with someone else) in his own words on this, critiquing a particular neuroscientific theory of consciousness:
"It is easy to imagine what a conversation would sound like between F&L and a patient (P) whose access to the locally recurrent activity for color was somehow surgically removed. F&L: 'You are conscious of the redness of the apple.' P: 'I am? I don't see any color. It just looks grey. Why do you think I'm consciously experiencing red?' F&L: 'Because we can detect recurrent processing in color areas in your visual cortex.' P: 'But I really don't see any color. I see the apple, but nothing colored. Yet you still insist that I am conscious of the color red?' F&L: 'Yes, because local recurrency correlates with conscious awareness.' P: 'Doesn't it mean something that I am telling you I'm not experiencing red at all? Doesn't that suggest local recurrency itself isn't sufficient for conscious awareness?'"
I don't personally endorse Dennett's view on this: I give to animal causes, I think it is a big mistake to be so sure of it that you ignore the risk of animal suffering entirely, and I don't think we can just assume that animals can't be introspectively aware of their own experiences. But I don't think the view itself is crazy or inexplicable, and I have moderate credence (maybe 25%?) that it is correct.
FWIW, Dennett ended up believing chickens, octopuses and bees are conscious, anyway. He was an illusionist, but I think his view, like Keith Frankish's, was not that an animal literally needs to have an illusion of phenomenal consciousness or be able to introspect to be conscious in a way that matters. The illusions and introspection just explain why we humans believe in phenomenal consciousness, but first-order consciousness still matters without them.
And he was a gradualist. He thought introspection and higher-order thoughts made for important differences and was skeptical of them in other animals (Dennett, 2018, pp. 168-169). I don't know how morally important he found these differences to be, though.
I think there's a difference between access and phenomenal consciousness. You can have bits of your visual field, for instance, that you're not introspectively aware of but are part of your consciousness. You can also have access consciousness that you can't talk about (e.g. if you can't speak). Not sure why we'd deny that animals have access consciousness.
"You can have bits of your visual field, for instance, that you're not introspectively aware of but are part of your consciousness"
Maybe, but in the current context this is basically begging the question, whereas I've at least sketched an argument (albeit one you can probably resist without catastrophic cost).
EDIT: Strictly speaking, I don't think people with the Dennettian view have to or should deny that there is phenomenally conscious content that isn't in fact introspectively accessed. What they do/should deny is that there is p-conscious content that you couldn't access even if you tried.
But to elaborate, the answer is illusionism about phenomenal consciousness, the only (physicalist) account of consciousness that seems to me to be on track to address the hard problem (by dissolving it and saying there are no phenomenal properties) and the meta-problem of consciousness. EDIT: To have an illusion of phenomenal properties, you have to model those phenomenal properties. The illusion is just the model, aspects of it, or certain things that depend on it. That model is (probably) some kind of model of yourself, or aspects of your own internal processing, e.g. an attention schema.
To prevent any misunderstanding, illusionism doesn't deny that consciousness exists in some form; it just denies that consciousness is phenomenal, or that there are phenomenal properties. It also denies the classical account of qualia, i.e., as ineffable and so on.
I think illusionism is extremely crazy, but even if you adopt it, I don't know why it dissolves the problem more to say "what we think of as consciousness is really just the brain modelling itself," rather than "what we think of as consciousness is really the brain integrating information."
The brain modelling itself as having phenomenal properties would (partly) explain why people believe consciousness has phenomenal properties, i.e. that consciousness is phenomenal. In fact, you model yourself as having phenomenal properties whether or not illusionism is true, if it seems to you that you have phenomenal consciousness. That seeming, or appearance, has to have some basis in your brain, and that is a model.
Illusionism just says there aren't actually any phenomenal properties, so their appearance, i.e. their seeming to exist, is an illusion, and your model is wrong.
The hard problem is dissolved by illusionism because phenomenal consciousness doesn't exist under illusionism, because consciousness has no phenomenal properties. And we have a guide to solving the meta-problem under illusionism and verifying our dissolution of the hard problem:
we find the models of phenomenal properties in our brains (which exist whether or not illusionism is true), and check that they don't depend (causally or constitutively) on any actual phenomenal properties existing, or
we otherwise give a persuasive argument that consciousness doesn't have any phenomenal properties, or that phenomenal properties don't exist.
On the other hand, saying consciousness just is information integration and denying phenomenal properties together would indeed also dissolve the hard problem. Saying phenomenal consciousness just is information integration would solve the hard problem.
But both information integration accounts are poorly motivated, and I don't think anyone should give much credence to either. A good (dis)solution should be accompanied by an explanation for why many people believe consciousness has phenomenal properties, and so solve the meta-problem, or at least give us a path to solving it. I don't think this would happen with (phenomenal) consciousness as mere information integration. Why would information integration, generically, lead to beliefs in phenomenal consciousness?
There doesn't seem to be much logical connection here. Of course, beliefs in phenomenal consciousness depend on information integration, but very few instances of information integration seem to have any connection to such beliefs at all. Information integration is nowhere close to a sufficient explanation.
And this seems to me to be the case for every attempted solution to the hard problem I've seen: they never give a good explanation for the causes of our beliefs in phenomenal consciousness.
Interesting! I intended the post largely as a response to someone with views like yours. In short, I think the considerations I provided based on how animals behave are very well explained by the supposition that they're conscious. I also find RP's arguments against neuron counts completely devastating.
RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing.
In regards to your first point, I don't see why we'd think that degree of attention either correlates with neuron counts or determines the intensity of consciousness.
RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing.
I might have written some of them! I still have some sympathy for the hypothesis and that it matters when you reason using expected values, taking the arguments into account, even if you assign the hypothesis like 1% probability. The probabilities can matter here.
In regards to your first point, I don't see why we'd think that degree of attention either correlates with neuron counts or determines the intensity of consciousness.
I believe the intensity of suffering consists largely (maybe not exclusively) in how much it pulls your attention, specifically its motivational salience. Intense suffering that's easy to ignore seems like an oxymoron. I discuss this a bit more here.
Welfare Footprint Project's pain definitions also refer to attention as one of the criteria (along with other behaviours):
Annoying pain:
(...) Sufferers can ignore this sensation most of the time. Performance of cognitive tasks demanding attention are either not affected or only mildly affected. (...)
Hurtful pain:
(...) Different from Annoying pain, the ability to draw attention away from the sensation of pain is reduced: awareness of pain is likely to be present most of the time, interspersed by brief periods during which pain can be ignored depending on the level of distraction provided by other activities. (...)
Disabling pain:
(...) Inattention and unresponsiveness to milder forms of pain or other ongoing stimuli and surroundings is likely to be observed. (...)
Excruciating pain seems entirely behaviourally defined, but I would assume effects on attention like those of disabling pain or (much) stronger.
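The ordering by attentional disruption can be written down compactly. This is just a paraphrase of the quoted criteria in code form (my own restatement, not anything Welfare Footprint Project publishes):

```python
from enum import IntEnum

class PainIntensity(IntEnum):
    """Welfare Footprint-style categories, ordered by attentional disruption.

    The comments paraphrase the quoted definitions above; the numeric
    ordering is the only substantive claim encoded here.
    """
    ANNOYING = 1      # ignorable most of the time; attention-demanding tasks mostly unaffected
    HURTFUL = 2       # awareness present most of the time; ignorable only briefly, with distraction
    DISABLING = 3     # crowds out attention to milder stimuli and surroundings
    EXCRUCIATING = 4  # behaviourally defined; assumed at least as attention-dominating as disabling

# IntEnum gives the categories a total order matching increasing attentional pull:
assert PainIntensity.ANNOYING < PainIntensity.HURTFUL < PainIntensity.DISABLING < PainIntensity.EXCRUCIATING
```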
Then, we can ask "how much attention can be pulled?" And we might think:
having more things you're aware of simultaneously (e.g. more details in your visual field) means you have more attention to pull, and
more neurons allow you to be aware of more things simultaneously,
so brains with more neurons can have more attention to pull.
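These three steps can be put into a toy model. Everything here (the square-root capacity function and all the numbers) is an illustrative assumption, not an empirical claim:

```python
# Toy sketch of the "more neurons -> more attention to pull" hypothesis.
# The capacity function and all numbers are made up for illustration.

def attention_capacity(n_neurons: int) -> float:
    """Step 2: assume how much can be attended at once grows with neuron count."""
    return n_neurons ** 0.5  # sublinear growth is just one possible assumption

def pain_intensity(n_neurons: int, fraction_pulled: float) -> float:
    """Steps 1 and 3: intensity = absolute amount of attention the pain pulls."""
    return attention_capacity(n_neurons) * fraction_pulled

# A small brain fully absorbed in pain vs. a large brain a quarter absorbed:
small = pain_intensity(n_neurons=10_000, fraction_pulled=1.0)       # 100.0
large = pain_intensity(n_neurons=100_000_000, fraction_pulled=0.25) # 2500.0

# Under this absolute measure, the larger brain's pain counts as more intense,
# even though the smaller creature's attention is fully consumed.
assert large > small
```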
I don't think this is right. We could imagine a very simple creature experiencing very little pain but being totally focused on it. It's true that normally for creatures like us, we tend to focus more on more intense pain, but this doesn't mean that's the relevant benchmark for intensity. My claim is that the causal arrow goes the other way.
But if I did, I think this would make me think animal consciousness is even more serious. For simple creatures, pain takes up their whole world.
Maybe it'll help for me to rephrase: if a being has more things it can attend to (be aware of, have in its attention) simultaneously, then it has more attention to pull. It can attend to more, all else equal, for example, if it has a richer/more detailed visual field, similar to more pixels on a computer screen.
We could imagine a very simple creature experiencing very little pain but being totally focused on it.
If it's very simple, it would probably have very little attention to pull (relatively), so the pain would not be intense under the hypothesis I'm putting forward.
But if I did, I think this would make me think animal consciousness is even more serious. For simple creatures, pain takes up their whole world.
I also give some weight to this possibility, i.e. that we should measure attention in individual-relative terms, and that it's something more like the proportion of attention pulled that matters.
EDIT: And, as you may have had in mind, this seems more consistent with Welfare Footprint Project's definitions of pain intensity, assuming each category (annoying, hurtful, disabling, excruciating) falls in the same intensity range across animals.
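For contrast, the individual-relative alternative can be sketched in the same toy terms (again, the capacity function and all numbers are purely illustrative assumptions):

```python
# Toy contrast: absolute vs. individual-relative measures of pain intensity.
# The capacity function and numbers are illustrative assumptions only.

def attention_capacity(n_neurons: int) -> float:
    return n_neurons ** 0.5  # assumed to grow (sublinearly) with neuron count

def intensity_absolute(n_neurons: int, fraction_pulled: float) -> float:
    """Total attention pulled: tends to favour larger brains."""
    return attention_capacity(n_neurons) * fraction_pulled

def intensity_relative(fraction_pulled: float) -> float:
    """Proportion of the individual's own attention pulled: brain size drops out."""
    return fraction_pulled

# A simple creature fully absorbed in pain vs. a large brain a quarter absorbed:
assert intensity_absolute(10_000, 1.0) < intensity_absolute(100_000_000, 0.25)
assert intensity_relative(1.0) > intensity_relative(0.25)
```

On the relative measure, the same category of pain can occupy the same intensity band for small-brained and large-brained animals, which is what makes it fit applying the Welfare Footprint categories uniformly across species.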
Under this view, I probably end up basically agreeing with you about which animals count, on the basis of evidence about desire as motivational salience and/or pleasure/unpleasantness-like states.
However, there could still be important differences in degree if and because they meet different bars, and I have some sympathy for some neuron count-related arguments that favour brains with more neurons (point 2 here). I also give substantial weight to the possibilities that:
maximum intensities for desires as motivational salience (and maybe hedonic states like pleasure and unpleasantness) are similar,
there's (often) no fact of the matter about how to compare them.
And he was a gradualist. He thought introspection and higher-order thoughts made for important differences and was skeptical of them in other animals (Dennett, 2018, pp. 168-169). I don't know how morally important he found these differences to be, though.
I think that Dennett probably said inconsistent things about this over time.
And this seems to me to be the case for every attempted solution to the hard problem I've seen: they never give a good explanation for the causes of our beliefs in phenomenal consciousness.
Yeah, it's very bizarre. Seems just to be vibes.
Interesting! I intended the post largely as a response to someone with views like yours. In short, I think the considerations I provided based on how animals behave are very well explained by the supposition that they're conscious. I also find RP's arguments against neuron counts completely devastating.
I worked on some of them with RP myself here.
FWIW, I found Adam's arguments convincing against the kinds of views he argued against, but I don't think they covered the cases in point 2 here.