I now lean towards illusionism, and something like Attention Schema Theory. I don't think illusionism rules out panpsychism, but I'd say it's much less likely under illusionism. I can share some papers that I found most convincing. Luke Muehlhauser's report on consciousness also supports illusionism.
By "illusionism" do you have in mind something like a higher-order view according to which noticing one's own awareness (or having a sufficiently complex model of one's attention, as in attention schema theory) is the crucial part of consciousness? I think that doesn't necessarily follow from pure illusionism itself.
As I mention here, we could take illusionism to show that the distinction between "conscious" and "unconscious" processing is more shallow and trivial than we might have thought. For example, adding a model of one's attention to a brain seems like a fairly small change that doesn't require much additional computing power. Why should we give so much weight to such a small computational task, compared against the much larger and more sophisticated computations already occurring in a brain without such a model?
As an analogy, suppose I have a cuckoo clock that's running. Then I draw a schematic diagram illustrating the parts of the clock and how they fit together (a model of the clock). Why should I say that the full clock that lives in the real world is unimportant, but when I draw a little picture of it, it suddenly starts to matter?
I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience, i.e. specific ways information is processed that would lead to inferences like the kind we make about consciousness (possibly when connected to appropriate inference-making systems, even if not normally connected), are what make something conscious, and that, in practice in animals, these illusions happen with the attention model and are unlikely to happen elsewhere. From Graziano, 2020:
Suppose the machine has a much richer model of attention. Somehow, attention is depicted by the model as a Moray eel darting around the world. Maybe the machine already had need for a depiction of Moray eels, and it coapted that model for monitoring its own attention. Now we plug in the speech engine. Does the machine claim to have consciousness? No. It claims to have an external Moray eel.
Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
I would also go a bit further to claim that it's "rich" illusions, not "sparse" illusions, that matter here. Shabasson, 2021 gives a nice summary of Kammerer, 2019, where this distinction is made:
According to Kammerer, the illusion of phenomenal consciousness must be a rich illusion because of its strength. It persists regardless of what an agent might come to believe about the reality (or unreality) of phenomenal consciousness. By contrast, a sparse illusion such as the headless woman illusion quickly loses its grip on us once we come to believe it is an illusion and understand how it is generated. Kammerer criticizes Dennett's and Graziano's theories for being sparse-illusion views (2019c: 6-8).
The example rich optical illusion given is the Müller-Lyer illusion. It doesn't matter if you just measured the lines to show they have the same length: once you look at the original illusion again (at least without extra markings or rulers to make it obvious that they are the same length), one line will still look longer than the other.
On a practical and more theory-neutral or theory-light approach, we can also distinguish between conscious and unconscious perception in humans, e.g. with blindsight and other responses to things outside awareness. Of course, it's possible the "unconscious" perception is actually conscious, just not accessible to the higher-order conscious process (conscious awareness/attention), but there doesn't seem to be much reason to believe it's conscious at all. Furthermore, generating consciousness illusions below awareness seems more costly than generating them only at the level we are aware of, because most of the illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against that. We therefore have little reason to believe capacities that are sometimes realized unconsciously in humans indicate consciousness in other animals.
RP's invertebrate sentience research gave little weight to capacities that (sometimes) operate unconsciously in humans. Conscious vs unconscious perception is discussed more by Birch, 2020. He proposes the facilitation hypothesis:
Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus.
and three candidate abilities: trace conditioning, rapid reversal learning and cross-modal learning. The idea would be to "find out whether the identified cluster of putatively consciousness-linked abilities is selectively switched on and off under masking in the same way it is in humans."
Apparently some rich optical illusions can occur unconsciously while others occur consciously, though (Chen et al., 2018). So, maybe there is some conscious but inaccessible perception, although this is confusing, and I'm not sure about the relationship between these kinds of illusions and illusionism as a theory. Furthermore, I'm still skeptical of inaccessible conscious valence in particular, since valence seems pretty holistic, context-dependent and late in any animal's processing to me. Mason and Lavery, 2022 discuss some refinements to experiments to distinguish conscious and unconscious valence.
I do concede that there could be an important line-drawing or trivial instantiation problem for what counts as having a consciousness illusion, or valence illusion, in particular.
Thanks for the detailed explanation! I haven't read any of the papers you linked to (just most of the summaries right now), so my comments may be misguided.
My general feeling is that simplified models of other things, including sometimes models that are resistant to change, are fairly ubiquitous in the world. For example, imagine an alert on your computer that says "Warning: RAM usage is above 90%" (so that you can avoid going up to 100% of RAM, which would slow your computer to a crawl). This alert would be an extremely simple "model" of the total amount of "attention" that your computer's memory is devoting to various things. Suppose your computer's actual RAM usage drops below 90%, but the notification still shows. You click an "x" on the notification to close it, but then a second later, the computer erroneously pops up the notification again. You restart your computer, hoping that will solve it, but the bogus notification returns, even though you can see that your computer's RAM usage is only 38%. Like the Müller-Lyer illusion, this buggy notification is resistant to correction.
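To make the analogy concrete, here is a toy sketch (purely hypothetical, not any real OS component) of such a buggy alert: the notification reads from a stale cached "model" of RAM usage instead of the live measurement, so dismissing it never corrects it.

```python
# Toy sketch of the buggy alert described above (hypothetical): the alert
# consults a stale cached "model" of RAM usage rather than the live value,
# so it resists correction, like the Muller-Lyer illusion.

class RamAlert:
    THRESHOLD = 90  # warn above 90% usage

    def __init__(self, initial_usage):
        # The "model" of memory "attention"; the bug is that it is never
        # refreshed from the actual measurement.
        self.cached_usage = initial_usage

    def poll(self, actual_usage):
        # A correct implementation would update the model here:
        #     self.cached_usage = actual_usage
        if self.cached_usage > self.THRESHOLD:
            return f"Warning: RAM usage is above {self.THRESHOLD}%"
        return None  # no alert

alert = RamAlert(initial_usage=95)
print(alert.poll(actual_usage=38))  # still warns, despite real usage of 38%
```

The point is only that a simple, stubbornly wrong "model" of a system's own resource allocation is cheap and easy to build, which is what makes it puzzling that so much should hang on models of this kind.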
Maybe your view is that the relevant models and things being modeled should meet various specific criteria, so that we won't see trivial instances of them throughout information-processing systems? I'm sympathetic to that view, since I intuitively don't care much about simplified models of things unless those things are pretty similar to what happens in animal brains. I think there will be a spectrum from highly parochial views that have lots of criteria, to highly cosmopolitan views that have few criteria and therefore will see consciousness in many more places.
Even if we define consciousness as "specific ways information is processed that would lead to inferences like the kind we make about consciousness", there's a question of whether that should be the only thing we care about morally. We intuitively care about the illusions that we can see using the parts of our brains that can generate high-level, verbal thoughts, because those illusions are the things visible to those parts of our brains. We don't intuitively care about other processes (even other schematic models elsewhere in our nervous systems) that our high-level thoughts can't see. But most people also don't care much about infants dying of diseases in Africa most of the time for the same reason: out of sight, out of mind. It's not clear to me how much this bias to care about what's visible should withstand moral reflection.
but there doesn't seem to be much reason to believe it's conscious at all
If its being conscious (whatever that means exactly) wouldn't be visible to our high-level thoughts, there's also no reason to believe it's not conscious. :)
Furthermore, generating consciousness illusions below awareness seems more costly than generating them only at the level we are aware of, because most of the illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against that.
The generation of a very specific type of attention schema other than the one we introspect upon using high-level thoughts might be unlikely. But the generation of simplified summaries of things for use by other parts of the nervous system seems fairly ubiquitous. For example, our face-recognition brain region might do lots of detailed processing of a face, determine that it's Jennifer Aniston, and then send a summary message "this is Jennifer Aniston" to other parts of the brain so that they can react accordingly. Our fight-or-flight system does processing of possible threats, and when a threat is detected, it sends warning signals to other brain regions and triggers release of adrenaline, which is a very simplified "model" that's distributed throughout the body via the blood. These simplified representations of complex things have huge impact on behavior (just like the high-level attention schema does), which is why evolution created them.
I assume you agree, and our disagreement is probably just about how many criteria a simplified model has to meet before it counts as being relevant to consciousness? For example, the message saying "this is Jennifer Aniston" is a simplified model of a face, not a simplified model of attention, so it wouldn't lead to an illusion about one's own conscious experience? If so, that makes sense, but when looking at these things from the outside as a neuroscientist would, it seems kind of weird to me to say that a simplified model of attention that can give rise to certain consciousness-related illusions is extremely important, while a simplified model of something else that could give rise to other illusions would be completely unimportant. Is it really the consciousness illusion itself that matters, or does the organism actually care about avoiding harm and seeking rewards, and the illusion is just the thing that we latch our caring energy onto? (Sorry if this is rambling and confused, and feel no need to answer these questions. At some point we get into the apparent absurdity of why we attach value to some physical processes rather than other ones at all.)
I'm not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those illusions.
I suspect that recognizing faces doesn't require any illusion that would indicate consciousness. Still, I'm not sure what counts as an illusion, and I could imagine it being the case that there are very simple illusions everywhere.
I think illusionism is the only theory (or set of theories) that's on the right track to actually (dis)solving the hard problem, by explaining why we have the beliefs we do about consciousness, and I'm pessimistic about all other approaches.
Thanks. :)

I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you're proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between "conscious" and "unconscious" is less fundamental than we assumed and that therefore more things should count as sentient than we previously thought. (Susan Blackmore is one illusionist who concludes from illusionism that there's less of a distinction between conscious and unconscious than we naively think, although I don't know how this affects her moral circle.)
It's not clear to me whether an illusion that "this rubber hand is part of my body" is more relevant to consciousness than a judgment that "this face is Jennifer Aniston". I guess we'd have to propose detailed criteria for which judgments are relevant to consciousness and have better understandings of what these judgments look like in the brain.
illusions that things actually matter to the system with those illusions
I agree that such illusions seem important. :) But it's plausible to me that it's also at least somewhat important if something matters to the system, even if there's no high-level illusion saying so. For example, a nematode clearly cares about avoiding bodily damage, even if its nervous system doesn't contain any nontrivial representation that "I care about avoiding pain". I think adding that higher-level representation increases the sentience of the brain, but it seems weird to say that without the higher-level representation, the brain doesn't matter at all. I guess without that higher-level representation, it's harder to imagine ourselves in the nematode's place, because whenever we think about the badness of pain, we're doing so using that higher level.
I'm not sure where to draw lines, but illusions of "this is bad!" (evaluative) or "get this to stop!" (imperative) could be enough, rather than something like "I care about avoiding pain", and I doubt nematodes have those illusions, too. It's not clear that responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it's also not clear what would.
You could imagine a switch between hardcoded exploratory and defensive modes of NPCs or simple non-flexible robots or systems, triggered by some simple event. I don't think such modes would indicate moral value on their own. Some neurotransmitters may have a similar effect in simple animals, but on a continuum between exploratory and defensive behaviours, not centralized in one switch but distributed across multiple switches, by affecting the responsiveness of neurons. Even a representation of positive or negative value, like the one used in RL policy updates (e.g. subtracting the average unshifted reward from the current reward), doesn't necessarily indicate any illusion of valence. Stitching the modes and rewards together in one system doesn't change this.
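For instance, a minimal bandit-style learner (a hypothetical sketch, not drawn from any of the papers cited here) can carry a signed "good/bad" signal just by subtracting a running average of past rewards from the current reward, with no machinery that models or represents the system's own responses:

```python
# Hypothetical sketch: a bandit-style update whose "valence-like" signal is
# just the current reward minus a running average of past rewards. Nothing
# here models the system's own responses, so on the view above it would not
# indicate any illusion of valence.

def make_learner(n_actions, lr=0.1, baseline_lr=0.01):
    prefs = [0.0] * n_actions   # action preferences
    state = {"baseline": 0.0}   # running average of unshifted rewards

    def update(action, reward):
        advantage = reward - state["baseline"]     # signed "good/bad" signal
        prefs[action] += lr * advantage            # reinforce or suppress
        state["baseline"] += baseline_lr * (reward - state["baseline"])
        return advantage

    return prefs, update

prefs, update = make_learner(2)
update(0, 1.0)   # positive advantage: action 0 reinforced
update(1, -1.0)  # negative advantage: action 1 suppressed
```

The signed advantage drives behaviour change the way a valence signal would, which illustrates why a value representation alone seems insufficient for an illusion of valence.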
I think a simple reward/punishment signal can be an extremely basic neural representation that "this is good/bad", and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren't the simplest systems), but I also don't see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It's like the difference between a :-| emoticon and the Mona Lisa. The Mona Lisa has lots of extra detail and refinement, but there's a continuum of possible drawings in between them and no specific point where something qualitatively different occurs.
That's my current best guess of how to think about sentience relative to my moral intuitions. If there turns out to be a major conceptual breakthrough in neuroscience that points to some processing that's qualitatively different in complex brains relative to nematodes or NPCs, I might shift my view, although I find it hard not to extend a tiny bit of empathy toward the simpler systems anyway, because they do have preferences and basic neural representations. If we were to discover that consciousness is a special substance/etc. that only exists at all in certain minds, then it's easier for me to understand saying that nematodes or NPCs have literally zero amounts of it.
I'll lay out how I'm thinking about it now after looking more into this and illusionism over the past few days.
I would consider three groups of moral interpretations of illusionism, which can be further divided:
1. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of introspective (monitoring and/or modelling) and belief-forming process in the right way to generate a belief that something matters[1].
2. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of belief-forming process (with no further introspective processes) in the right way to generate a belief that something matters[1].
3. A system/process is conscious in a morally relevant way if and only if it generates a belief that something matters[1].
I'm now tentatively most sympathetic to something like 3, although I was previously endorsing something like 2 in this thread. 1 and 2 seem plausibly trivial, so that anything matters in any way if you put all the work into the introspective and/or belief-forming processes, although maybe the actual responses of the original system/process can help break symmetries, or you can have enough restrictions on the connected introspective and/or belief-forming processes. Frankish explicitly endorses something like 1. I think Graziano endorses something like 2 or 3, and I think Humphrey endorses something like 3. Their views of course differ further in their details besides just 1, 2 and 3, especially on what counts as the right kind of introspection or belief.
There may be accounts of beliefs according to which "a reward/punishment signal" (and/or its effects), "activation of escape muscles" or even the responses of electrons to electric fields count as beliefs that something matters. However, I suspect those and what nematodes do aren't beliefs (of mattering) under some accounts of beliefs I'm pretty sympathetic to. For example, maybe responses need to be modelled or represented by other processes to generate beliefs of mattering, but nematodes don't model or represent their own responses.[2] Or, maybe even reflection on or the manipulation of some model or representation is required. So, I can imagine nematodes not mattering at all under some moral/normative views (combined with empirical views that nematodes don't meet the given moral bar set by a moral view), but mattering on others.
There are some other, less important details in the rest of the comment.
Furthermore, even on an account of belief, the degree to which something is a belief at all[3] could come in more than two degrees, so nematodes may have beliefs but to a lesser degree than more cognitively sophisticated animals, and I think that we should deal with that like moral uncertainty, too.
For moral uncertainty, you could use a moral parliament or diversification approach (like this) or whatever, as you're aware. How I might tentatively deal with non-binary degrees to which something is a belief (and vagueness generally) is to have a probability distribution over binary precisified views with different sharp cutoffs for what counts as a belief, and apply some diversification approach to moral uncertainty over it.[4] Somewhat more explicitly, suppose I think, on some vague account of belief, the degree to which nematodes have beliefs (of things mattering) is 0.1, on a scale from 0 to 1, holding constant some empirical beliefs about what nematodes can do physically. On that account of belief and those empirical views, with a uniform distribution for the cutoff over different precisified versions, I'd treat nematodes as having beliefs (of things mattering) with probability 10%, and as if the account of belief is binary. This 10% is a matter of moral uncertainty that I wouldn't take expected values over, but instead diversify across.
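The arithmetic in the example above can be sketched as follows (a hypothetical illustration: with a sharp cutoff drawn uniformly from [0, 1], a vague degree d of belief-hood becomes a probability of exactly d that the cutoff is met):

```python
import random

# Hypothetical sketch of the precisification proposal above: the vague degree
# to which something counts as a belief (here 0.1 for nematodes) is turned
# into a probability by drawing a sharp cutoff uniformly from [0, 1]; each
# precisified view says "has beliefs" whenever degree >= cutoff.

def prob_has_beliefs(degree, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(degree >= rng.random() for _ in range(n_samples))
    return hits / n_samples

# With degree 0.1, about 10% of precisified views count nematodes as having
# beliefs, matching the 10% figure treated as moral uncertainty above.
print(round(prob_has_beliefs(0.1), 2))
```

With a uniform prior over cutoffs, the Monte Carlo step is of course redundant (the probability is just the degree itself), but it makes explicit that each sampled cutoff is a fully binary precisified view.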
Nematodes may turn out to be dominated by other considerations in practice on those views, maybe by suffering in fundamental physics, in random particle movements or in the far future. I might give relatively low weight to the views where nematodes matter but random particle movements don't, because I don't care much about counterfactual robustness. Maybe I'd give >90% to not caring about it at all, pretty much statistically independently of the rest of the normative views in my distributions over normative views. However, I could have been overconfident in the inference that random particle movements will generate beliefs of mattering with a cutoff including nematodes and without counterfactual robustness.
On the other hand, maybe a response is already a model or representation of itself, and that counts, but this seems like a degenerate account of beliefs; a belief is generally not about itself, unless it explicitly self-references, which mere responses don't seem to do. Plus, self-referencing propositions can lead to contradictions, so they can be problematic in general, and we might want to be careful about them. Then again, maybe responses can be chained trivially, e.g. neural activity is the response and muscle activation is the "belief" about neural activity. Or, generally, one cell can represent a cell it's connected to. There's still a question of whether it's representing a response that would indicate that something matters, e.g. an aversive response.
Not to what degree something matters according to that belief, i.e. strength or intensity, or to what degree it is believed, i.e. degree of confidence, or the number of beliefs or times that belief is generated (simultaneously or otherwise).
What do you think of the models of consciousness, with far fewer than 300 neurons, described in Herzog 2007?
I think the ways the theories are assumed to work in that paper are all implausible accounts of consciousness, and, at least for GWT, not how GWT is intended to be interpreted. See https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we#Neural_correlate_theories_of_consciousness_____explanatory_theories_of_consciousness
I now lean towards illusionism, and something like Attention Schema Theory. I donāt think illusionism rules out panpsychism, but Iād say itās much less likely under illusionism. I can share some papers that I found most convincing. Luke Muehlhauserās report on consciousness also supports illusionism.
By āillusionismā do you have in mind something like a higher-order view according to which noticing oneās own awareness (or having a sufficiently complex model of oneās attention, as in attention schema theory) is the crucial part of consciousness? I think that doesnāt necessarily follow from pure illusionism itself.
As I mention here, we could take illusionism to show that the distinction between āconsciousā and āunconsciousā processing is more shallow and trivial than we might have thought. For example, adding a model of oneās attention to a brain seems like a fairly small change that doesnāt require much additional computing power. Why should we give so much weight to such a small computational task, compared against the much larger and more sophisticated computations already occuring in a brain without such a model?
As an analogy, suppose I have a cuckoo clock thatās running. Then I draw a schematic diagram illustrating the parts of the clock and how they fit together (a model of the clock). Why should I say that the full clock that lives in the real world is unimportant, but when I draw a little picture of it, it suddenly starts to matter?
I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience, specific ways information is processed that would lead to inferences like the kind we make about consciousness (possibly when connected to appropriate inference-making systems, even if not normally connected), are what make something conscious, and, in practice in animals, these illusions happen with the attention model and are unlikely to happen elsewhere. From Graziano, 2020:
I would also go a bit further to claim that itās ārichā illusions, not āsparseā illusions, that matter here. Shabasson, 2021 gives a nice summary of Kammerer, 2019, where this distinction is made:
The example rich optical illusion given is the MĆ¼llerāLyer illusion. It doesnāt matter if you just measured the lines to show they have the same length: once you look at the original illusion again (at least without extra markings or rulers to make it obvious that they are the same length), one line will still look longer than the other.
On a practical and more theory-neutral or theory-light approach, we can also distinguish between conscious and unconscious perception in humans, e.g. with blindsight and other responses to things outside awareness. Of course, itās possible the āunconsciousā perception is actually conscious, just not accessible to the higher-order conscious process (conscious awareness/āattention), but there doesnāt seem to be much reason to believe itās conscious at all. Furthermore, the generation of consciousness illusions below awareness seems more costly compared to only generating them at the level of which we are aware, because most of the illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against that. Then, we have little reason to believe capacities that are sometimes realized unconsciously in humans indicate consciousness in other animals.
RPās invertebrate sentience research gave little weight to capacities that (sometimes) operate unconsciously in humans. Conscious vs unconscious perception is discussed more by Birch, 2020. He proposes the facilitation hypothesis:
and three candidate abilities: trace conditioning, rapid reversal learning and cross-model learning. The idea would be to āfind out whether the identified cluster of putatively consciousness-linked abilities is selectively switched on and off under masking in the same way it is in humans.ā
Apparently some rich optical illusions can occur unconsciously while others occur consciously, though (Chen et al., 2018). So, maybe there is some conscious but inaccessible perception, although this is confusing, and Iām not sure about the relationship between these kinds of illusions and illusionism as a theory. Furthermore, Iām still skeptical of inaccessible conscious valence in particular, since valence seems pretty holistic, context-dependent and late in any animalās processing to me. Mason and Lavery, 2022 discuss some refinements to experiments to distinguish conscious and unconscious valence.
I do concede that there could be an important line-drawing or trivial instantiation problem for what counts as having a consciousness illusion, or valence illusion, in particular.
Thanks for the detailed explanation! I havenāt read any of the papers you linked to (just most of the summaries right now), so my comments may be misguided.
My general feeling is that simplified models of other things, including sometimes models that are resistant to change, are fairly ubiquitous in the world. For example, imagine an alert on your computer that says āWarning: RAM usage is above 90%ā (so that you can avoid going up to 100% of RAM, which would slow your computer to a crawl). This alert would be an extremely simple āmodelā of the total amount of āattentionā that your computerās memory is devoting to various things. Suppose your computerās actual RAM usage drops below 90%, but the notification still shows. You click an āxā on the notification to close it, but then a second later, the computer erroneously pops up the notification again. You restart your computer, hoping that will solve it, but the bogus notification returns, even though you can see that your computerās RAM usage is only 38%. Like the MĆ¼ller-Lyer illusion, this buggy notification is resistant to correction.
Maybe your view is that the relevant models and things being modeled should meet various specific criteria, so that we wonāt see trivial instances of them throughout information-processing systems? Iām sympathetic to that view, since I intuitively donāt care much about simplified models of things unless those things are pretty similar to what happens in animal brains. I think there will be a spectrum from highly parochial views that have lots of criteria, to highly cosmopolitan views that have few criteria and therefore will see consciousness in many more places.
Even if we define consciousness as āspecific ways information is processed that would lead to inferences like the kind we make about consciousnessā, thereās a question of whether that should be the only thing we care about morally. We intuitively care about the illusions that we can see using the parts of our brains that can generate high-level, verbal thoughts, because those illusions are the things visible to those parts of our brains. We donāt intuitively care about other processes (even other schematic models elsewhere in our nervous systems) that our high-level thoughts canāt see. But most people also donāt care much about infants dying of diseases in Africa most of the time for the same reason: out of sight, out of mind. Itās not clear to me how much this bias to care about whatās visible should withstand moral reflection.
If its being conscious (whatever that means exactly) wouldnāt be visible to our high-level thoughts, thereās also no reason to believe itās not conscious. :)
The generation of a very specific type of attention schema other than the one we introspect upon using high-level thoughts might be unlikely. But the generation of simplified summaries of things for use by other parts of the nervous system seems fairly ubiquitous. For example, our face-recognition brain region might do lots of detailed processing of a face, determine that itās Jennifer Aniston, and then send a summary message āthis is Jennifer Anistonā to other parts of the brain so that they can react accordingly. Our fight-or-flight system does processing of possible threats, and when a threat is detected, it sends warning signals to other brain regions and triggers release of adrenaline, which is a very simplified āmodelā thatās distributed throughout the body via the blood. These simplified representations of complex things have huge impact on behavior (just like the high-level attention schema does), which is why evolution created them.
I assume you agree, and our disagreement is probably just about how many criteria a simplified model has to meet before it counts as being relevant to consciousness? For example, the message saying āthis is Jennifer Anistonā is a simplified model of a face, not a simplified model of attention, so it wouldnāt lead to illusion about oneās own conscious experience? If so, that makes sense, but when looking at these things from the outside as a neuroscientist would, it seems kind of weird to me to say that a simplified model of attention that can give rise to certain consciousness-related illusions is extremely important, while a simplified model of something else that could give rise to other illusions would be completely unimportant. Is it really the consciousness illusion itself that matters, or does the organism actually care about avoiding harm and seeking rewards, and the illusion is just the thing that we latch our caring energy onto? (Sorry if this is rambling and confused, and feel no need to answer these questions. At some point we get into the apparent absurdity of why we attach value to some physical processes rather than other ones at all.)
I'm not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion, or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those illusions.
I suspect that recognizing faces doesn't require any illusion that would indicate consciousness. Still, I'm not sure what counts as an illusion, and I could imagine it being the case that there are very simple illusions everywhere.
I think illusionism is the only theory (or set of theories) that's on the right track to actually (dis)solving the hard problem, by explaining why we have the beliefs we do about consciousness, and I'm pessimistic about all other approaches.
Thanks. :)
I plausibly agree with your last paragraph, but I think illusionism as a way to (dis)solve the hard problem can be consistent with lots of different moral views about which brain processes we consider sentient. Some people take the approach I think you're proposing, in which we have stricter criteria regarding what it takes for a mind to be sentient than we might have had before learning about illusionism. Others might feel that illusionism shows that the distinction between "conscious" and "unconscious" is less fundamental than we assumed and that therefore more things should count as sentient than we previously thought. (Susan Blackmore is one illusionist who concludes from illusionism that there's less of a distinction between conscious and unconscious than we naively think, although I don't know how this affects her moral circle.)
It's not clear to me whether an illusion that "this rubber hand is part of my body" is more relevant to consciousness than a judgment that "this face is Jennifer Aniston". I guess we'd have to propose detailed criteria for which judgments are relevant to consciousness and have better understandings of what these judgments look like in the brain.
I agree that such illusions seem important. :) But it's plausible to me that it's also at least somewhat important if something matters to the system, even if there's no high-level illusion saying so. For example, a nematode clearly cares about avoiding bodily damage, even if its nervous system doesn't contain any nontrivial representation that "I care about avoiding pain". I think adding that higher-level representation increases the sentience of the brain, but it seems weird to say that without the higher-level representation, the brain doesn't matter at all. I guess without that higher-level representation, it's harder to imagine ourselves in the nematode's place, because whenever we think about the badness of pain, we're doing so using that higher level.
I'm not sure where to draw lines, but illusions of "this is bad!" (evaluative) or "get this to stop!" (imperative) could be enough, rather than something like "I care about avoiding pain", and I doubt nematodes have those illusions, too. It's not clear that responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it's also not clear what would.
You could imagine a switch between hardcoded exploratory and defensive modes of NPCs or simple non-flexible robots or systems, triggered by some simple event. I don't think such modes would indicate moral value on their own. Some neurotransmitters may have a similar effect in simple animals, though on a continuum between exploratory and defensive behaviours, not centralized on one switch but distributed across multiple switches, by affecting the responsiveness of neurons. Even a representation of positive or negative value, like those used in RL policy updates (e.g. subtracting the average unshifted reward from the current reward), doesn't necessarily indicate any illusion of valence. Stitching the modes and rewards together in one system doesn't change this.
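To make the RL example concrete, here is a minimal sketch (my own illustration, not from any cited source) of a value representation of the kind described: the "value" of the current moment is just the current reward minus a running average of past rewards, used to nudge a scalar action preference up or down. Nothing in this code plausibly amounts to an illusion of valence; it is a bookkeeping signal.

```python
# Hypothetical sketch: a signed "value" signal as (reward - running average),
# as in average-reward baselines for policy updates.

def make_updater(alpha=0.1, lr=0.01):
    """Return a closure that nudges a scalar action 'preference'
    based on how the current reward compares to the running average."""
    state = {"avg_reward": 0.0}

    def update(preference, reward):
        # Positive if better than usual, negative if worse: the bare
        # "representation of positive or negative value" in question.
        advantage = reward - state["avg_reward"]
        # Update the running average of unshifted rewards.
        state["avg_reward"] += alpha * (reward - state["avg_reward"])
        # Nudge the preference in the direction of the advantage.
        return preference + lr * advantage

    return update
```

The point of the sketch is how little machinery such a representation requires: a single stored average and a subtraction, with no model of the system's own responses.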
I think a simple reward/punishment signal can be an extremely basic neural representation that "this is good/bad", and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren't the simplest systems), but I also don't see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It's like the difference between a :-| emoticon and the Mona Lisa. The Mona Lisa has lots of extra detail and refinement, but there's a continuum of possible drawings in between them and no specific point where something qualitatively different occurs.
That's my current best guess of how to think about sentience relative to my moral intuitions. If there turns out to be a major conceptual breakthrough in neuroscience that points to some processing that's qualitatively different in complex brains relative to nematodes or NPCs, I might shift my view, although I find it hard not to extend a tiny bit of empathy toward the simpler systems anyway, because they do have preferences and basic neural representations. If we were to discover that consciousness is a special substance/etc. that only exists at all in certain minds, then it's easier for me to understand saying that nematodes or NPCs have literally zero amounts of it.
I'll lay out how I'm thinking about it now, after looking more into this and illusionism over the past few days.
I would consider three groups of moral interpretations of illusionism, which can be further divided:
1. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of introspective (monitoring and/or modelling) and belief-forming process in the right way to generate a belief that something matters[1].
2. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of belief-forming process (with no further introspective processes) in the right way to generate a belief that something matters[1].
3. A system/process is conscious in a morally relevant way if and only if it generates a belief that something matters[1].
I'm now tentatively most sympathetic to something like 3, although I was previously endorsing something like 2 in this thread. 1 and 2 seem plausibly trivial, so that anything matters in any way if you put all the work into the introspective and/or belief-forming processes, although maybe the actual responses of the original system/process can help break symmetries, or you can have enough restrictions on the connected introspective and/or belief-forming processes. Frankish explicitly endorses something like 1. I think Graziano endorses something like 2 or 3, and I think Humphrey endorses something like 3. Their views of course differ further in their details besides just 1, 2 and 3, especially on what counts as the right kind of introspection or belief.
There may be accounts of beliefs according to which "a reward/punishment signal" (and/or its effects), "activation of escape muscles" or even the responses of electrons to electric fields count as beliefs that something matters. However, I suspect those and what nematodes do aren't beliefs (of mattering) under some accounts of beliefs I'm pretty sympathetic to. For example, maybe responses need to be modelled or represented by other processes to generate beliefs of mattering, but nematodes don't model or represent their own responses.[2] Or, maybe even reflection on or the manipulation of some model or representation is required. So, I can imagine nematodes not mattering at all under some moral/normative views (combined with empirical views that nematodes don't meet the given moral bar set by a moral view), but mattering on others.
Some other, less important details follow in the rest of the comment.
Furthermore, even on a given account of belief, the degree to which something is a belief at all[3] could come in more than two degrees, so nematodes may have beliefs but to a lesser degree than more cognitively sophisticated animals, and I think we should deal with that like moral uncertainty, too.
For moral uncertainty, you could use a moral parliament or diversification approach (like this) or whatever, as you're aware. How I might tentatively deal with non-binary degrees to which something is a belief (and vagueness generally) is to have a probability distribution over binary precisified views with different sharp cutoffs for what counts as a belief, and apply some diversification approach to moral uncertainty over it.[4] Somewhat more explicitly, suppose I think, on some vague account of belief, the degree to which nematodes have beliefs (of things mattering) is 0.1, on a scale from 0 to 1, holding constant some empirical beliefs about what nematodes can do physically. On that account of belief and those empirical views, with a uniform distribution for the cutoff over different precisified versions, I'd treat nematodes as having beliefs (of things mattering) with probability 10% and as if the account of belief is binary. This 10% is a matter of moral uncertainty that I wouldn't take expected values over, but instead diversify across.
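The precisification procedure above reduces to a simple calculation. A minimal sketch (the function name and the 0.1 figure are illustrative, following the nematode example): with a sharp cutoff c drawn from some distribution over [0, 1], a system with belief-degree d counts as having the belief exactly when c ≤ d, so under a uniform cutoff distribution the probability is P(c ≤ d) = d.

```python
# Illustrative sketch: converting a graded "degree of belief-hood" into
# a probability over sharp (binary) precisifications of "has a belief".

def belief_probability(degree, cutoff_cdf=lambda c: c):
    """Probability the system counts as having the belief, given the CDF
    of the cutoff distribution (uniform on [0, 1] by default, whose CDF
    is the identity). P(cutoff <= degree) = cutoff_cdf(degree)."""
    return cutoff_cdf(degree)

# Nematode example: degree 0.1 under a uniform cutoff distribution gives
# a 10% probability of having beliefs (of things mattering); this 10% is
# then treated as moral uncertainty to diversify across, not to take
# expected values over.
```

A non-uniform cutoff distribution (passed as `cutoff_cdf`) would encode the view that most precisifications set the bar high or low, changing the resulting probability.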
Nematodes may turn out to be dominated by other considerations in practice on those views, maybe by suffering in fundamental physics, in random particle movements or in the far future. I might give relatively low weight to the views where nematodes matter but random particle movements don't, because I don't care much about counterfactual robustness. Maybe >90% weight to "I don't care at all about it", pretty much statistically independently of the rest of the normative views in my distributions over normative views. However, I could have been overconfident in the inference that random particle movements will generate beliefs of mattering with a cutoff including nematodes and without counterfactual robustness.
and/or perhaps general beliefs about consciousness and its qualities like reddishness, classic qualia, the Cartesian theatre, etc.
On the other hand, maybe a response is already a model or representation of itself, and that counts, but this seems like a degenerate account of beliefs; a belief is generally not about itself, unless it explicitly self-references, which mere responses don't seem to do. Plus, self-referencing propositions can lead to contradictions, so they can be problematic in general, and we might want to be careful about them. Then again, maybe responses can be chained trivially, e.g. neural activity is the response and muscle activation is the "belief" about neural activity. Or, generally, one cell can represent a cell it's connected to. There's still a question of whether it's representing a response that would indicate that something matters, e.g. an aversive response.
Not to what degree something matters according to that belief, i.e. strength or intensity, or to what degree it is believed, i.e. degree of confidence, or the number of beliefs or times that belief is generated (simultaneously or otherwise).
I'd guess there are other ways to deal with non-binary truth degrees, though.