I’m not sure where to draw lines, but illusions of “this is bad!” (evaluative) or “get this to stop!” (imperative) could be enough, rather than something like “I care about avoiding pain”, and I doubt nematodes have even those illusions. It’s not clear that responses to noxious stimuli, including learning or being put into a pessimistic or fearful-like state, actually indicate illusions of evaluations or imperatives. But it’s also not clear what would.
You could imagine a switch between hardcoded exploratory and defensive modes in NPCs or in simple, non-flexible robots or systems, triggered by some simple event. I don’t think such modes would indicate moral value on their own. Some neurotransmitters may have a similar effect in simple animals, but along a continuum between exploratory and defensive behaviours, and not centralized in one switch but distributed across multiple switches, by affecting the responsiveness of neurons. Even a representation of positive or negative value, like that used in RL policy updates (e.g. subtracting the average unshifted reward from the current reward, as in the sketch below), doesn’t necessarily indicate any illusion of valence. Stitching the modes and rewards together in one system doesn’t change this.
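To make the RL point concrete, here’s a minimal, hypothetical sketch (my own illustration, with made-up numbers) of a gradient-bandit-style agent that updates its policy using the current reward minus a running average of unshifted rewards. The signed quantity it computes is a representation of positive or negative value in the relevant sense, but nothing about it plausibly amounts to an illusion of valence:

```python
import math
import random

# Hypothetical 3-armed bandit agent. It only tracks action preferences,
# action probabilities, rewards and a running average reward (the baseline).
N_ARMS = 3
preferences = [0.0] * N_ARMS   # softmax policy parameters
avg_reward = 0.0               # running average of unshifted rewards
ALPHA = 0.1                    # policy learning rate
BETA = 0.05                    # baseline learning rate


def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]


def sample_reward(arm):
    # Made-up environment: arm 2 is best on average.
    return random.gauss([0.0, 0.5, 1.0][arm], 1.0)


for step in range(2000):
    probs = softmax(preferences)
    arm = random.choices(range(N_ARMS), weights=probs)[0]
    reward = sample_reward(arm)

    # The signed "value": current reward minus the average unshifted reward.
    advantage = reward - avg_reward

    # Gradient-bandit policy update driven by that signed signal.
    for a in range(N_ARMS):
        if a == arm:
            preferences[a] += ALPHA * advantage * (1 - probs[a])
        else:
            preferences[a] -= ALPHA * advantage * probs[a]

    # Move the baseline toward recent rewards.
    avg_reward += BETA * (reward - avg_reward)

print("learned preferences:", [round(p, 2) for p in preferences])
```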
I think a simple reward/punishment signal can be an extremely basic neural representation that “this is good/bad”, and activation of escape muscles can be an extremely basic representation of an imperative to avoid something. I agree that these things seem almost completely unimportant in the simplest systems (I think nematodes aren’t the simplest systems), but I also don’t see any sharp dividing lines between the simplest systems and ourselves, just degrees of complexity and extra machinery. It’s like the difference between a :-| emoticon and the Mona Lisa. The Mona Lisa has lots of extra detail and refinement, but there’s a continuum of possible drawings in between them and no specific point where something qualitatively different occurs.
That’s my current best guess of how to think about sentience relative to my moral intuitions. If there turns out to be a major conceptual breakthrough in neuroscience that points to some processing that’s qualitatively different in complex brains relative to nematodes or NPCs, I might shift my view, although I find it hard not to extend a tiny bit of empathy toward the simpler systems anyway, because they do have preferences and basic neural representations. If we were to discover that consciousness is a special substance or the like that only exists at all in certain minds, then it would be easier for me to understand saying that nematodes or NPCs have literally zero amounts of it.
After looking more into this and into illusionism over the past few days, I’ll lay out how I’m thinking about it now.
I would consider three groups of moral interpretations of illusionism, each of which could be further subdivided:
1. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of introspective (monitoring and/or modelling) and belief-forming process in the right way to generate a belief that something matters[1].
2. A system/process is conscious in a morally relevant way if and only if we could connect to it the right kind of belief-forming process (with no further introspective processes) in the right way to generate a belief that something matters[1].
3. A system/process is conscious in a morally relevant way if and only if it generates a belief that something matters[1].
I’m now tentatively most sympathetic to something like 3, although I was previously endorsing something like 2 in this thread. 1 and 2 seem plausibly trivial, in that anything could matter in some way if you put all the work into the introspective and/or belief-forming processes, although maybe the actual responses of the original system/process can help break symmetries, or you could place enough restrictions on the connected introspective and/or belief-forming processes. Frankish explicitly endorses something like 1. I think Graziano endorses something like 2 or 3, and I think Humphrey endorses something like 3. Their views of course differ further in their details beyond just 1, 2 and 3, especially on what counts as the right kind of introspection or belief.
There may be accounts of beliefs according to which “a reward/punishment signal” (and/or its effects), “activation of escape muscles” or even the responses of electrons to electric fields count as beliefs that something matters. However, I suspect those, and what nematodes do, aren’t beliefs (of mattering) under some accounts of beliefs I’m pretty sympathetic to. For example, maybe responses need to be modelled or represented by other processes to generate beliefs of mattering, but nematodes don’t model or represent their own responses.[2] Or maybe even reflection on, or manipulation of, some model or representation is required. So, I can imagine nematodes not mattering at all under some moral/normative views (combined with empirical views on which nematodes don’t meet the bar set by the given moral view), but mattering under others.
The rest of this comment covers some other, less important details.
Furthermore, even on a given account of belief, the degree to which something is a belief at all[3] could be non-binary, so nematodes may have beliefs, but to a lesser degree than more cognitively sophisticated animals, and I think we should deal with that like moral uncertainty, too.
For moral uncertainty, you could use a moral parliament or diversification approach (like this) or whatever, as you’re aware. How I might tentatively deal with non-binary degrees to which something is a belief (and with vagueness generally) is to have a probability distribution over binary precisified views with different sharp cutoffs for what counts as a belief, and apply some diversification approach to moral uncertainty over it.[4] Somewhat more explicitly, suppose I think, on some vague account of belief, the degree to which nematodes have beliefs (of things mattering) is 0.1, on a scale from 0 to 1, holding constant some empirical beliefs about what nematodes can do physically. On that account of belief and those empirical views, with a uniform distribution over the cutoffs of the different precisified versions, I’d treat nematodes as having beliefs (of things mattering) with probability 10%, and treat the account of belief as if it were binary (a toy sketch of this calculation is below). This 10% is a matter of moral uncertainty that I wouldn’t take expected values over, but would instead diversify across.
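Here’s a toy sketch of that precisification move, with the assumptions above made explicit (the 0.1 degree and the uniform distribution over cutoffs are just the illustrative numbers from the previous paragraph):

```python
import random

def p_counts_as_belief(degree, n_samples=100_000):
    """Fraction of precisified views on which the given degree of belief
    clears the cutoff, with cutoffs drawn uniformly from [0, 1]."""
    return sum(degree >= random.random() for _ in range(n_samples)) / n_samples

# Assumed degree to which nematodes have beliefs (of things mattering), in [0, 1].
degree_nematode_belief = 0.1

# With a uniform distribution over sharp cutoffs, the probability that nematodes
# count as having beliefs equals the degree itself: P(cutoff <= 0.1) = 0.1.
print(p_counts_as_belief(degree_nematode_belief))  # ~0.1
```

The Monte Carlo isn’t doing any real work here; it’s just a check that, under the uniform-cutoff assumption, a degree of belief translates directly into a probability across precisified views, which then gets diversified across rather than folded into a single expected value.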
Nematodes may turn out to be dominated by other considerations in practice on those views, maybe by suffering in fundamental physics, in random particle movements or in the far future. I might give relatively low weight to the views on which nematodes matter but random particle movements don’t, because I don’t care much about counterfactual robustness. Maybe I’d give >90% weight to not caring about it at all, roughly statistically independently of the rest of the normative views in my distribution over normative views. However, I could have been overconfident in the inference that random particle movements will generate beliefs of mattering, given a cutoff low enough to include nematodes and no requirement of counterfactual robustness.
[1] And/or perhaps general beliefs about consciousness and its qualities, like reddishness, classic qualia, the Cartesian theatre, etc.
[2] On the other hand, maybe a response is already a model or representation of itself, and that counts, but this seems like a degenerate account of beliefs; a belief is generally not about itself, unless it explicitly self-references, which mere responses don’t seem to do. Plus, self-referencing propositions can lead to contradictions, so they can be problematic in general, and we might want to be careful about them. Then again, maybe responses can be chained trivially, e.g. neural activity is the response and muscle activation is the “belief” about the neural activity. Or, more generally, one cell can represent a cell it’s connected to. There’s still the question of whether it’s representing a response that would indicate that something matters, e.g. an aversive response.
[3] Not to what degree something matters according to that belief, i.e. strength or intensity, nor to what degree it is believed, i.e. degree of confidence, nor the number of beliefs or the number of times that belief is generated (simultaneously or otherwise).
[4] I’d guess there are other ways to deal with non-binary truth degrees, though.