Applying Heuristics about Aversive Experience Without Regard for Theories of Consciousness
TL;DR
Consciousness should have an extensional definition only. Misconstrual or reconception of the meaning of consciousness is an error. Robots, software agents, and animals can suffer aversive experience. Humans have heuristics to judge whether their own behavior inflicts aversive experience on other beings.
Those heuristics include:
that some behavior is damaging to the entity
that an entity can feign aversive experience
that reasonable people think some behavior is aversive
that the entity has something like system 1 processing.
Consciousness
Typically, an extensional definition of consciousness is a list of measured internal activity or specific external behavior associated with living people. Used correctly, “consciousness” has an extensional definition only. The specific items in the list to which “consciousness” refers depend on the speaker and the context.
In a medical context, a person:
shows signs of consciousness (for example, blinking, talking).
loses consciousness (for example, faints).
In an engineering context, a robot:
lacks all consciousness (despite blinking, talking).
never had consciousness (despite having passed some intelligence tests).
When misconstrued, the term “consciousness” is understood to refer to an entity separate from the entity whose behavior or measured internal activity[1] the term describes (for example, consciousness is thought of as something you can lose or regain or contain while you are alive).
When the term is reconceived, a user of “consciousness” summarizes the items with an intensional definition.
An extensional definition of consciousness can mismatch the intensional definition.
For example, after a medical procedure brings a person back to life 20 minutes after death, a nurse might believe that the person still has not actually regained consciousness, even though the person now appears alert, speaks, eats, and seems mentally healthy.
Another example would be a robot that demonstrates external behaviors and measured internal activity associated with human-like intelligence but that humans assume is not in fact a person.
Without an intensional definition of consciousness, dialog about whether aversive experience happens can fail. However, if you accept that your own subjective experience is real, and grant that others can have similar experience, then you can still apply heuristics to decide whether other beings have aversive experience. Those heuristics can build on your own experience and common sense.
Heuristics about Aversion
I believe that humans will mistreat robotic or software or animal entities. Humans could try to excuse the mistreatment with the belief that such entities do not have consciousness or aversive experience. This brings to mind the obvious question: what is aversive experience?
Here are several heuristics:
If behavior damages an entity, then the behavior causes aversive experience for the entity.
If an entity can feign or imitate[2] aversive experience, then it can experience aversive experience.
If reasonable people reasonably interpret some actions done to an entity as aversive to it, then those actions are aversive to the entity.
If the entity has something like system 1 processing[3], then it can experience aversive experience.
I’m sure there are more common-sense heuristics, but those are the ones I could think of that might forestall inflicting aversive experience on entities whose subjective experience is a subject of debate.
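To make the decision procedure concrete, here is a minimal Python sketch that treats the four heuristics as a precautionary checklist: if any single heuristic fires, the behavior is flagged as possibly inflicting aversive experience. The Entity and Behavior fields, the function name, and the choice to combine the heuristics with a simple disjunction are illustrative assumptions on my part, not claims made by the heuristics themselves.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    can_feign_aversion: bool           # heuristic 2: can feign or imitate aversive experience
    has_system1_like_processing: bool  # heuristic 4: has processes it does not choose to follow

@dataclass
class Behavior:
    damages_entity: bool                        # heuristic 1: the behavior damages the entity
    judged_aversive_by_reasonable_people: bool  # heuristic 3: reasonable observers call it aversive

def may_inflict_aversive_experience(entity: Entity, behavior: Behavior) -> bool:
    """Precautionary check: if any heuristic fires, treat the behavior as
    potentially inflicting aversive experience on the entity."""
    return (
        behavior.damages_entity
        or entity.can_feign_aversion
        or behavior.judged_aversive_by_reasonable_people
        or entity.has_system1_like_processing
    )

# Hypothetical example: a software agent that can imitate distress,
# subjected to an abrupt shutdown that observers would call aversive.
agent = Entity(can_feign_aversion=True, has_system1_like_processing=False)
abrupt_shutdown = Behavior(damages_entity=False,
                           judged_aversive_by_reasonable_people=True)
print(may_inflict_aversive_experience(agent, abrupt_shutdown))  # True
```

Combining the heuristics with a disjunction errs on the side of caution; a stricter reading might require heuristic 2 or 4 (the entity can have aversive experience at all) in addition to heuristic 1 or 3 (this particular behavior is aversive to it).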
[1] For a human, measured internal activity is stuff like brainwaves or peristaltic action.
[2] I have noticed that sometimes humans believe or at least assert that other people imitate or feign emotions and internal experience.
[3] Processes that it does not choose to follow but that instead yield their results for further processing. It might be necessary to assume that at least one of these processes runs in parallel to another which the entity can edit or redesign.