I bite the bullet that fictional characters could in principle suffer. I agree we know so little about suffering and consciousness that I couldn’t possibly be confident of this, but here’s my attempt to paint a picture of what one could believe:
The suffering happens when you “run the algorithm” of the character’s thought process, that is, when you decide what they would do, think, or feel in a given situation, usually at the time of writing rather than at the time of performance. In particular, printing a book or showing a movie on a screen doesn’t cause the suffering of the characters inside it, and reading a book, watching a movie, or acting in one doesn’t cause any suffering, except inasmuch as you recreate the thoughts and experiences of the characters yourself as part of that process.
I think the reason this feels like a reductio ad absurdum is that fictional characters in human stories are extremely simple compared to real people, so the process of deciding what they feel or how they act is an extremely hollowed-out version of normal conscious experience that only barely resembles the real thing. We can see that many of the key components of our own thinking and experience just have no mirror in the fictional character, and this is (I claim) why it’s absurd to think the fictional character has experiences. It’s only once you have extremely sophisticated simulators replicating their subjects with very high fidelity that you actually need to start reproducing their mental architecture. For example, suppose you want to predict how people with aphantasia would answer survey questions about other aspects of their conscious experience, or how well they’d remember new kinds of experiences that no aphantasic people in your training set have been exposed to. How can you do that except by actually mimicking the processes in their brains that give rise to mental imagery? Once you’re doing that, is it so hard to believe that your simulations would have experiences like ours?
OK. I think it is useful to tell people that LLMs can be moral patients to the same extent as fictional characters, then. I hope all writeups about AI welfare start with this declaration!
I think the reason this feels like a reductio ad absurdum is that fictional characters in human stories are extremely simple compared to real people, so the process of deciding what they feel or how they act is an extremely hollowed-out version of normal conscious experience that only barely resembles the real thing.
Surely the fictional characters in stories are less simple and hollow than current LLMs’ outputs. For example, consider the discussion here, in which a sizeable minority of LessWrongers think that Claude is disturbingly conscious based on a brief conversation. That conversation:
(a) is not as convincing a portrayal of a character as most good works of fiction.
(b) is shorter and less fleshed out than most good works of fiction.
(c) implies less suffering on the part of the character than many works of fiction.
You say fictional characters are extremely simple and hollow; Claude’s character here is even simpler and even more hollow; yet many people take seriously the notion that Claude’s character has significant consciousness and deserves rights. What gives?