Hey, I thought this was thought-provoking.
I think fictional characters could be suffering while they are being instantiated. E.g., I found the film Oldboy pretty painful because I felt some of the suffering of the character while watching it. Similarly, if a convincing novel makes its readers feel the pain of its characters, that could be something to care about.
Similarly, if LLM computations implement some of what makes suffering bad—for instance, if they simulate some sort of distress internally while stating the words “I am suffering”, because this is useful in order to make better predictions—then this could lead to them having moral patienthood.
That doesn’t seem super likely to me, but as LLMs become more and more capable of mimicking humans, I can see the possibility that implementing suffering becomes useful for predicting what a suffering agent would output.