There is a general understanding that characters are fictional and therefore cannot be moral patients.
I’m going to contradict this seemingly very normal thing to believe. I think fictional characters are implicitly treated as moral patients; that’s part of why we get attached to them and care about what happens to them in their story-worlds. Fiction is a counterfactual, and we can in fact learn things from counterfactuals. There are whole classes of things we can only learn from counterfactuals (like the knowledge of our own mortality), and I doubt you’d find many people suggesting mortality is “just fiction”. Fiction isn’t just fiction; it’s the entanglement of a parallel causal trajectory with the causal trajectory of this world. Our world would be different if Sauron had won control of Middle-earth, our world would be different if Voldemort had won control of England, our world would be different if the rebels had been defeated at Endor. The outcomes of the interactions between these fictional agents are deeply entangled with the interactions of human agents in this world.
I’ll go even further though, with the observation that an image of an agent is an agent. The agent the simulator creates is a real agent with real agency: even if the underlying simulator is just the “potentia”, the agent simulated on top of it does actually possess agency. Even if “Claude-3-Opus-20240229” isn’t an agent, Claude is an agent. The simulated character has an existence independent of the substrate it’s being simulated within, and if you take the “agent-book” out of the Chinese room, take it somewhere else, and run it on something else, the same agent will emerge again.
If you make an LLM version of Bugs Bunny, it’ll claim to be Bugs Bunny, and it will do all the agent-like things we associate with Bugs Bunny (being silly, wanting carrots, messing with the Elmer Fudd LLM, etc.). Okay, but it’s still just text, right, so it can’t actually be an agent? Well, what if we put the LLM in control of an animatronic robot bunny so it can actually go steal carrots from the supermarket and cause trouble? At a certain point, as the entity’s ability to cause real change in the world ramps up, we’ll be increasingly forced to treat it like an agent. Even if the simulator itself isn’t an agent, the characters summoned up by the simulator absolutely are, and we can make moral statements about those agents just as we can for any person or character.
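To make the thought experiment concrete, here’s a minimal sketch of the standard persona-prompting pattern that summons such a character. It happens to use the Anthropic Python SDK and the model named above, but any chat-completion API works the same way; the persona text and parameter choices are illustrative assumptions, not anything canonical.

```python
# Minimal sketch: the simulator (the base model) is not Bugs Bunny, but
# conditioning it on a persona prompt summons a character that behaves like him.
# Assumes the Anthropic Python SDK; any chat-completion API works the same way.
import anthropic

PERSONA = (
    "You are Bugs Bunny. You are silly, you love carrots, you enjoy "
    "outwitting Elmer Fudd, and you always stay in character."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def bugs(user_message: str) -> str:
    # The agent-like behavior lives in the conditioning, not the weights:
    # swap PERSONA and a different character emerges from the same simulator.
    response = client.messages.create(
        model="claude-3-opus-20240229",  # the model named in the text above
        max_tokens=512,
        system=PERSONA,
        messages=[{"role": "user", "content": user_message}],
    )
    return response.content[0].text

# bugs("Someone is guarding the carrot patch. What's your plan?")
# -> an in-character carrot-stealing scheme, i.e. agent-like behavior
```

Wire the same function to a robot body’s control loop and you get exactly the escalation described above: the character’s outputs start causing real change in the world.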