I’ve always heard “pinpricks vs. torture” and the Omelas story interpreted as illustrating the overwhelming badness of extreme suffering, rather than as arguments against scope sensitivity. I’ve even heard Omelas cited in favor of animal welfare! As the documentary Dominion shows, billions of animals live lives of extreme suffering, and Omelas could be read as arguing that this suffering matters even more than is usually assumed.
I think it’s hard to say what the simulation argument implies for this debate one way or the other, since there are many more (super speculative) considerations:
- If consciousness is an illusion, or a byproduct of certain kinds of computation that would arise on any substrate, then we should expect animals to be conscious even inside the simulation.
- I’ve heard some argue that the simulators would be interested in the life trajectories of particular individuals, which could imply that only a select few humans are conscious and nobody else is. (In history, we tell the stories of world-changing individuals and neglect everyone else’s; in video games, often only the player and maybe a handful of NPCs are given rich behavior.)
- The simulators might be interested in seeing what the pre-AGI world looked like, and might terminate the simulation once we get AGI. In that case the long-term future would be short, so trajectory changes would lose most of their value and we’d want to go all-in on near-term suffering reduction, which would probably mean prioritizing animals.
I agree with you that many people claim the moral value of animal experiences is incommensurate with that of human experiences, and that categorical responsibilities would also generally favor humans.
Thanks for the comment!