I agree that this sounds semantic. I think of illusionism as a type of error theory, but people in this camp have always been somewhat cagey about what they’re denying, and there is a range of interesting theories.
At any rate, whether consciousness is a real phenomenon or not, however we define it, I would count systems that have illusions of consciousness, or specifically illusions of conscious evaluations (pleasure, suffering, “conscious” preferences), as moral patients and consider their interests in the usual ways.
Interesting. Do you go the other way too? E.g. if a creature doesn’t have illusions of consciousness, then it isn’t a moral patient?
Assuming illusionism is true, then yes, I think only those with illusions of consciousness are moral patients.
It seems like this may be a non-standard interpretation of illusionism. Being under illusions of consciousness isn’t necessary for consciousness according to Frankish; what is necessary is that if a sufficiently sophisticated introspective/monitoring system were connected into the system in the right way, then that would generate illusions of consciousness. See, e.g., his talks:
https://youtu.be/xZxcair9oNk?t=3590
https://www.youtube.com/watch?v=txiYTLGtCuM
https://youtu.be/me9WXTx6Z-Q
I suspect now that this is also how AST is supposed to be understood, based on the artificial agents paper.
I do wonder if this is setting the bar too low, though. Humphrey seems to set a higher bar, where some kind of illusion is in fact required, but he also thinks mammals and birds probably have them.
I think we get into a definitional problem. What exactly do we mean by “illusion” or “belief”? If an animal has a “spooky” attention schema, and cognitive access to it, then plausibly the animal has beliefs about it of some kind. If an animal or system believes something is good or bad or whatever, is that not an illusion, too, and is that not enough?