I suppose you could put my overall point this way: current theories present very few technical obstacles, so it would take little effort to build a system that would be difficult to rule out. Even if you think we need more criteria to avoid getting stuck with panpsychism, we don’t have those criteria and so can’t wield them to do any work in the near future.
This is also my impression of the theories with which I’m familiar, except illusionist ones. I think only illusionist theories actually give plausible accounts of consciousness in general, as far as I’m aware, and I think they probably rule out panpsychism, but I’m not sure (if small enough animal brains are conscious, and counterfactual robustness is not necessary, then you might get panpsychism again).
I mean everything that is plausibly relevant according to current theories, which is a relatively short list. There is a big gulf between everything people have suggested is necessary for consciousness and a whole brain emulation.
Fair. That’s my impression, too.
It has been a while since I’ve read Graziano—but if I recall correctly (and as your quote illustrates) he likes both illusionism and an attention schema theory. Since illusionism denies consciousness, he can’t take AST as a theory of what consciousness is; he treats it instead as a theory of the phenomena that lead us to puzzle mistakenly about consciousness. If that is right, he should think that any artificial mind might be led by an AST architecture, even a pretty crude one, to make mistakes about mind-brain relations, and that this isn’t indicative of any further interesting phenomenon. The question of the consciousness of artificial systems is settled decisively in the negative by illusionism.
I guess this is a matter of definitions. I wouldn’t personally take illusionism as denying consciousness outright; instead, illusionism says that consciousness does not actually have the apparently inaccessible, ineffable, unphysical or mysterious properties people often attribute to it, and it’s just the appearance/depiction/illusion of such properties that makes a system conscious. At any rate, whether consciousness is a real phenomenon or not, however we define it, I would count systems that have illusions of consciousness, or specifically illusions of conscious evaluations (pleasure, suffering, “conscious” preferences), as moral patients and consider their interests in the usual ways. (Maybe with some exceptions that don’t count, like giant lookup tables and some other systems that don’t have causal structures at all resembling our own.) This is also Luke Muehlhauser’s approach in his 2017 Report on Consciousness and Moral Patienthood.
I agree that this sounds semantic. I think of illusionism as a type of error theory, but people in this camp have always been somewhat cagey about what they’re denying, and there is a range of interesting theories.
Interesting. Do you go the other way too? E.g. if a creature doesn’t have illusions of consciousness, then it isn’t a moral patient?
Assuming illusionism is true, then yes, I think only those with illusions of consciousness are moral patients.
It seems like this may be a non-standard interpretation of illusionism. Being under illusions of consciousness isn’t necessary for consciousness according to Frankish; what is necessary is that, if a sufficiently sophisticated introspective/monitoring system were connected to the system in the right way, it would generate illusions of consciousness. See, e.g., his talks:
https://youtu.be/xZxcair9oNk?t=3590
https://www.youtube.com/watch?v=txiYTLGtCuM
https://youtu.be/me9WXTx6Z-Q
I suspect now that this is also how AST is supposed to be understood, based on the artificial agents paper.
I do wonder if this is setting the bar too low, though. Humphrey seems to set a higher bar, where some kind of illusion is in fact required, but he also thinks mammals and birds probably have such illusions.
I think we get into a definitional problem. What exactly do we mean by “illusion” or “belief”? If an animal has a “spooky” attention schema, and cognitive access to it, then plausibly the animal has beliefs about it of some kind. If an animal or system believes something is good or bad or whatever, is that not an illusion, too, and is that not enough?