What do you think of Luke Muehlhauser’s objections to Integrated Information Theory?
I agree that IIT doesn’t seem falsifiable, since there’s no way to confirm that something isn’t conscious, and that’s an important objection, because there probably isn’t consciousness without information integration. At least with the other theories I looked at, we could in principle have some confidence that recurrence or attention or predicting lower-order mental states probably isn’t necessary, even though there are no sharp lines between processes that are doing these things and those that aren’t, and the ones that do so to a nonzero degree seem ubiquitous. But these processes can only really be ruled out as necessary if they are not necessary for eventual report.
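To make the ubiquity worry concrete, here is a toy sketch of my own (not from IIT, and only a crude stand-in for IIT’s actual phi): even two barely correlated binary elements “integrate information” to a strictly nonzero degree, in the minimal sense that the whole is statistically more than its parts.

```python
from math import log2

def mutual_information(joint):
    """Mutual information I(X;Y) in bits for a joint distribution
    over two binary variables, given as joint[(x, y)] = p(x, y)."""
    px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
    py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two noisy, very weakly coupled switches: almost independent, but not quite.
joint = {(0, 0): 0.26, (0, 1): 0.24, (1, 0): 0.24, (1, 1): 0.26}
print(mutual_information(joint))  # tiny, but strictly positive
```

Under any graded measure like this, almost everything in nature “integrates information” to some nonzero degree, which is why IIT-style views slide toward panpsychism rather than drawing a sharp line.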
Do I need to be able to eventually report (even just to myself) that I experienced something in order to have actually experienced it? This also seems unfalsifiable. So processes required for eventual report (the ones necessarily used during experiences that are eventually reported, though not necessarily the ones used during the report itself) can’t be ruled out as unnecessary, and I’m concerned that the more complex theories of consciousness are approaching theories of reportability (in humans), not necessarily theories of consciousness. No-report paradigms only get around this through the unfalsifiable assumption that reflexive behaviours correlated with report (under certain experimental conditions) actually indicate consciousness in the absence of report.
So, IIT accepts basically everything as conscious, while reportability requirements can rule out basically everything except humans (and maybe some “higher” animals) under specific conditions (EDIT: actually, I’m not sure about this). Both are unfalsifiable, and basically all other physical theories with academic supporters fall between them (perhaps with a few extra elements that are falsifiable), and therefore also include unfalsifiable elements. Choosing between them seems like a matter of intuition, not science. Suppose we identified all of the features necessary for reportability. Academics would still be arguing over which of these are necessary for consciousness: some would claim all of them are, others would still support panpsychist theories, and there doesn’t seem to be any principled way to decide. They’d just fit their theories to their intuitions about which things are conscious, but those intuitions aren’t reliable data, so this seems backwards.
One skeptical response might be that reportability is required for consciousness. But another skeptical response is that if you try to make things precise, you can’t rule out panpsychism non-arbitrarily, as I illustrate in this post.
Sometimes “access”, which is slightly weaker than report and similar to reportability, is considered necessary (consciousness just is access consciousness, according to Dennett). But access seems to be based on attention or global workspaces, plus imprecisely defined processes that access them. I argue in this post that attention and global workspaces can be reduced to ubiquitous processes, and my guess is that the imprecisely defined processes accessing them either aren’t necessary (for the same reasons as report) or, under attempts to define them in precise physical terms, will again either draw arbitrary lines or lead to reduction to panpsychism anyway.
Here are some definitions of access consciousness:
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(11)00125-2

“Access consciousness: conscious states that can be reported by virtue of high-level cognitive functions such as memory, attention and decision making.”

http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/1995_Function.pdf

“A perceptual state is access-conscious, roughly speaking, if its content—what is represented by the perceptual state—is processed via that information-processing function, that is, if its content gets to the Executive System, whereby it can be used to control reasoning and behavior.

(...)

A state is access-conscious (A-conscious) if, in virtue of one’s having the state, a representation of its content is (1) inferentially promiscuous (Stich 1978), that is, poised for use as a premise in reasoning, (2) poised for rational control of action, and (3) poised for rational control of speech. (I will speak of both states and their contents as A-conscious.) These three conditions are together sufficient, but not all necessary. I regard (3) as not necessary (and not independent of the others), because I want to allow that nonlinguistic animals, for example chimps, have A-conscious states. I see A-consciousness as a cluster concept, in which (3)—roughly, reportability—is the element of the cluster with the smallest weight, though (3) is often the best practical guide to A-consciousness.”
There’s still an ongoing debate over whether the prefrontal cortex is necessary for consciousness in humans, with some claiming it’s only necessary for report:
https://plato.stanford.edu/entries/consciousness-neuroscience/#FronPost
https://www.jneurosci.org/content/37/40/9603
https://www.jneurosci.org/content/37/40/9593.full
https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12264
Whether or not it is necessary for consciousness in humans could decide whether many nonhuman animals are conscious, assuming the kinds of processes happening in the prefrontal cortex are somehow fairly unique and necessary for consciousness generally, not just in humans (although I think attempts to capture their unique properties physically will probably fail to rule out panpsychism non-arbitrarily, as in this post).
I also think these objections will apply to panpsychism generally and to any precise physical requirements that don’t draw arbitrary lines. In particular, they apply to the other proposed requirements Luke describes in 6.2. Combining precise physical requirements in specific ways, e.g. attention feeds into working memory, which feeds into a process that models/predicts its own behaviour/attention, won’t really solve the problem if each of those requirements, under attempts to make it precise, is satisfied to a nonzero degree so ubiquitously in nature that specific combinations of them will happen to be, too.
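That last point can be sketched concretely (a toy of my own with made-up graded measures, not anyone’s actual proposal): if each requirement is operationalized as a graded quantity that is nonzero for almost any physical system, then a pipeline combining them is also nonzero almost everywhere, so the combination draws no sharp line either.

```python
# Toy sketch: hypothetical graded measures of three proposed requirements.
# Each returns a degree in [0, 1]; the "system" dicts below are made up.

def attention_degree(system):
    # Degree of selective amplification of some signals over others.
    return system.get("selective_amplification", 0.0)

def working_memory_degree(system):
    # Degree to which state persists and influences later processing.
    return system.get("state_persistence", 0.0)

def self_model_degree(system):
    # Degree to which a subsystem's state tracks its own dynamics.
    return system.get("self_correlation", 0.0)

def combined_degree(system):
    # "Attention feeds into working memory feeds into a self-model",
    # crudely operationalized as a product: nonzero whenever every
    # component is nonzero, however tiny.
    return (attention_degree(system)
            * working_memory_degree(system)
            * self_model_degree(system))

# Even a thermostat-like system scores a tiny nonzero value on each
# criterion under graded measures like these, so the combined criterion
# separates it from a brain only by degree, not by a sharp line.
thermostat = {
    "selective_amplification": 1e-6,
    "state_persistence": 1e-3,
    "self_correlation": 1e-6,
}
brain = {
    "selective_amplification": 0.9,
    "state_persistence": 0.9,
    "self_correlation": 0.8,
}
print(combined_degree(thermostat), combined_degree(brain))
```

The product goes to zero only if some component is exactly zero, which is precisely the kind of sharp, non-arbitrary cutoff that the precise physical definitions seem unable to supply.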