It sounds like you’re giving IIT approximately zero weight in your all-things-considered view. I find this surprising, given IIT’s popularity amongst people who’ve thought hard about consciousness, and given that you seem aware of this.
From my experience, there is a significant difference in the popularity of IIT by field. In philosophy, where I got my training, it isn’t a view that is widely held. Partly because of this bias, I haven’t spent a whole lot of time thinking about it. I have read the seminal papers that introduce the formal model and give its philosophical justifications, but I haven’t looked much into the empirical literature. The philosophical justifications seem very weak to me—the formal model seems only loosely connected to the axioms of consciousness that supposedly motivate it. And without much philosophical justification, I’m wary of the empirical evidence. The human brain is messy enough that I expect you could find evidence confirming IIT whether or not it is true, if you look long enough and frame your assumptions correctly. That said, it is possible that existing empirical work does provide a lot of support for IIT that I haven’t taken the time to appreciate.
Additionally, I’d be interested to hear how your view may have updated in light of the recent empirical results from the IIT-GNWT adversarial collaboration.
If you don’t buy the philosophical assumptions, as I do not, then I don’t think you should update much on the IIT-GNWT adversarial collaboration results. IIT may have done better, but if the two views being compared were essentially picked out of a hat as representatives of different philosophical kinds, then the fact that one view does better doesn’t say much about the kinds. It seems weird to me to compare the precise predictions of theories that are so drastically different in their overall view of things. I don’t really like the idea of adversarial approaches across different frameworks. I would think it makes more sense to compare nearby theories to one another.
This is very informative to me, thanks for taking the time to reply. For what it’s worth, my exposure to theories of consciousness is from the neuroscience + cognitive science angle. (I very nearly started a PhD in IIT in Anil Seth’s lab back in 2020.) The overview of the field I had in my head could be crudely expressed as: higher-order theories and global workspace theories are ~dead (though, on the latter, Baars and co. have yet to give up); the exciting frontier research is in IIT and predictive processing and re-entry theories.
I’ve been puzzled by the mentions of GWT in EA circles—a noteworthy example being the fair amount of airtime philosopher Rob Long gave GWT in his 80k episode. But given EA’s skew toward philosopher-types, this now makes a lot more sense.