Got it, thanks for clarifying. Off the top of my head, I can’t think of any unconscious or at least “hidden” processing that is known to work in the relatively sophisticated manner you describe, but I might have read about such cases and am simply not remembering them at the moment. Certainly an expert on unconscious/hidden cognitive processing might be able to name some fairly well-characterized examples, and in general I find it quite plausible that such cognitive processes occur in (e.g.) the human brain (and thus potentially in the brains of other animals). Possibly the apparent cognitive operations undertaken by the non-verbal hemisphere in split-brain patients would qualify, though they seem especially likely to qualify as “conscious” under the Schwitzgebel-inspired definition even if they are not accessible to the hemisphere that can make verbal reports.
Anyway, the sort of thing you describe is one reason why, in section 4.2, my probabilities for “consciousness of a sort I intuitively morally care about” are generally higher than my probabilities for “consciousness as loosely defined by example above.” Currently, I don’t think I’d morally care about such cognitive processes so long as they were “unconscious” (as loosely defined by example in my report), but I think it’s at least weakly plausible that if I were able to carry out my idealized process for making moral judgments, I would conclude that I care about some such unconscious processes. I don’t use Brian’s approach of “mere” similarities in a multi-dimensional concept space, but regardless I could still imagine myself morally caring about certain types of unconscious processes similar to those you describe, even if I don’t care about some other unconscious processes that may be even more similar (in Brian’s concept space) to the processes that do instantiate “conscious experience” (as loosely defined by example in my report). (I’d currently bet against making such moral judgments, but not super-confidently.)