It may be that certain mental subsystems wouldn’t be adequate by themselves to produce consciousness. But certainly some of them would. Consider a neuron in my brain and name it Fred. Absent Fred, I’d still be conscious. So then why isn’t my brain minus Fred conscious? The other view makes consciousness weirdly extrinsic: whether some collection of neurons is conscious depends on how they’re connected to other neurons.
(Not speaking for my co-authors or RP.)
I think your brain minus Fred is conscious, but it overlaps so much with your whole brain that counting them both as separate moral patients would mean double counting.
We illustrated with systems that don’t overlap much or at all, but there are of course also intermediate levels of overlap. See my comment here for some ideas on how to handle overlap:
https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we?commentId=pAZtCqpXuGk6H2FgF
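To make the double-counting worry concrete, here is a minimal toy sketch. The set-of-neurons model and the particular discounting rule are my own illustration, not the scheme in the linked comment: each candidate subsystem is a set of components, and a subsystem adds to the total only in proportion to the parts of it that haven’t already been counted.

```python
# Toy sketch of the double-counting worry (illustrative only).
# Each candidate conscious subsystem is modelled as a set of neuron labels.

whole_brain = {"n1", "n2", "n3", "n4", "n5"}
brain_minus_fred = whole_brain - {"n3"}        # "brain minus Fred"
subsystems = [whole_brain, brain_minus_fred]

# Naive counting: every conscious subsystem is a separate moral patient.
naive_count = len(subsystems)                  # -> 2

# One crude way to avoid double counting: a subsystem only contributes
# in proportion to the parts of it not already covered by larger
# subsystems counted before it.
covered = set()
discounted_weight = 0.0
for s in sorted(subsystems, key=len, reverse=True):
    fresh = s - covered
    discounted_weight += len(fresh) / len(s)
    covered |= s

print(naive_count)        # 2
print(discounted_weight)  # 1.0 -- brain-minus-Fred adds nothing new
```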
But then wouldn’t this mean my brain has a bunch of different minds? How can the consciousness of one overlap with the consciousness of another?
Your brain has a bunch of overlapping subsystems that are each conscious, according to many plausible criteria for consciousness you could use. You could say they’re all minds. I’m not sure I’d say they’re different minds, because if two overlap enough, they should be treated like the same one.
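One way to picture “treated like the same one” is a simple overlap threshold. The sketch below is an illustrative toy, not a proposal from the thread: the Jaccard measure, the 0.8 cutoff, and the greedy grouping are all assumptions made here for the example.

```python
# Toy sketch (illustrative only): treat two candidate subsystems as the
# "same mind" when their overlap, measured here by Jaccard similarity,
# crosses a chosen threshold.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def group_into_minds(subsystems, threshold=0.8):
    """Greedily group subsystem indices whose overlap crosses the threshold."""
    groups = []
    for i, s in enumerate(subsystems):
        for g in groups:
            if any(jaccard(s, subsystems[j]) >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

whole_brain = set(range(100))
brain_minus_fred = whole_brain - {42}     # ~99% overlap with the whole brain
visual_subsystem = set(range(70, 100))    # much smaller overlap

print(group_into_minds([whole_brain, brain_minus_fred, visual_subsystem]))
# [[0, 1], [2]] -- two "minds" rather than three, under the 0.8 cutoff
```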
See also the problem of the many on SEP:

As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans.