I’d guess the collective mind would have at least the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with quantities of experience vastly greater than a human’s.
My initial intuition is the same here: it seems like nothing is lost, only added. One could object that the independence of the parts is lost, and that this could make the parts less conscious than they would have been if separate (although there is now also a greater whole that’s more conscious), but I’m not sure why this should be the case; why shouldn’t it make the parts more conscious? One reason to believe consciousness would be reduced in each part (whether or not the system’s total consciousness increases), if the connections are reoptimized, is that there’s new redundancy: the parts could be readjusted to accomplish the same while working less (e.g. firing less frequently). Of course, the leftover resources could be used for other things, which might have decreasing or increasing returns.
Likewise, for the reinforcement learning consequences of pain or pleasure rewards: larger brains will have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out with complexity or particular capabilities greater than those of the honeybee.
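As a toy illustration of this scaling (a sketch only; the reward-modulated Hebbian-style update and the layer sizes below are my own arbitrary assumptions, not a model of any actual brain), note that a single scalar reward can adjust every weight in a network, so the number of dispositions updated by one reward grows with network size:

```python
import numpy as np

# Toy sketch: one scalar reward nudges every "synaptic" weight via a
# crude reward-modulated Hebbian-style update, so the number of
# dispositions touched by a single reward scales with network size.
# Sizes are arbitrary assumptions, not estimates of real brains.

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden, n_out):
    return {"W1": rng.normal(0, 0.1, (n_hidden, n_in)),
            "W2": rng.normal(0, 0.1, (n_out, n_hidden))}

def reward_update(net, x, reward, lr=0.01):
    """Adjust every weight in proportion to pre/post activity, gated by reward."""
    h = np.tanh(net["W1"] @ x)
    y = np.tanh(net["W2"] @ h)
    net["W1"] += lr * reward * np.outer(h, x)  # every entry of W1 moves
    net["W2"] += lr * reward * np.outer(y, h)  # every entry of W2 moves
    return sum(w.size for w in net.values())   # number of weights touched

small = make_net(100, 50, 10)        # "small-brain" toy
large = make_net(10_000, 5_000, 10)  # "larger-brain" toy

print(reward_update(small, rng.normal(size=100), reward=1.0))     # 5,500
print(reward_update(large, rng.normal(size=10_000), reward=1.0))  # 50,050,000
```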
It seems like the implementations of particular capabilities are necessarily pretty integrated and interdependent (especially the global workspace and selective/top-down attention), and it could be that if you try to carve up the networks without also readjusting the remaining connections, you just get garbage.
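As a loose analogy (a toy artificial network, not a brain; the task, architecture, and sizes are arbitrary assumptions), here’s a sketch of what deleting 99% of a trained network’s weights, without readjusting the rest, does to its function:

```python
import numpy as np

# Toy analogy: train a small network on a simple task, then zero out a
# random 99% of its weights *without* readjusting the remainder, and
# check how much function survives.

rng = np.random.default_rng(0)

# Task: regress y = sin(3x) from scalar x.
X = rng.uniform(-1, 1, (512, 1))
Y = np.sin(3 * X)

W1 = rng.normal(0, 0.5, (1, 64))
b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 1))

def forward(X, W1, b1, W2):
    H = np.tanh(X @ W1 + b1)
    return H @ W2, H

def mse(W1, b1, W2):
    P, _ = forward(X, W1, b1, W2)
    return float(np.mean((P - Y) ** 2))

# Train with plain full-batch gradient descent on mean squared error.
lr = 0.05
for _ in range(2000):
    P, H = forward(X, W1, b1, W2)
    dP = 2 * (P - Y) / len(X)
    dW2 = H.T @ dP
    dH = dP @ W2.T * (1 - H**2)   # tanh derivative
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2

print("trained MSE:", mse(W1, b1, W2))

# Zero out a random 99% of each weight matrix, with no readjustment.
def prune(W, keep=0.01):
    return W * (rng.random(W.shape) < keep)

print("pruned MSE: ", mse(prune(W1), b1, prune(W2)))
# The pruned network's error is typically about as bad as ignoring the
# input entirely: what's left doesn't do ~1% of the job, it mostly does
# nothing.
```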
Can you throw away 99% of a human brain (99% of neurons and synapses), not readjust what’s left, and get something that’s around 1% as conscious as the original brain? You might not have to throw away much to get something that doesn’t do much at all, while still being orders of magnitude larger than a bee brain. 1% of a human brain would have about as many neurons as a magpie, cat, rat, or mouse, or more (and more synapses than the rat and mouse, at least). Is there any 1% of a human brain you could carve out that would be about as conscious as a magpie, who can pass a mirror test? Can you do this while also keeping all of the sensory modalities (vision, hearing, etc.) and the same emotions common to humans and magpies? My guess is no, although I’m not confident.
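To put rough numbers on that 1% comparison, here’s the back-of-the-envelope arithmetic using commonly cited whole-brain neuron estimates (order-of-magnitude figures only; I’ve left out the magpie because I’m less sure of its count):

```python
# Back-of-the-envelope comparison for the "1% of a human brain" question.
# Neuron counts are commonly cited whole-brain estimates; treat them as
# order-of-magnitude only.

NEURONS = {
    "human":    86_000_000_000,
    "cat":         760_000_000,
    "rat":         200_000_000,
    "mouse":        71_000_000,
    "honeybee":        960_000,
}

one_percent_human = 0.01 * NEURONS["human"]  # ~860 million neurons

for animal, n in NEURONS.items():
    if animal == "human":
        continue
    print(f"1% of a human brain ≈ {one_percent_human / n:,.1f}x a {animal} brain")
# ≈1.1x a cat, 4.3x a rat, 12.1x a mouse, ~900x a honeybee
```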
Of course, this isn’t to say that bigger brains don’t tend to matter more; I think they do. I’m also not sure that they don’t often matter more than proportionally; this might happen if certain important conscious functions require significant investment that some small brains don’t make at all, e.g. self-awareness and metacognition.
Some more related articles:
Is Brain Size Morally Relevant? by Brian Tomasik
Quantity of experience: brain-duplication and degrees of consciousness by Nick Bostrom
I also wrote an article about minimal instantiations of theories of consciousness: Physical theories of consciousness reduce to panpsychism.