If you start decomposing minds into their computational components, you find many orders of magnitude differences in the numbers of similar components. E.g. both a honeybee and a human may have visual experience, but the latter will have on the order of 10,000 times as many photoreceptors, with even larger disparities in the number of neurons and computations for subsequent processing. If each edge detection or color discrimination (or higher level processing) additively contributes some visual experience, then you have immense differences in the total contributions.
Likewise, for reinforcement learning consequences of pain or pleasure rewards: larger brains will have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out with complexity or particular capabilities greater than those of the honeybee.
On the other side, trivially tiny computer programs we can make today could make for minimal instantiations of available theories of consciousness, with quantitative differences between the minimal examples and typical examples. See also this discussion. A global workspace may broadcast to thousands of processes or billions.
We can also consider minds much larger than humans, e.g. imagine a network of humans linked by neural interfaces, exchanging memories, sensory input, and directions to action. As we increased the bandwidth of these connections and the degree of behavioral integration, eventually you might have a system that one could consider a single organism, but with vastly greater numbers of perceptions, actions, and cognitive processes than a single human. If we started with 1 billion humans who gradually joined their minds together in such a network, should we say that near the end of the process their total amount of experience or moral weight is reduced to that of 1-10 humans? I’d guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater than human quantities of experience.
The usual discussions on this topic seem to assume that connecting and integrating many mental processes almost certainly destroys almost all of their consciousness and value, which seems questionable both for the view itself and for the extreme weight put on it. With a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.
I’d guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater than human quantities of experience.
My initial intuition is the same here; it seems like nothing is lost, only things are added. I suppose one could object that the independence of the parts is lost, and that this could make the parts less conscious than they would have been if separate (although there is now also a greater whole that's more conscious), but I'm not sure why this should be the case; why shouldn't it make the parts more conscious? One reason to believe integration would reduce the consciousness in each part (whether it increases or decreases the amount of consciousness in the system as a whole) is that, if the connections are reoptimized, there's new redundancy: the parts could be readjusted to accomplish the same while working less (e.g. firing less frequently). Of course, the leftover resources could be used for other things, which might have decreasing or increasing returns.
Likewise, for reinforcement learning consequences of pain or pleasure rewards: larger brains will have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out with complexity or particular capabilities greater than those of the honeybee.
It seems like the implementations of particular capabilities are necessarily pretty integrative and interdependent (especially the global workspace and selective/top-down attention), and it could be that if you try to carve up the networks without also readjusting the remaining connections, you just get garbage.
Can you throw away 99% of a human brain (99% of neurons and synapses), not readjust what's left, and get something that's around 1% as conscious as the original brain? You might not have to throw away much to get something which doesn't do much at all, while still being orders of magnitude larger than a bee brain. 1% of a human brain would have about as many neurons as a magpie, cat, rat or mouse, or more (and more synapses than the rat and mouse, at least). Is there any 1% of a human brain you could carve out that would be about as conscious as a magpie, who can pass a mirror test? Can you do this while also keeping all of the sensory modalities (vision, hearing, etc.) and the same emotions common to humans and magpies? My guess is no, although I'm not confident.
Of course, this isn’t to say that bigger brains don’t tend to matter more; I think they do. I’m also not sure that they don’t often matter more than proportionally more; this might happen if certain important conscious functions require significant investment that some small brains don’t make at all, e.g. self-awareness and metacognition.
With a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.
I think you may need to have pretty overwhelming credence in such views, though. EDIT: Or give enough weight to sufficiently superlinear views.
From this Vox article, we have about 1.5 to 2.5 million mites on our bodies. If we straightforwardly consider the oversimplified view that moral weight scales proportionally with the square root of total neuron count in an animal, then these mites (assuming at least 300 neurons each, as many as C. elegans) would have ~100x as much moral weight as the human they're on. Assigning just a 1% credence to such a view, fixing the moral weight of humans and taking expected values, still puts the mites on our own bodies ahead of us (although FWIW, I don't think this is the right way to treat this kind of moral uncertainty; there's no compelling reason for humans to be the reference point, and I think doing it this way actually favours nonhumans).
(86 billion neurons in the human brain)^0.5 = ~300,000.
(300^0.5) × 2 million = ~35 million.
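Here's a minimal sketch (Python) of that expected-value arithmetic, using the rough figures above (86 billion human neurons, at least 300 neurons per mite, ~2 million mites, 1% credence in the square-root view); it's only an illustration of the toy view, not a claim about how to handle the moral uncertainty.

```python
# Toy expected-value check for the mite example, assuming moral weight scales with
# the square root of neuron count. All figures are the rough ones used above.

human_neurons = 86e9          # ~86 billion neurons in the human brain
mite_neurons = 300            # assumed lower bound, about as many as C. elegans
mites_per_person = 2e6        # ~2 million mites on one human body
credence = 0.01               # 1% credence in the square-root view

human_weight = human_neurons ** 0.5                    # ~293,000
mites_weight = mites_per_person * mite_neurons ** 0.5  # ~35 million

print(mites_weight / human_weight)             # ~118, i.e. roughly 100x the human
print(credence * mites_weight / human_weight)  # ~1.2, still ahead of the human at 1% credence
```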
The square root may seem kind of extreme starting from your thought experiment, but I think there are plausible ways moral weight scales much more slowly than linearly with brain size in practice, and, considering them together, something similar to the square root doesn't seem too improbable as an approximation (a rough numerical comparison of scaling exponents follows the list):
1. There isn't any reasonably decisive argument for why moral weight should scale linearly with network depth when we compare different animals (your thought experiments don't apply; I think network depth is more relevant insofar as we want to ask whether the network does a certain thing at all, but making honey bee networks deeper without changing their functions doesn't obviously contribute to increasing moral weight). Since brains are 3-dimensional, this can get us to (neuron count)^(2/3) as a first approximation.
2. For a larger brain to act the same way on the same inputs, it needs to be less than proportionally sensitive or responsive or both. This is a reason to expect smaller brains to be disproportionately sensitive. This can also result in a power of neuron count lower than 1; see here.
3. There may be disproportionately more redundancy in larger brains that doesn't contribute much to moral weight, including more inhibitory neurons and lower average firing rates (this is related to 2, so we don't want to double-count this argument).
4. There may be disproportionately more going on in larger brains that isn't relevant to moral weight.
5. There are likely bottlenecks, so you probably can't actually carve out a number of (non-overlapping or very minimally overlapping) honeybee-like morally relevant subsystems equal to the ratio of the numbers of neurons in the two brains' morally relevant subsystems. It'll be fewer.
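As a rough illustration of how much the choice of exponent matters, here's a minimal sketch (Python) comparing a human and a honeybee under linear, 2/3-power and square-root scaling; the ~1 million honeybee neuron count is an outside round figure I'm assuming for illustration, not something from this thread.

```python
# Compare human-to-honeybee moral-weight ratios under different scaling exponents.
# The human neuron count is from the discussion above; the honeybee count (~1 million)
# is an assumed round figure.

human_neurons = 86e9
honeybee_neurons = 1e6

for label, p in [("linear (p = 1)", 1.0), ("2/3 power", 2 / 3), ("square root (p = 1/2)", 0.5)]:
    ratio = (human_neurons / honeybee_neurons) ** p
    print(f"{label}: human-to-honeybee ratio ~ {ratio:,.0f}")

# linear:      ~86,000x
# 2/3 power:   ~1,900x
# square root: ~290x
```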
I think we can actually check claims 2-4 in practice.
Some more related articles:
Is Brain Size Morally Relevant? by Brian Tomasik
Quantity of experience: brain-duplication and degrees of consciousness by Nick Bostrom
I also wrote an article about minimal instantiations of theories of consciousness: Physical theories of consciousness reduce to panpsychism.
Also note that, based on this table:
cats and house mice have ~13,000-14,000 synapses per neuron
brown rats have ~2,500
humans have ~1,750
honey bees have ~1,000
sea squirts and C. elegans have ~20-40.
I'm not sure how consistently they're counting in whole brain vs. central nervous system vs. whole body, though; a rough comparison of total synapse counts based on these figures follows.
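To connect these per-neuron figures to the "1% of a human brain" comparison earlier, here's a minimal sketch (Python) that multiplies them by whole-brain neuron counts. The human count is the 86 billion used above; the mouse (~71 million), rat (~200 million) and cat (~760 million) counts are rough outside figures I'm assuming, not from the table.

```python
# Rough total-synapse comparison: synapses per neuron (from the table above) times
# approximate whole-brain neuron counts. Non-human neuron counts are assumed outside
# figures and only meant to be order-of-magnitude.

animals = {
    # name: (approx. neurons, approx. synapses per neuron)
    "human":       (86e9,  1_750),
    "cat":         (760e6, 13_500),
    "brown rat":   (200e6, 2_500),
    "house mouse": (71e6,  14_000),
}

for name, (neurons, syn_per_neuron) in animals.items():
    print(f"{name}: ~{neurons:.1e} neurons, ~{neurons * syn_per_neuron:.1e} synapses")

human_neurons, human_syn_per_neuron = animals["human"]
print(f"1% of a human brain: ~{0.01 * human_neurons:.1e} neurons, "
      f"~{0.01 * human_neurons * human_syn_per_neuron:.1e} synapses")

# 1% of a human brain comes out to ~8.6e8 neurons and ~1.5e12 synapses: more neurons
# than the cat, rat or mouse, and more synapses than the rat (~5e11) and mouse (~1e12),
# but fewer synapses than the cat (~1e13), consistent with the hedged claim above.
```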