> With a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.
I think you may need to have pretty overwhelming credence in such views, though. EDIT: Or give enough weight to sufficiently superlinear views.
From this Vox article, we have about 1.5 to 2.5 million mites on our bodies. If we straightforwardly consider the oversimplified view that moral weight scales proportionally with the square root of an animal's total neuron count, then these mites (assuming at least 300 neurons each, about as many as C. elegans) would have ~100x as much moral weight as the human they're on. Assigning just a 1% credence to such a view, fixing the moral weight of humans as the reference point, and taking expected values puts the mites on our own bodies ahead of us (although FWIW, I don't think this is the right way to treat this kind of moral uncertainty; there's no compelling reason for humans to be the reference point, and I think doing it this way actually favours nonhumans).
(86 billion neurons in the human brain)^0.5 = ~300,000.
(300^0.5) * 2 million = ~35 million.
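Here's the arithmetic above as a quick Python sketch. The neuron counts are the ones quoted in this comment; the mite count and the 1% credence are the illustrative figures used here, not precise estimates:

```python
# Toy calculation for the square-root scaling view described above.
# All figures are the illustrative ones from this comment.

human_neurons = 86e9  # neurons in the human brain
mite_neurons = 300    # assumed per mite (roughly C. elegans-like)
num_mites = 2e6       # ~1.5-2.5 million mites per human body

# Moral weight proportional to sqrt(neuron count), human fixed as reference.
human_weight = human_neurons ** 0.5             # ~293,000
mites_weight = num_mites * mite_neurons ** 0.5  # ~34.6 million

print(mites_weight / human_weight)  # ~118, i.e. the mites get ~100x the human

# With just 1% credence in the sqrt view (and ~0 weight for the mites on
# other views), their expected weight still roughly matches their host:
credence = 0.01
print(credence * mites_weight / human_weight)  # ~1.2
```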
The square root may seem kind of extreme starting from your thought experiment, but I think there are plausible ways moral weight scales much more slowly than linearly with brain size in practice, and taken together, something like the square root doesn't seem too improbable as an approximation (a toy calculation after this list shows how these discounts could compose):
1. There isn’t any reasonably decisive argument for why moral weight should scale linearly with network depth when we compare different animals (your thought experiments don’t apply; I think network depth is more relevant insofar as we want to ask whether the network does a certain thing at all, but making honey bee networks deeper without changing their functions doesn’t obviously contribute to increasing moral weight). Since brains are 3-dimensional, discounting the depth dimension can get us to (neuron count)^(2/3) as a first approximation.
2. For a larger brain to act the same way on the same inputs, it needs to be less than proportionally sensitive or responsive or both. This is a reason to expect smaller brains to be disproportionately sensitive. This can also result in a power of neuron count lower than 1; see here.
3. There may be disproportionately more redundancy in larger brains that doesn’t contribute much to moral weight, including more inhibitory neurons and lower average firing rates (this is related to 2, so we don’t want to double count this argument).
4. There may be disproportionately more going on in larger brains that isn’t relevant to moral weight.
5. There are likely bottlenecks, so you probably can’t actually carve out a number of (non-overlapping or very minimally overlapping) honeybee-like morally relevant subsystems equal to the ratio of the neuron counts in the respective morally relevant subsystems. It’ll be fewer.
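To see how these discounts could compose to something square-root-like, here's a toy calculation. Only the 2/3 geometric exponent comes from point 1 above; the size of the further discount from points 2-5 is an invented figure purely for illustration:

```python
# Hypothetical composition of sublinear scaling factors.
# Power laws compose by adding exponents: N^(2/3) * N^(-1/6) = N^(1/2).

geometric = 2 / 3       # point 1: discounting network depth -> N^(2/3)
extra_discount = -1 / 6 # points 2-5 combined: a hypothetical further N^(-1/6)

effective_exponent = geometric + extra_discount
print(effective_exponent)  # 0.5, i.e. moral weight ~ sqrt(neuron count)

# E.g. a brain 1,000,000x larger would get only 1,000x the moral weight:
print(1e6 ** effective_exponent)  # 1000.0
```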
I think we can actually check claims 2-4 in practice.
Also note that, based on this table:
- cats and house mice have ~13,000-14,000 synapses per neuron
- brown rats have ~2,500
- humans have ~1,750
- honey bees have ~1,000
- sea squirts and C. elegans have ~20-40
I’m not sure how consistently they’re counting, though: whole brain vs. central nervous system vs. whole body.