Some thoughts against superlinear scaling (in particular relative to sublinear scaling) not covered directly in those two posts:
If we count multiple conscious subsystems in a brain, even allowing substantial overlap between them, in order to get superlinear scaling (i.e. scaling substantially faster than linear), that seems likely to imply “double counting” valenced experiences, and my guess is that this would get badly out of hand, e.g. into exponential territory, which would also have counterintuitive implications. I discuss this here.
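To illustrate how counting overlapping subsystems can reach exponential territory, here is a toy calculation of my own construction (not from the post): if any nonempty subset of n units could in principle count as a conscious subsystem, the number of candidate subsystems grows exponentially in n.

```python
# Toy illustration (my own construction, not the author's model):
# if any nonempty subset of n units could count as a conscious
# subsystem, the number of candidates is 2**n - 1, which is
# exponential in n.
def candidate_subsystems(n_units: int) -> int:
    """Number of nonempty subsets of n_units units."""
    return 2 ** n_units - 1

for n in (10, 20, 30):
    print(n, candidate_subsystems(n))
```

Even if only a small fraction of subsets qualify as subsystems, any fraction of an exponential is still exponential, which is one way the double counting could get badly out of hand.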
Humans don’t seem to have many times more synapses per neuron than bees (1,000 to 7,000 in human brains vs ~1,000 in honeybee brains, based on data in [1] and [2]), so the number of direct connections between neurons scales close to proportionally with neuron counts between humans and bees. We could have many times more indirect connections per neuron through paths of connections, but the influence of one neuron on another to which it’s only indirectly connected should decrease with the length of the paths between them, because the signal has to travel farther and compete with more signals. This doesn’t rule out superlinear scaling, but it can limit it.
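The attenuation point can be made concrete with a toy model (my assumption, not from the post): suppose influence decays geometrically with some factor a per synaptic hop, while the number of paths of length L grows roughly like k**L for an average out-degree k. One neuron's total influence over all path lengths is then a geometric series, which stays bounded whenever k * a < 1.

```python
# Toy model (my assumption, not the author's): per-hop attenuation a,
# average out-degree k, so paths of length L contribute roughly
# (k * a)**L in total. Summing over path lengths gives a geometric
# series that converges whenever k * a < 1.
def total_influence(k: float, a: float, max_len: int = 100) -> float:
    """Sum of (k * a)**L for L = 1..max_len; bounded if k * a < 1."""
    return sum((k * a) ** L for L in range(1, max_len + 1))

# Illustrative numbers only: k = 1000 connections per neuron with
# strong per-hop attenuation a = 0.0005 gives k * a = 0.5, so the
# total is bounded regardless of how large the brain is.
print(total_influence(k=1000, a=0.0005))
```

Under these (entirely illustrative) parameters, indirect connectivity alone wouldn't make one neuron's total influence grow with brain size, which is one way path-length attenuation can limit superlinear scaling.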
A brain duplication thought experiment here.
Multiple other arguments here.