One place where this might be incorrect is in checking whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons actually be inactive. But this could be conceived of as a third group of neurons checking for input from this set. But even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
Ya, this is where I’d push back. My understanding is that neurons don’t “check” whether other neurons are firing; they just receive signals from other neurons. So a neuron (or a brain, generally) really shouldn’t be able to tell whether a neuron was not firing or just didn’t exist at that moment. This text box I’m typing into can’t tell, when I’m not typing, whether the keyboard doesn’t exist or just isn’t sending input signals, because (I assume) all it does is check for input.
(I think the computer does “know” whether there’s a keyboard, though; I’d guess that’s because it’s running a current through the keyboard or otherwise receiving signals from it regardless of whether I type. It’s also possible to tell that something exists because a signal is received in its absence but not in its presence, like an object blocking light or a current.)
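To make the analogy concrete, here’s a toy sketch (names and numbers are made up for illustration, not a model of real hardware or of the brain): a receiver that only processes whatever signals arrive gets exactly the same result whether the sender is absent or just silent.

def receive(input_buffer):
    # input_buffer holds whatever signals arrived this moment; the receiver
    # has no other access to the sender at all
    return "".join(input_buffer)

print(receive([]))          # keyboard unplugged -> ""
print(receive([]))          # keyboard plugged in but idle -> "" (indistinguishable)
print(receive(["h", "i"]))  # keyboard sending input -> "hi"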
Specifically, I don’t think this makes sense within the constraints of my thought experiment, since it requires the brain to be able to tell that a neuron exists at a given moment even if that neuron doesn’t fire:
def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # counts every nociceptive neuron that exists, firing or not
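By contrast, here’s the kind of quantity I’d guess a downstream neuron could compute from the signals it actually receives (again just an illustrative sketch with made-up names): it can count the inputs that fired, but a neuron that exists and stays silent contributes exactly as much as a neuron that doesn’t exist.

def get_number_of_firing_pain_neurons(incoming_signals):
    # incoming_signals: one entry per signal actually received this moment;
    # silent neurons and nonexistent neurons both contribute nothing
    return len(incoming_signals)

# Larger brain: 5 nociceptive neurons exist, 2 fire.
# Smaller brain: 2 nociceptive neurons exist, 2 fire.
print(get_number_of_firing_pain_neurons([1, 1]))  # 2, in either case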
It could be that even non-firing neurons affect other neurons in some other important ways I’m not aware of, though.
EDIT: What could make sense is that, instead of this function, you have two separate constants to normalize by, one for each brain, and these constants happen to match the number of neurons in their respective brain regions (5 and 2 in your example). But this would require further neurons whose sensitivities to these inputs differ between the two brains. And now it doesn’t reflect removing neurons from the larger brain while holding all else equal, since you’ve also replaced neurons or increased their sensitivities. So this wouldn’t reflect my thought experiment anymore, which is intended to hold all else equal.
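Here’s roughly what I mean, using the 5 and 2 from your example (made-up names and numbers, just to make the EDIT concrete): each brain’s downstream neuron has its own fixed sensitivity baked in at wiring time, rather than computing the number of existing neurons on the fly.

NORMALIZATION_CONSTANT_LARGER_BRAIN = 5   # built with 5 nociceptive neurons
NORMALIZATION_CONSTANT_SMALLER_BRAIN = 2  # built with 2 nociceptive neurons

def downstream_response(firing_signals, normalization_constant):
    # the constant is a fixed property of the downstream neuron's sensitivity;
    # it is not recomputed if upstream neurons are later removed without firing
    return sum(firing_signals) / normalization_constant

# Two nociceptive neurons fire in each brain:
print(downstream_response([1, 1], NORMALIZATION_CONSTANT_LARGER_BRAIN))   # 0.4
print(downstream_response([1, 1], NORMALIZATION_CONSTANT_SMALLER_BRAIN))  # 1.0

Removing the three silent neurons from the larger brain wouldn’t change its constant, which is exactly why this version no longer matches the thought experiment.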
I don’t think it’s a priori implausible that this is how brains work when new neurons are added, from one generation to the next or even within a single brain over time; i.e., neurons could adapt to more inputs by becoming less sensitive. But this is speculation on my part, and I really don’t know either way. They won’t adapt immediately to the addition or removal of a neuron that wasn’t going to fire anyway, unless neurons have effects on other neurons beyond the signals they send by firing.
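If that kind of adaptation did happen, it might look something like this (purely speculative, to match the speculation above): the downstream neuron rescales its sensitivity when an input is wired in or removed, so adding a neuron only matters at wiring time, not because the silent neuron sends anything.

class DownstreamNeuron:
    def __init__(self, num_inputs):
        self.num_inputs = num_inputs

    def add_input(self):
        # adaptation happens here, at wiring time, not when an input merely stays silent
        self.num_inputs += 1

    def response(self, firing_signals):
        # each input is weighted by 1 / num_inputs, so per-input sensitivity
        # drops as more inputs are wired in
        return sum(firing_signals) / self.num_inputs

neuron = DownstreamNeuron(num_inputs=2)
print(neuron.response([1, 1]))  # 1.0

neuron.add_input()              # a third input is wired in but never fires
print(neuron.response([1, 1]))  # ~0.67: the change comes from rescaling at wiring
                                # time, not from anything the silent neuron sends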
I’m also ignoring inhibitory neurons.