One place where this might be incorrect is the check for whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons actually be inactive. But this could be conceived of as a third group of neurons checking for input from this set. But even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
Ya, this is where I'd push back. My understanding is that neurons don't "check" if other neurons are firing, they just receive signals from other neurons. So, a neuron (or a brain, generally) really shouldn't be able to tell whether a neuron was not firing or just didn't exist at that moment. This text box I'm typing into can't tell whether the keyboard doesn't exist or just isn't sending input signals when I'm not typing, because (I assume) all it does is check for input.
(I think the computer does "know" if there's a keyboard, though, but I'd guess that's because it's running a current through it or otherwise receiving signals from the keyboard, regardless of whether I type or not. It's also possible to tell that something exists because a signal is received in its absence but not when it's present, like an object blocking light or a current.)
Specifically, I don't think this makes sense within the constraints of my thought experiment, since it requires the brain to be able to tell that a neuron exists at a given moment even if that neuron doesn't fire:
def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # get the length of the list
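By contrast, all a downstream neuron has access to is the signals actually delivered to it. Here's a toy sketch of that point (the function and the numbers are mine, purely illustrative): a unit that just sums incoming spikes gets exactly the same input whether an upstream neuron is silent or absent, so it has no neuron count to normalize by.

def downstream_input(incoming_spikes):
    # All the downstream unit "sees" is the spikes actually delivered to it;
    # a silent neuron contributes 0 and a nonexistent neuron contributes
    # nothing at all, so the sum is the same either way.
    return sum(incoming_spikes)

print(downstream_input([1, 1, 0, 0, 0]))  # 5 upstream neurons, 2 firing -> 2
print(downstream_input([1, 1]))           # the 3 silent neurons removed -> still 2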
It could be that even non-firing neurons affect other neurons in some other important ways I'm not aware of, though.
EDIT: What could make sense is that, instead of this function, you have two separate constants to normalize by, one for each brain, and these constants happen to match the number of neurons in their respective brain regions (5 and 2 in your example). But this would require further downstream neurons whose sensitivities to these neurons as inputs differ between the two brains. And now this doesn't reflect removing neurons from the larger brain while holding all else equal, since you also replaced neurons or increased their sensitivities. So this wouldn't reflect my thought experiment anymore, which is intended to hold all else equal.
I don't think it's a priori implausible that this is how brains work when new neurons are added, from one generation to the next or even within a single brain over time, i.e. neurons could adapt to more inputs by becoming less sensitive, but this is speculation on my part and I really don't know either way. They won't adapt immediately to the addition/removal of a neuron if it wasn't going to fire anyway, unless neurons have effects on other neurons beyond the signals they send by firing.
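To make that concrete, here's a minimal sketch of the constants version (the names and the function are mine, just for illustration), using the 5 and 2 from your example:

LARGE_BRAIN_NORMALIZER = 5  # happens to match the 5 nociceptive neurons in the larger brain
SMALL_BRAIN_NORMALIZER = 2  # happens to match the 2 in the smaller brain

def pain_intensity(number_firing, normalizer):
    # The normalizer stands in for a fixed downstream sensitivity wired into
    # each brain, not something computed from the neurons present at that moment.
    return number_firing / normalizer

print(pain_intensity(2, LARGE_BRAIN_NORMALIZER))  # 0.4 in the larger brain
print(pain_intensity(2, SMALL_BRAIN_NORMALIZER))  # 1.0 in the smaller brain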
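A toy version of that speculative adaptation (again, the code is mine and purely illustrative; the per-input weight stands in for a sensitivity that would adjust slowly over time, not at the moment a silent neuron is added or removed):

def adapted_downstream_drive(incoming_spikes):
    # Speculative synaptic-scaling sketch: each input's weight shrinks as the
    # number of inputs grows, so total drive tracks the proportion firing.
    number_of_inputs = len(incoming_spikes)
    weight_per_input = 1.0 / number_of_inputs
    return weight_per_input * sum(incoming_spikes)

print(adapted_downstream_drive([1, 1]))           # 2 inputs, both firing -> 1.0
print(adapted_downstream_drive([1, 1, 0, 0, 0]))  # 5 inputs, 2 firing -> 0.4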
I'm also ignoring inhibitory neurons.