I think 1 and 2 should result in the exact same experiences (and hence the same intensity), since the difference is just some neurons that didn’t do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference seems unphysical to me, because they didn’t do anything in 1, where they were present.
I’m unclear why you think proportion couldn’t matter in this scenario.
I’ve written a pseudo-program in Python below in which proportion does matter, removing neurons that don’t fire alters the experience, and the raw number of neurons involved is incidental to the outputs (10 out of 100 gets the same result as 100 out of 1000) [assuming there is a set of neurons to be checked at all]. I don’t believe consciousness works this way in humans or other animals, but I don’t think anything about this is obviously incorrect given the constraints of your thought experiment.
One place where this might be incorrect is the check of whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons actually be inactive. But the check could be conceived of as a third group of neurons checking for input from this set. Even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
def pain_intensity(level):
    # placeholder standing in for whatever realizes a pain experience of this intensity
    return level

def experience_pain(nociceptive_neurons_list):
    # nociceptive_neurons_list is a list of neurons represented by 0's and 1's,
    # where 1 means an individual neuron is firing and 0 means it is not
    proportion_firing = proportion_of_neurons_firing(nociceptive_neurons_list)
    if proportion_firing < 0.3:
        return pain_intensity(1)
    elif 0.3 <= proportion_firing < 0.6:
        return pain_intensity(2)
    elif 0.6 <= proportion_firing < 1:
        return pain_intensity(5)
    elif proportion_firing == 1:
        return pain_intensity(10)
    else:
        return pain_intensity(0)

def proportion_of_neurons_firing(nociceptive_neurons_list):
    num_neurons_firing = 0
    for neuron in nociceptive_neurons_list:
        if neuron == 1:
            num_neurons_firing += 1  # add 1 for every neuron that is firing
    return num_neurons_firing / get_number_of_pain_neurons(nociceptive_neurons_list)  # return the proportion firing

def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # get length of list

pain_list_all_neurons = [0, 0, 0, 1, 1]
pain_list_only_firing = [1, 1]
experience_pain(pain_list_all_neurons)  # returns pain_intensity(2), proportion 2/5 = 0.4
experience_pain(pain_list_only_firing)  # returns pain_intensity(10), proportion 2/2 = 1
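To make the point about raw numbers concrete: 10 firing out of 100 and 100 firing out of 1000 give the same proportion (0.1), so the program above returns the same intensity for both:

experience_pain([1] * 10 + [0] * 90)    # proportion 0.1, returns pain_intensity(1)
experience_pain([1] * 100 + [0] * 900)  # proportion 0.1, returns pain_intensity(1)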
One place where this might be incorrect is the check of whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons actually be inactive. But the check could be conceived of as a third group of neurons checking for input from this set. Even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
Ya, this is where I’d push back. My understanding is that neurons don’t “check” if other neurons are firing; they just receive signals from other neurons. So a neuron (or a brain, generally) really shouldn’t be able to tell whether a neuron was not firing or just didn’t exist at that moment. This text box I’m typing into can’t tell whether the keyboard doesn’t exist or just isn’t sending input signals when I’m not typing, because (I assume) all it does is check for input.
(I think the computer does “know” if there’s a keyboard, but I’d guess that’s because it’s running a current through it or otherwise receiving signals from it, regardless of whether I type or not. It’s also possible to tell that something exists because a signal is received in its absence but not in its presence, like an object blocking light or a current.)
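To sketch what I mean (downstream_activation is just a made-up stand-in for a receiver that only responds to the signals it actually gets, not how I think brains literally work):

def downstream_activation(incoming_signals):
    # responds only to signals actually received; a silent neuron contributes
    # exactly as much as a neuron that isn't there at all
    return sum(incoming_signals)

downstream_activation([0, 0, 0, 1, 1])  # 2
downstream_activation([1, 1])           # 2, indistinguishable from the call above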
Specifically, I don’t think this makes sense within the constraints of my thought experiment, since it requires the brain to be able to tell that a neuron exists at a given moment even if that neuron doesn’t fire:
def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # get length of list
It could be that even non-firing neurons affect other neurons in some other important ways I’m not aware of, though.
EDIT: What could make sense is that, instead of this function, you have two separate constants to normalize by, one for each brain, and these constants happen to match the number of neurons in the respective brain regions (5 and 2 in your example). But this would mean the two brains have further neurons with different sensitivities to these neurons as inputs, and it no longer reflects removing neurons from the larger brain while holding all else equal, since you’ve also replaced neurons or increased their sensitivities. So it wouldn’t reflect my thought experiment anymore, which is intended to hold all else equal.
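A rough sketch of that alternative (experience_pain_normalized and normalization_constant are hypothetical names; the point is that the denominator is hard-wired into the downstream circuitry rather than read off the list of neurons):

def experience_pain_normalized(nociceptive_neurons_list, normalization_constant):
    # same banding as the program above, but the denominator is a fixed property
    # of the downstream wiring rather than len() of the neuron list
    proportion_firing = sum(nociceptive_neurons_list) / normalization_constant
    if proportion_firing < 0.3:
        return pain_intensity(1)
    elif 0.3 <= proportion_firing < 0.6:
        return pain_intensity(2)
    elif 0.6 <= proportion_firing < 1:
        return pain_intensity(5)
    else:
        return pain_intensity(10)

# the larger brain's downstream neurons are effectively less sensitive (constant 5),
# the smaller brain's more sensitive (constant 2), so all else is not equal
experience_pain_normalized([0, 0, 0, 1, 1], normalization_constant=5)  # pain_intensity(2)
experience_pain_normalized([1, 1], normalization_constant=2)           # pain_intensity(10)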
I don’t think it’s a priori implausible that this is how brains work when new neurons are added, from one generation to the next or even within a single brain over time, i.e. neurons could adapt to more inputs by becoming less sensitive, but this is speculation on my part and I really don’t know either way. They won’t adapt immediately to the addition/removal of a neuron if it wasn’t going to fire anyway, unless neurons have effects on other neurons beyond the signals they send by firing.
I’m also ignoring inhibitory neurons.