As for the paper, it seems neutral between the view that the raw number of neurons firing is correlated with valence intensity (which is the view I was disputing) and the view that the proportion of neurons firing (relative to some brain region) is correlated with valence intensity. So I'm not sure the paper really cuts any dialectical ice. (Still a super interesting paper, though, so thanks for alerting me to it!)
One argument against proportion mattering (or at least in a straightforward way):
1. Suppose a brain responds to some stimuli and you record its pattern of neuron firings.
2. Then, suppose you could repeat exactly the same pattern of neuron firings, but before doing so, you remove all the neurons that wouldn't have fired anyway. By doing so, you have increased the proportion of neurons that fire compared to 1.
I think 1 and 2 should result in the exact same experiences (and hence the same intensity), since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference seems unphysical to me, because they didn't do anything in 1, where they were present. Or it's a claim that what's experienced in 1 depends on what could have happened instead, which also seems unphysical, since these counterfactuals shouldn't change what actually happened. The number of firing neurons, on the other hand, only tracks actual physical events/interactions.
I had a similar discussion here, although there was pushback against my views.
This seems like a pretty good reason to reject a simple proportion account, and so it does seem like it's really the number firing that matters in a given brain, or the same brain with neurons removed (or something like graph minors, more generally, so also allowing contractions of paths). This suggests that if one brain A can be embedded into another B, so that we can get A from B by removing neurons and/or connections from B, then B has more intense experiences than A, ignoring effects of extra neurons in B that may actually decrease intensity, like inhibition (and competition?).
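To make the contrast concrete, here's a quick sketch (just my own illustration, with a made-up firing pattern): removing the silent neurons leaves the number of firing neurons unchanged but raises the proportion.

# Illustrative only: 1 = a neuron that fired in the recorded pattern, 0 = one that stayed silent.
recorded_pattern = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]          # scenario 1: the full brain region
replayed_pattern = [n for n in recorded_pattern if n == 1]  # scenario 2: silent neurons removed first

def number_firing(pattern):
    return sum(pattern)  # counts only actual firing events

def proportion_firing(pattern):
    return sum(pattern) / len(pattern)  # depends on how many neurons exist, firing or not

number_firing(recorded_pattern), number_firing(replayed_pattern)          # 3 and 3: unchanged
proportion_firing(recorded_pattern), proportion_firing(replayed_pattern)  # 0.3 and 1.0: changes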
I think 1 and 2 should result in the exact same experiences (and hence the same intensity), since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference seems unphysical to me, because they didn't do anything in 1, where they were present.
I'm unclear why you think proportion couldn't matter in this scenario.
I've written a pseudo-program in Python below in which proportion does matter, removing neurons that don't fire alters the experience, and the raw number of neurons involved is incidental to the outputs (10 out of 100 gets the same result as 100 out of 1000) [assuming there is a set of neurons to be checked at all]. I don't believe consciousness works this way in humans or other animals, but I don't think anything about this is obviously incorrect given the constraints of your thought experiment.
One place where this might be incorrect is in checking whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons are actually inactive. But this could be conceived of as a third group of neurons checking for input from this set. And even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
def pain_intensity(level):
    return level  # placeholder so this pseudo-program runs; stands in for producing the experience itself

def experience_pain(nociceptive_neurons_list):
    # nociceptive_neurons_list is a list of neurons represented by 0's and 1's,
    # where 1 is an individual neuron firing and 0 is one not firing
    proportion_firing = proportion_of_neurons_firing(nociceptive_neurons_list)
    if proportion_firing < 0.3:
        return pain_intensity(1)
    elif 0.3 <= proportion_firing < 0.6:
        return pain_intensity(2)
    elif 0.6 <= proportion_firing < 1:
        return pain_intensity(5)
    elif proportion_firing == 1:
        return pain_intensity(10)
    else:
        return pain_intensity(0)

def proportion_of_neurons_firing(nociceptive_neurons_list):
    num_neurons_firing = 0
    for neuron in nociceptive_neurons_list:
        if neuron == 1:
            num_neurons_firing += 1  # add 1 for every neuron that is firing
    return num_neurons_firing / get_number_of_pain_neurons(nociceptive_neurons_list)  # return the proportion firing

def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # get length of list

pain_list_all_neurons = [0, 0, 0, 1, 1]
pain_list_only_firing = [1, 1]

experience_pain(pain_list_all_neurons)   # returns pain_intensity(2)
experience_pain(pain_list_only_firing)   # returns pain_intensity(10)
One place where this might be incorrect is in checking whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons are actually inactive. But this could be conceived of as a third group of neurons checking for input from this set. And even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
Ya, this is where I'd push back. My understanding is that neurons don't "check" whether other neurons are firing; they just receive signals from other neurons. So a neuron (or a brain, generally) really shouldn't be able to tell whether a neuron was not firing or just didn't exist at that moment. This text box I'm typing into can't tell whether the keyboard doesn't exist or just isn't sending input signals when I'm not typing, because (I assume) all it does is check for input.
(I think the computer does "know" if there's a keyboard, though, but I'd guess that's because it's running a current through it or otherwise receiving signals from the keyboard, regardless of whether I type or not. It's also possible to tell that something exists because a signal is received in its absence but not when it's present, like an object blocking light or a current.)
Specifically, I don't think this makes sense within the constraints of my thought experiment, since it requires the brain to be able to tell that a neuron exists at a given moment even if that neuron doesn't fire:
def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # get length of list
It could be that even non-firing neurons affect other neurons in some other important ways I'm not aware of, though.
EDIT: What could make sense is that, instead of this function, you have two separate constants to normalize by, one for each brain, and these constants happen to match the number of neurons in their respective brain regions (5 and 2 in your example). But this would involve further neurons that have different sensitivities to these neurons as inputs between the two brains. And then this doesn't reflect removing neurons from the larger brain while holding all else equal, since you've also replaced neurons or increased their sensitivities. So it wouldn't reflect my thought experiment anymore, which is intended to hold all else equal.
I don't think it's a priori implausible that this is how brains work when new neurons are added, from one generation to the next or even within a single brain over time, i.e. neurons could adapt to more inputs by becoming less sensitive, but this is speculation on my part and I really don't know either way. They won't adapt immediately to the addition/removal of a neuron if it wasn't going to fire anyway, unless neurons have effects on other neurons beyond the signals they send by firing.
I'm also ignoring inhibitory neurons.
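As a rough sketch of what I have in mind here (hypothetical numbers, not a model of any real circuit): a downstream unit only receives spikes from the neurons that actually fire, so any division by "how many neurons there are" has to come from a constant built into that unit, and that constant doesn't change just because silent neurons are removed.

# Illustrative sketch only: silent or absent neurons deliver nothing downstream.
def downstream_response(spikes_received, normalization_constant):
    # The unit can only count the spikes it receives; the denominator is a
    # property of the unit itself (its sensitivity), not of the silent neurons.
    return sum(spikes_received) / normalization_constant

spikes = [1, 1]  # the same two firing neurons in both scenarios

downstream_response(spikes, normalization_constant=5)  # 0.4 in both scenarios if the unit is left alone
downstream_response(spikes, normalization_constant=2)  # 1.0 only if you also swap in a more sensitive unit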
This seems like a pretty good reason to reject a simple proportion account
To be clear, I also reject the simple proportion account. For that matter, I reject any simple account. If there's one thing I've learned from thinking about differences in the intensity of valenced experience, it's that brains are really, really complicated and messy. Perhaps that's the reason I'm less moved by the type of thought experiments you've been offering in this thread. Thought experiments, by their nature, abstract away a lot of detail. But because the neurological mechanisms that govern valenced experience are so complex and so poorly understood, it's hardly ever clear to me which details can be safely ignored. Fortunately, our tools for studying the brain are improving every year. I'm tentatively confident that the next couple decades will bring a fairly dramatic improvement in our neuroscientific understanding of conscious experience.
Fair point. I agree.
Still, I would conclude from my thought experiments that proportion can't matter at all in a simple way (i.e. all else equal, and controlling for the number of firing neurons), even as a small part of the picture, while number still plausibly could in a simple way (all else equal, and controlling for the proportion of firing neurons), at least as a small part of the picture. All else equal, it seems number matters, but proportion does not. But ya, this might be close to useless to know now, since all else is so far from equal in practice. Maybe evolution "renormalizes" intensity when more neurons are added. Or something else we haven't even imagined yet.
That anti-proportionality argument seems tricky to me. It sounds comparable to the following example. You see a grey picture, composed of small black and white pixels. (The white pixels correspond to neuron firings in your example.) The greyness depends on the proportion of white pixels. Now, what happens when you remove the black pixels? That is undefined. It could be that only white pixels are left and you now see 100% whiteness. Or the absent black pixels are still seen as black, which means the same greyness as before. Or removing the black pixels corresponds to making them transparent, and then who knows what you'll see?
I would say my claim is that when you remove pixels, what you see in their place instead is in fact black, an absence of emitted light. There's no functional difference at any moment between a missing pixel and a black pixel if we only distinguish them by how much light they emit, which, in this case, is none for both. I'd also expect this to be what happens with a real monitor/screen in the dark (although maybe there's something non-black behind the pixels; we could assume the lights are transparent).
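Put in terms of a toy calculation (just an illustration with made-up numbers): if what's seen is the light arriving from a fixed patch of the visual field, a removed pixel contributes exactly what a black pixel does, namely nothing, so the greyness only changes if you instead average over the pixels that happen to exist.

# Toy illustration: 1 = white pixel, 0 = black pixel; a missing pixel emits nothing, like a black one.
patch_with_black_pixels = [1, 1, 0, 0, 0]  # the original grey picture
patch_with_pixels_removed = [1, 1]         # the black pixels physically removed

AREA = 5  # the fixed patch of visual field being looked at

def brightness_over_area(emitted_light, area=AREA):
    return sum(emitted_light) / area            # light per unit area: 0.4 in both cases

def brightness_over_existing_pixels(emitted_light):
    return sum(emitted_light) / len(emitted_light)  # averages only over existing pixels: 0.4 vs 1.0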