Excited to read through this! Thanks!
I apologize if you addressed this and I missed it, since I'm still reading.
In response to the section Decision-Making, my impression is that brain parallelism/duplication thought experiments (e.g. Shulman's, Tomasik's) are a reason to expect greater intensity in larger brains, and that evolution would have to tune overall motivation, behaviour and attention to be less sensitive to the intensity of valence, compared to smaller brains, in order to achieve adaptive behaviour.
If you took a person, duplicated their brain and connected the copy to the same inputs and outputs, the system with two brains would experience twice as much valence (assuming the strength of the signal is maintained when it's split to get to each brain). Its outputs would get twice the signal, too, so the system would overreact compared to if there had just been one brain. Setting aside unconscious processing and reflexive behaviour and assuming all neural paths from input to output go through conscious experience (they don't), there would be two ways to fix this and get back the original one-brain behaviour in response to the same inputs, while holding the size of the two brains constant:
1. reduce the intensity of the experiences across the two brains, and
2. reduce the output response relative to intensity of experience across the two brains.
I think we should expect both to happen if we reoptimized this system (holding brain size constant and requiring the original single-brain final behaviour), and I'd expect the system to have 1x to 2x the intensity of experience of the original one brain, and to be 1x to 2x less responsive (at its outputs) for each intensity of experience. In general, making N copies of the same brain (so N times larger) would give 1x to Nx the intensity. This range is not so helpful, though, since it allows us, at the extremes, to weight brain size linearly, or not at all!
I think √N is a natural choice for the amount by which the intensity is increased and the response is decreased, as the mean (or mode?) of a prior distribution, since we use the same factor increase/decrease for each. But this relies on a very speculative symmetry. The factors could also depend on the intensity of the experience instead of being uniform across experiences. On the other hand, Shulman supports at least N times the moral weight, but his argument doesn't involve reoptimizing:
I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds
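To make the √N arithmetic a couple of paragraphs up concrete, here is a minimal sketch (my own illustration; the function name and the equal-split assumption are mine, not from the original comment):

import math

def intensity_multiple(n_copies):
    # Naively, n_copies identical brains give n_copies times the valence signal
    # and n_copies times the output drive. To restore the original one-brain
    # behaviour, the output must be cut by a total factor of n_copies; splitting
    # that cut equally between (1) valence intensity and (2) output sensitivity
    # reduces each by sqrt(n_copies), leaving the whole system with
    # sqrt(n_copies) times the original intensity.
    reduction_per_knob = math.sqrt(n_copies)
    return n_copies / reduction_per_knob  # = sqrt(n_copies)

intensity_multiple(2)  # ~1.41x one brain, within the 1x to 2x range above
intensity_multiple(4)  # 2x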
Some remarks:
This isn't to say we'd weight whole brains, since much of what happens in larger brains is not relevant to intensity of valence.
Evolution may be too unlike this thought experiment, so we shouldn't have much confidence in the argument.
This assumes an additive calculus and no integration between the two brains. I'd expect the N brains to each have less intense valence than the original, so if we were sufficiently prioritarian, we might actually prioritize a single brain over the N after fixing the N. Or maybe this is a reductio of prioritarianism if we think integration doesn't actually matter.
The N-brain system has a lot of redundancy. It could repurpose N-1 of the brains for something else, and just keep the one to preserve the original one-brain behaviour (or behaviour that's at least as adaptive). The extra N-1 brains worth of processing could or could not involve extra valence. I think this is a good response to undermine the whole argument, although we'd have to believe none of the extra total processing is used for extra valence (or that there's less valence in the larger brain, which seems unlikely). The thought experiment only really tells us about making neural networks wider, not deeper.
Maybe some redundancy is useful, too, but how much? Does it give us finer discrimination (more just noticeable differences) or more robust/less noisy discrimination (taking the "consensus" of the activations of more neurons)? It also matters whether this happens in conscious or unconscious processing, but (I assume) human brains are larger than almost all other animals' in similar brain regions, including those related to valence.
Maybe there are genes that contribute to brain size kind of generally (with separate genes for how the extra neurons are used), or for both regions necessary for valence and others that aren't, so intensity was increased as a side-effect of some other useful adaptation, and motivation had to decrease in response.
Setting aside unconscious processing and reflexive behaviour and assuming all neural paths from input to output go through conscious experience (they don't), there would be two ways to fix this and get back the original one-brain behaviour in response to the same inputs, while holding the size of the two brains constant:
1. reduce the intensity of the experiences across the two brains, and
2. reduce the output response relative to intensity of experience across the two brains.
1 could also be divided into further steps for physical stimuli, for example noting that sensory pain perception and the affective response to pain are distinct:
1. reduce the intensity of sensory perception across the two brains for a given stimulus intensity,
2. reduce the intensity of the affective response across the two brains for a given sensory perception intensity, and
3. reduce the output response across the two brains for a given affective intensity.
And repeating the argument in the comment I'm replying to, the prior could be ∛N for physical stimuli. Of course, this illustrates dependence on some pretty arbitrary and empirically ungrounded assumptions about how to divide up a brain. EDIT: It should be N^(1/3)·∛N = N^(2/3). This makes sense intuitively: there are more dimensions along which to reduce the sensitivity, so each can be reduced less.
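A quick arithmetic check of these exponents, under my assumption (not stated in the original comment) that the factor-N cut is split equally across however many steps are available:

N = 2.0  # two duplicated brains, as in the example above
per_step_two = N ** (1 / 2)    # equal split over 2 steps: each step cut by sqrt(N)
per_step_three = N ** (1 / 3)  # equal split over 3 steps: each step cut by less
assert per_step_three < per_step_two  # more dimensions, so each is reduced less
# the identity from the EDIT: N^(1/3) times the cube root of N equals N^(2/3)
assert abs(N ** (1 / 3) * N ** (1 / 3) - N ** (2 / 3)) < 1e-12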
I wouldn't be surprised if the average insect neuron fired more often than the average neuron in larger brains for similar behavioural responses to events, since larger brains could have a lot more room for redundancy. Maybe this can help prevent overfitting in a big brain, like "dropout" used while training deep artificial neural networks. This seems worth checking by comparing actual animal brains. The number of neurons (in the relevant parts of the brain) firing per second seems to matter more than just the number of neurons (in the relevant parts of the brain), and they may not scale linearly with each other in practice.
Hi Michael,
Thanks for the comment and thanks for prompting me to write about these sorts of thought experiments. I confess I've never felt their bite, but perhaps that's because I've never understood them. I'm not sure what the crux of our disagreement is, and I worry that we might talk past each other. So I'm just going to offer some reactions, and I'll let you tell me what is and isn't relevant to the sort of objection you're pursuing.
1. Big brains are not just collections of little brains. Large brains are incredibly specialized (though somewhat plastic).
2. At least in humans, consciousness is unified. Even if you could carve out some smallish region of a human brain and put it in a system such that it becomes a seat of consciousness, that doesn't mean that within the human brain that region is itself a seat of consciousness. (Happy to talk in much more detail about this point if this turns out to be the crux.)
3. Valence intensity isn't controlled by the raw number of neurons firing. I didn't find any neuroscience papers that suggested there might be a correlation between neuron count and valence intensity. As with all things neurological, the actual story is a lot more complicated than a simple metric like neuron count would suggest.
Not sure where this fits in, but if you yoke two brains together, it seems to me you'd have two independent seats of consciousness. There's probably some way of filling out the thought experiment such that that would not be the case, but I think the details actually matter here, so I'd have to see the filled-out thought experiment.
I agree with 1. I think it weakens the force of the argument, but I'm not sure it defeats it.
2 might be a crux. I might say that unity is largely illusory and integration comes in degrees (so it's misleading to count consciousnesses with integers), since we can imagine cutting connections between two regions of a brain one at a time (e.g. between our two hemispheres), and even if you took distinct conscious brains and integrated/unified them, we might think the unified brain would matter at least as much as the separate brains (this is Shulman's thought experiment).
Also related: https://www.nickbostrom.com/papers/experience.pdf
There could also be hidden qualia. There may be roughly insect brains in your brain, but "you" are only connected to a small subset of their neurons (or only get feedforward connections from them). Similarly, you could imagine connecting your brain to someone else's only partially, so that their experiences remain mostly hidden to you.
Maybe a better real-world argument would be split-brain patients? Is it accurate to say there are distinct/separate consciousnesses in each hemisphere after splitting, and if that's the case, shouldn't we expect their full unsplit brain to have at least roughly the same moral weight as the two split brains, even though it's more unified (regardless of any lateralization of valence)? If not, we're suggesting that splitting the brains actually increases moral weight; this isn't a priori implausible, but I lean against this conclusion.
On 3, at least within brains, there seems to be a link between intensity and number of responsive neurons, e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3179932/
In particular, this suggests (assuming a causal role, with more neurons that are responsive causing more intense experiences) that if we got rid of neurons in these regions and so had fewer of them (and therefore fewer of them to be responsive), we would decrease the intensities of the experiences (and valence).
Hey Michael,
Thanks for engaging so deeply with the piece. This is a super complicated subject, and I really appreciate your perspective.
I agree that hidden qualia are possible, but I'm not sure there's much of an argument on the table suggesting they exist. When possible, I think it's important to try to ground these philosophical debates in empirical evidence. The split-brain case is interesting precisely because there is empirical evidence for dual seats of consciousness. From the SEP entry on the unity of consciousness:
In these operations, the corpus callosum is cut. The corpus callosum is a large strand of about 200,000,000 neurons running from one hemisphere to the other. When present, it is the chief channel of communication between the hemispheres. These operations, done mainly in the 1960s but recently reintroduced in a somewhat modified form, are a last-ditch effort to control certain kinds of severe epilepsy by stopping the spread of seizures from one lobe of the cerebral cortex to the other. For details, see Sperry (1984), Zaidel et al. (1993), or Gazzaniga (2000).
In normal life, patients show little effect of the operation. In particular, their consciousness of their world and themselves appears to remain as unified as it was prior to the operation. How this can be has puzzled a lot of people (Hurley 1998). Even more interesting for our purposes, however, is that, under certain laboratory conditions, these patients seem to behave as though two "centres of consciousness" have been created in them. The original unity seems to be gone and two centres of unified consciousness seem to have replaced it, each associated with one of the two cerebral hemispheres.
Here are a couple of examples of the kinds of behaviour that prompt that assessment. The human retina is split vertically in such a way that the left half of each retina is primarily hooked up to the left hemisphere of the brain and the right half of each retina is primarily hooked up to the right hemisphere of the brain. Now suppose that we flash the word TAXABLE on a screen in front of a brain bisected patient in such a way that the letters TAX hit the left side of the retina, the letters ABLE the right side, and we put measures in place to ensure that the information hitting each half of the retina goes only to one lobe and is not fed to the other. If such a patient is asked what word is being shown, the mouth, controlled usually by the left hemisphere, will say TAX while the hand controlled by the hemisphere that does not control the mouth (usually the left hand and the right hemisphere) will write ABLE. Or, if the hemisphere that controls a hand (usually the left hand) but not speech is asked to do arithmetic in a way that does not penetrate to the hemisphere that controls speech and the hands are shielded from the eyes, the mouth will insist that it is not doing arithmetic, has not even thought of arithmetic today, and so on—while the appropriate hand is busily doing arithmetic!
So I don't think it's implausible to assign split-brain patients 2x moral weight.
I also think it's possible to find empirical evidence for differences in phenomenal unity across species. There's some really interesting work concerning octopuses. See, for example, "The Octopus and the Unity of Consciousness". (I might write more about this topic in a few months, so stay tuned.)
As for the paper, it seems neutral between the view that the raw number of neurons firing is correlated with valence intensity (which is the view I was disputing) and the view that the proportional number of neurons firing (relative to some brain region) is correlated with valence intensity. So I'm not sure the paper really cuts any dialectical ice. (Still a super interesting paper, though, so thanks for alerting me to it!)
One argument against proportion mattering (or at least in a straightforward way):
1. Suppose a brain responds to some stimulus and you record its pattern of neuron firings.
2. Then, suppose you could repeat exactly the same pattern of neuron firings, but before doing so, you remove all the neurons that wouldn't have fired anyway. By doing so, you have increased the proportion of neurons that fire compared to 1.
I think 1 and 2 should result in the exact same experiences (and hence the same intensity), since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference seems unphysical to me, because they didn't do anything in 1, where they were present. Or it's a claim that what's experienced in 1 depends on what could have happened instead, which also seems unphysical, since these counterfactuals shouldn't change what actually happened. The number of firing neurons, on the other hand, only tracks actual physical events/interactions.
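A toy version of 1 and 2 (my own illustration, with made-up neuron labels and spike times): deleting the neurons that never fired leaves the recorded firing events untouched and the number firing unchanged, while the proportion firing jumps.

# Scenario 1: the full set of nociceptive neurons; only n3 and n4 ever fire.
all_neurons = {"n0", "n1", "n2", "n3", "n4"}
recorded_firings = {"n3": [0.01, 0.05], "n4": [0.02]}  # neuron -> spike times

# Scenario 2: remove the neurons that never fired, then replay the same pattern.
pruned_neurons = {n for n in all_neurons if n in recorded_firings}
replayed_firings = {n: recorded_firings[n] for n in pruned_neurons}

assert replayed_firings == recorded_firings         # identical physical firing events
print(len(recorded_firings) / len(all_neurons))     # proportion firing in 1: 0.4
print(len(recorded_firings) / len(pruned_neurons))  # proportion firing in 2: 1.0
print(len(recorded_firings))                        # number firing: 2 in both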
I had a similar discussion here, although there was pushback against my views.
This seems like a pretty good reason to reject a simple proportion account, and so it does seem like it's really the number firing that matters in a given brain, or the same brain with neurons removed (or something like graph minors, more generally, so also allowing contractions of paths). This suggests that if one brain A can be embedded into another B, and so we can get A from B by removing neurons and/or connections from B, then B has more intense experiences than A, ignoring effects of extra neurons in B that may actually decrease intensity, like inhibition (and competition?).
I think 1 and 2 should result in the exact same experiences (and hence the same intensity), since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference seems unphysical to me, because they didn't do anything in 1, where they were present.
I'm unclear why you think proportion couldn't matter in this scenario.
I've written a pseudo-program in Python below in which proportion does matter, removing neurons that don't fire alters the experience, and the raw number of neurons involved is incidental to the outputs (10 out of 100 gets the same result as 100 out of 1,000), assuming there is a set of neurons to be checked at all. I don't believe consciousness works this way in humans or other animals, but I don't think anything about this is obviously incorrect given the constraints of your thought experiment.
One place where this might be incorrect is the check for whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons actually be inactive. But this could be conceived of as a third group of neurons checking for input from this set. And even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
def experience_pain(nociceptive_neurons_list):
    # nociceptive_neurons_list is a list of neurons represented by 0's and 1's,
    # where 1 means an individual neuron is firing and 0 means it is not
    proportion_firing = proportion_of_neurons_firing(nociceptive_neurons_list)
    if proportion_firing == 0:
        return pain_intensity(0)
    elif proportion_firing < 0.3:
        return pain_intensity(1)
    elif proportion_firing < 0.6:
        return pain_intensity(2)
    elif proportion_firing < 1:
        return pain_intensity(5)
    else:  # proportion_firing == 1
        return pain_intensity(10)

def proportion_of_neurons_firing(nociceptive_neurons_list):
    num_neurons_firing = 0
    for neuron in nociceptive_neurons_list:
        if neuron == 1:
            num_neurons_firing += 1  # add 1 for every neuron that is firing
    return num_neurons_firing / get_number_of_pain_neurons(nociceptive_neurons_list)  # return the proportion firing

def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # get length of list

def pain_intensity(level):
    # stand-in for whatever downstream processing uses the computed intensity
    # (undefined in the original pseudo-program; added so the example runs)
    return level

pain_list_all_neurons = [0, 0, 0, 1, 1]
pain_list_only_firing = [1, 1]
experience_pain(pain_list_all_neurons)   # returns pain_intensity(2)
experience_pain(pain_list_only_firing)   # returns pain_intensity(10)
One place where this might be incorrect is the check for whether a neuron was firing; this might be seen as violating the constraint that the inactive neurons actually be inactive. But this could be conceived of as a third group of neurons checking for input from this set. And even if this particular program is slightly astray, it seems plausible that an altered version of it would meet the criteria for proportion to matter.
Ya, this is where I'd push back. My understanding is that neurons don't "check" whether other neurons are firing; they just receive signals from other neurons. So a neuron (or a brain, generally) really shouldn't be able to tell whether a neuron was not firing or just didn't exist at that moment. This text box I'm typing into can't tell whether the keyboard doesn't exist or just isn't sending input signals when I'm not typing, because (I assume) all it does is check for input.
(I think the computer does "know" if there's a keyboard, though, but I'd guess that's because it's running a current through it or otherwise receiving signals from the keyboard, regardless of whether I type or not. It's also possible to tell that something exists because a signal is received in its absence but not when it's present, like an object blocking light or a current.)
Specifically, I don't think this makes sense within the constraints of my thought experiment, since it requires the brain to be able to tell that a neuron exists at a given moment even if that neuron doesn't fire:
def get_number_of_pain_neurons(nociceptive_neurons_list):
    return len(nociceptive_neurons_list)  # get length of list
It could be that even non-firing neurons affect other neurons in some other important ways I'm not aware of, though.
EDIT: What could make sense is that, instead of this function, you have two separate constants to normalize by, one for each brain, and these constants happen to match the number of neurons in their respective brain regions (5 and 2 in your example). But this would involve further neurons whose sensitivities to these neurons as inputs differ between the two brains. And then this doesn't reflect removing neurons from the larger brain while holding all else equal, since you've also replaced neurons or increased their sensitivities. So this wouldn't reflect my thought experiment anymore, which is intended to hold all else equal.
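A small sketch of the contrast I have in mind here (my own illustration, reusing the 5-neuron and 2-neuron lists from the example above): normalizing by a fixed per-brain constant baked into downstream sensitivities, versus normalizing by however many neurons happen to exist at the moment.

def proportion_with_fixed_constant(firing_list, brain_constant):
    # brain_constant stands in for downstream sensitivities fixed per brain
    return sum(firing_list) / brain_constant

def proportion_by_counting_neurons(firing_list):
    # requires "knowing" how many neurons exist, including the silent ones
    return sum(firing_list) / len(firing_list)

proportion_with_fixed_constant([0, 0, 0, 1, 1], 5)  # 0.4
proportion_with_fixed_constant([1, 1], 5)           # still 0.4: removing silent neurons changes nothing
proportion_by_counting_neurons([0, 0, 0, 1, 1])     # 0.4
proportion_by_counting_neurons([1, 1])              # 1.0: the proportion jumps when silent neurons are removed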
I don't think it's a priori implausible that this is how brains work when new neurons are added, from one generation to the next or even within a single brain over time, i.e. neurons could adapt to more inputs by becoming less sensitive, but this is speculation on my part and I really don't know either way. They won't adapt immediately to the addition/removal of a neuron if it wasn't going to fire anyway, unless neurons have effects on other neurons beyond the signals they send by firing.
I'm also ignoring inhibitory neurons.
This seems like a pretty good reason to reject a simple proportion account
To be clear, I also reject the simple proportion account. For that matter, I reject any simple account. If there's one thing I've learned from thinking about differences in the intensity of valenced experience, it's that brains are really, really complicated and messy. Perhaps that's the reason I'm less moved by the type of thought experiments you've been offering in this thread. Thought experiments, by their nature, abstract away a lot of detail. But because the neurological mechanisms that govern valenced experience are so complex and so poorly understood, it's hardly ever clear to me which details can be safely ignored. Fortunately, our tools for studying the brain are improving every year. I'm tentatively confident that the next couple of decades will bring a fairly dramatic improvement in our neuroscientific understanding of conscious experience.
Fair point. I agree.
Still, I would conclude from my thought experiments that proportion can't matter at all in a simple way (i.e. all else equal, and controlling for the number of firing neurons), even as a small part of the picture, while number still plausibly could in a simple way (all else equal, and controlling for the proportion of firing neurons), at least as a small part of the picture. All else equal, it seems number matters, but proportion does not. But ya, this might be close to useless to know now, since all else is so far from equal in practice. Maybe evolution "renormalizes" intensity when more neurons are added. Or something else we haven't even imagined yet.
That anti-proportionality argument seems tricky to me. It sounds comparable to the following example. You see a grey picture, composed of small black and white pixels. (The white pixels correspond to neuron firings in your example.) The greyness depends on the proportion of white pixels. Now, what happens when you remove the black pixels? That is undefined. It could be that only white pixels are left and you now see 100% whiteness. Or the absent black pixels are still seen as black, which means the same greyness as before. Or removing the black pixels corresponds to making them transparent, and then who knows what you'll see?
I would say my claim is that when you remove pixels, what you see in their place instead is in fact black, an absence of emitted light. There's no functional difference at any moment between a missing pixel and a black pixel if we only distinguish them by how much light they emit, which, in this case, is none for both. I'd also expect this to be what happens with a real monitor/screen in the dark (although maybe there's something non-black behind the pixels; we could assume the lights are transparent).
All fair points.
So I don't think it's implausible to assign split-brain patients 2x moral weight.
What if we only destroyed 1%, 50% or 99% of their corpus callosum? Would that mean increasing degrees of moral weight from ~1x to ~2x? What is it about cutting these connections that increases moral weight? Is it the increased independence?
Maybe this is an inherently normative question, and there's no fact of the matter about which has "more" experience? Or we can't answer this through empirical research? Or we're just nowhere near doing so?
It's plausible to assign split-brain patients 2x moral weight because it's plausible that split-brain patients contain two independent morally relevant seats of consciousness. (To be clear, I'm just claiming this is a plausible view; I'm not prepared to give an all-things-considered defense of the view.) I take it to be an empirical question how much of the corpus callosum needs to be severed to generate such a split. Exploring the answer to this empirical question might help us think about the phenomenal unity of creatures with less centralized brains than humans, such as cephalopods.
About split brains: those studies are about cognition (having beliefs about what is being seen). Does anyone know if the same happens with affect (valenced experience)? For example: the left brain sees a horrible picture, the right brain sees a picture of the most joyful vacation memory. Now ask the left and right brains how they feel. I imagine such experiments are already being done? My expectation is that if we ask the hemisphere that sees the picture of the vacation memory, that hemisphere will respond that the picture strangely enough gives the subject a weird, unexplainable, kind of horrible feeling instead of pure joy. As if feelings are still unified. Does anyone know about such studies?
I think √N is a natural choice for the amount by which the intensity is increased and the response is decreased, as the mean (or mode?) of a prior distribution, since we use the same factor increase/decrease for each. But this relies on a very speculative symmetry.
I think deriving √N from the geometric mean between 1 and N is not the best approach, even assuming 1 and N are the "true" minimum and maximum scaling factors. The geometric mean between two quantiles whose levels sum to 1 (e.g. 0 and 1) corresponds to the median of a loguniform/lognormal distribution, but what we arguably care about is the mean, which is larger.
I'd claim our prior distribution should be somewhat concentrated around √N and its log roughly symmetric around it, so the EV is plausibly close to √N, but it could be higher if the distribution is not concentrated enough, which is also very plausible.
That makes sense.
Actually, the difference between the mean and median is much smaller than I expected. For 1/N = 221 M / 86 G = 0.00256 (the ratio between the number of neurons of a red junglefowl and of a human, taken from here), the mean and median of a distribution whose 1st and 99th percentiles are 1/N and 1 are as follows (the short calculation after the list reproduces these numbers):
Lognormal distribution ("very concentrated"): 0.1 and 0.05, i.e. the mean is only 2 times as large as the median.
Loguniform ("not concentrated"): 0.2 and 0.05, i.e. the mean is only 3 times as large as the median.
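Here is the short calculation (my own sketch; treating the two values as exact 1st and 99th percentiles is my reading of the estimates above):

import math
from statistics import NormalDist

ratio = 221e6 / 86e9              # red junglefowl / human neuron counts, ~0.0026
lo, hi = ratio, 1.0               # taken as the 1st and 99th percentiles

# Lognormal: fit mu and sigma in log-space from the two percentiles.
z99 = NormalDist().inv_cdf(0.99)  # ~2.326
mu = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * z99)
lognormal_median = math.exp(mu)                 # ~0.05
lognormal_mean = math.exp(mu + sigma ** 2 / 2)  # ~0.11

# Loguniform on [a, b], with the 1st and 99th percentiles at lo and hi.
log_width = (math.log(hi) - math.log(lo)) / 0.98
a = math.exp(math.log(lo) - 0.01 * log_width)
b = math.exp(math.log(a) + log_width)
loguniform_median = math.exp(math.log(a) + log_width / 2)  # ~0.05
loguniform_mean = (b - a) / log_width                      # ~0.17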
The mean moral weight of poultry birds relative to humans of 2 that I estimated here is 10 times as large as the mean from the loguniform distribution just above. This makes me think 2 is not an unreasonably high estimate, especially bearing in mind that there are factors, such as the clock speed of consciousness, which might increase the moral weight of poultry birds relative to humans instead of decreasing it, as the number of neurons does.