AB, I think you’re looking at a different stage of analysis than we are here. We’re looking at how to weigh the intrinsic value of different organisms so that we know how to count them. It sounds to me like you’re discussing the idea of, once we have decided on a particular way to count them, what actions should we take in order to best produce the greatest amount of value.
Adam Shriver
“Dimensions of Pain” workshop: Summary and updated conclusions
It’s an interesting thought, although I’d note that quite a few prominent authors would disagree that the cortex is ultimately what matters for valence even in mammals (Jaak Panksepp being a prominent example). I think it’d also raise interesting questions about how to generalize this idea to organisms that don’t have cortices. Michael used mushroom bodies in insects as an example, but is there reason to think that mushroom bodies in insects are “like the cortex and pallium” but unlike various subcortical structures in the brain that also play a role in integrating information from different sensory sources? I think there needs to be a more principled specification of which types of neurons are ultimately counted.
Hi Rhiza,
I appreciate your interesting point! I would note that as Erich mentioned, we’re interested in moral patiency rather than moral agency, and we ultimately don’t endorse the idea of using neuron counts.
But in response to your comment, there are different ways of trying to spell out why more neurons would matter. Presumably, on some (or most) of those, the way neurons are connected to other neurons matters, and as you know, in babies the connections between neurons are very different from the connections in older individuals. So I think a defender of the neuron count hypothesis would still be able to say, in response to your point, that it’s not just the number of neurons but rather the number of neurons networked together in a particular way that matters.
Here’s the report on conscious subsystems: https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we
Thanks, yeah, I agree those are better than just raw neuron count and we discuss those a bit more in the longer report. But also the objections are meant to apply to even these measures.
Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?
Thanks Michael!
Thanks, I agree on these points. In regards to focusing on neurons involved in pain or other emotions, while I agree this would be the ideal thing to look at, the problem is that there is so much disagreement in the literature about issues that would be relevant for deciding which neurons/brain areas to include. The positions range from the view that certain emotions can be localized to very specific regions to the view that almost the whole brain is involved in every different type of experience, with lots of positions in between. So for that reason we tried to focus on more general criticisms.
No, it’s not a reference to Goodhart’s Law.
It’s just that one reason for liking neuron counts is that we have relatively easy ways of measuring neurons in a brain (or at least relatively easy ways of coming up with a good estimate). However, as noted, there are a lot of other things that are relevant if what we really care about is information-processing capacity, so neuron count isn’t an accurate measure of information-processing capacity.
But if we focus on information-processing capacity itself, we no longer have a good way of easily measuring it (because of all the other factors involved).
This framing comes from Bob Fischer’s comment on an earlier draft, btw.
Wow, this is really cool, Lizka, thanks! I think it’s a really nice visualization of the post and report. I would say, in regards to the larger argument, that @lukeprog is right that hidden qualia/conscious subsystems is another key route people try to take between neuron count and moral weight, so the full picture of the overall debate would probably need to include that. (and again, RP’s report on that should be published next week).
Thanks, this is a great point.
We have a report on conscious subsystems coming out I believe next week, which considers the possibility of non-reportable valenced conscious states.
Also (speaking only about my own impressions), I’d say that while some people who talk about neuron counts might be thinking of hidden qualia (eg Brian Tomasik), it’s not clear to me that that is the assumption of most. I don’t think the hidden qualia assumption, for example, is an explicit assumption of Budolfson and Spears or of MacAskill’s discussion in his book (though of course I can’t speak to what they believe privately).
Anything specific I should look at?
My link above was to a bookmark in the report, which includes an additional argument.
Not sure I agree with the “TL” part haha, but this is a pretty good summary. However, I’d also add that there’s no consensus among people who study general intelligence across species that neuron counts correlate with intelligence (I guess this would go between 1d and 2). I’d also note that the idea that more neurons are active during welfare-relevant experiences is a separate but related point from the idea that more brain volume is correlated with welfare-relevant experiences.
I’d also note that your TL/DR is a summary of the summary, but there are some additional arguments in the report that aren’t included in the summary. For example, here’s a more general argument against using neuron counts in the longer report: https://docs.google.com/document/d/1p50vw84-ry2taYmyOIl4B91j7wkCurlB/edit#bookmark=id.3mp7v7dyd88i
Why Neuron Counts Shouldn’t Be Used as Proxies for Moral Weight
Jason, thanks for the response! I’d definitely be interested in talking more some time...I’m a bit of a novice on this forum so let me know the best way to set something up.
As a first pass at your questions, my chapter “The Unpleasantness of Pain for Humans and Other Animals” gets at some of them.
I think for (1), it depends on how strongly you mean “comes apart.” If we just mean varying one dimension while the other stays constant, or varying one dimension more than the other, there are a huge number of instances where this occurs. If, however, you mean the stronger case of “coming apart” where one dimension is present while the other is completely absent, the evidence is a bit more controversial. Lesion studies like cingulotomies and pain asymbolia cases (resulting from insula lesions) are often cited as examples, but some argue that cingulotomies don’t produce true dissociations and the pain asymbolia cases are pretty rare and a bit strange in other ways. Morphine or other opioids are sometimes said to eliminate pain affect without eliminating pain sensation, but again there are scholars who disagree with that interpretation. There are also many other forms of studies (such as direct stimulation, transcranial magnetic stimulation, deep brain stimulation) that are able to produce differential effects for pain affect and pain sensation, but I don’t think any of them have resulted in complete dissociations.
Regarding (2), I argued in the above chapter that we need better research into the nonverbal effects of sensory-affective dissociation in humans. A lot of the research on the unpleasantness of pain in humans relies too much on verbal self-report, which makes it difficult to know how to map this dimension to other animals (conditioned place aversion is currently one of the ways of trying to test pain affect).
Finally, you also asked: “If sensory intensity and affective intensity are correlated in humans, do you think it’s reasonable to assume that the components are correlated in other mammals?”
So the typical pain signal in humans might follow roughly this pattern: a noxious stimulus causes activity in nociceptors in the peripheral nervous system, which then send a signal to the spinal cord, which transmits information to the thalamus, which then passes the info on to sensory cortical regions and to affective regions (and there are some direct connections between the thalamus and affective regions). I think the magnitude at every step in that process is pretty strongly correlated with the ultimate affective intensity. But we wouldn’t want to say activity in nociceptors is a biomarker for valenced experience despite the fact that it is strongly correlated with it, because we know of many instances where nociceptive activity can come apart from experienced pain. Granted, the sensory dimension of pain is more strongly correlated with experienced unpleasantness, but it seems like the same problem exists. So I guess I just tend to think that “X is a neurobiological marker of Y” requires something stronger than “X is highly correlated with and/or predictive of Y.”
To take one example of why this could matter, an expectation of pain can influence pain unpleasantness more than pain intensity ratings. So if you were using a marker that only predicted pain intensity, you could miss important details about the actual welfare implications of the pains. Many of the pains that occur in agricultural animals or laboratory animals presumably occur in situations where differential influences (such as that from anticipated pain, high anxiety, depression, etc) on the affective components of pain could be important.
Michael, the link between specific brain regions and encoding pain affect is pretty complicated and controversial, as mentioned in the original article. So I would first note that even if we don’t know exactly what specific brain regions are doing, there’s still a lot of evidence (including several lines of evidence cited in the Price article you mention) for a sensory/affective dissociation.
That said, the brain regions most commonly linked to the affective dimension of pain are the anterior cingulate cortex (with some controversy as to whether the relevant region should be referred to as part of the midcingulate rather than the ACC), and the insula cortex (possibly along with the neighboring parietal operculum). But there was also a really impressively thorough recent study by Corder et al. that seemed to show that the basolateral amygdala plays a central role in the unpleasantness of pain: https://science.sciencemag.org/content/363/6424/276 .
One difficulty with all of these regions is that they’re involved in many different cognitive processes, so it’s hard to suss out exactly what role is being played in pain. Part of what was especially cool about the Corder study was that it drilled down to specific neural ensembles within the amygdala that really did seem to play a pain-specific role. Similarly, more fine-grained examinations of the cingulate have helped to clarify which regions are involved in pain vs other processes: https://www.sciencedirect.com/science/article/abs/pii/S0891061815300326 (and see also: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5801068/). The most detailed argument for a central role of the insula in pain affect is Grahek’s book Feeling Pain and Being in Pain, which is a bit dated now, but there’s still a lot of emphasis on the insula as a key area for pain’s unpleasantness. In humans, there’s evidence that lesions to the cingulate and insula can selectively impair pain affect while preserving pain sensation, direct stimulation of the insula can cause expressions of pain, and deep brain stimulation on the cingulate has selectively lessened the affective component of chronic pain in early studies.
So I guess the tl/dr is that the regions most likely to play a central role in pain affect are the anterior midcingulate cortex (which is the region Price referred to as the posterior ACC), the posterior insula and parietal operculum, and (specific neuronal ensembles in) the basolateral amygdala, but there are also a lot of really big questions remaining.
It shouldn’t be used as a unitary measure, but can be included in a combined measure, which is likely to correlate better.