Why Neuron Counts Shouldn’t Be Used as Proxies for Moral Weight

Key Takeaways

  • Several influential EAs have suggested using neuron counts as rough proxies for animals’ relative moral weights. We challenge this suggestion.

  • We take the following ideas to be the strongest reasons in favor of a neuron count proxy:

    • neuron counts are correlated with intelligence and intelligence is correlated with moral weight,

    • additional neurons result in “more consciousness” or “more valenced consciousness,” and

    • increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities.

  • However:

    • with regard to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether greater intelligence in fact predicts greater moral weight;

    • many ways of arguing that more neurons result in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and

    • there is no straightforward empirical evidence or compelling conceptual argument indicating that relative differences in neuron counts within or between species reliably predict welfare-relevant functional capacities.

  • Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely. Rather, neuron counts should be combined with other metrics in an overall weighted score that includes information about whether different species have welfare-relevant capacities.

Introduction

This is the fourth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post is to summarize our full report on the use of neuron counts as proxies for moral weights. The full report can be found here and includes more extensive arguments and evidence.

Motivations for the Report

Can the number of neurons an organism possesses, or some related measure, be used as a proxy for deciding how much weight to give that organism in moral decisions? Several influential EAs have suggested that the answer is “Yes” in cases that involve aggregating the welfare of members of different species (Tomasik 2013, MacAskill 2022, Alexander 2021, Budolfson & Spears 2020).

For the purposes of aggregating and comparing welfare across species, neuron counts are proposed as multipliers. In general, the idea goes, as the number of neurons an organism possesses increases, so too does some morally relevant property related to the organism’s welfare. The morally relevant properties are generally assumed to increase linearly with the number of neurons, though other scaling functions are possible.

Scott Alexander of Slate Star Codex has a passage illustrating how weighting by neuron count might work:

“Might cows be “more conscious” in a way that makes their suffering matter more than chickens? Hard to tell. But if we expect this to scale with neuron number, we find cows have 6x as many cortical neurons as chickens, and most people think of them as about 10x more morally valuable. If we massively round up and think of a cow as morally equivalent to 20 chickens, switching from an all-chicken diet to an all-beef diet saves 60 chicken-equivalents per year.” (2021)

This methodology has important implications for assigning moral weight. For example, the average number of neurons in a human (86,000,000,000) is roughly 390 times the average number of neurons in a chicken (220,000,000), so we would treat the welfare units of humans as roughly 390 times more valuable. If we accepted the strongest version of the neuron count hypothesis, determining the moral weight of different species would simply be a matter of using the best current techniques (such as those developed by Herculano-Houzel) to determine the average number of neurons in different species.
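To make this arithmetic concrete, here is a minimal sketch in Python (our illustration, not from the cited authors; the neuron counts are the approximate averages given above, and the logarithmic alternative is just one example of a non-linear scaling choice):

```python
import math

# Approximate average neuron counts cited above.
NEURON_COUNTS = {
    "human": 86_000_000_000,
    "chicken": 220_000_000,
}

def moral_weight(species: str, scaling=lambda n: n) -> float:
    """Moral-weight multiplier under a chosen scaling function.

    The default is the linear scaling usually assumed in these proposals;
    other scaling functions yield very different ratios.
    """
    return scaling(NEURON_COUNTS[species])

# Linear scaling: one human welfare unit counts ~391 chicken units.
print(moral_weight("human") / moral_weight("chicken"))                       # ~390.9

# Logarithmic scaling compresses the gap to ~1.3x.
print(moral_weight("human", math.log) / moral_weight("chicken", math.log))  # ~1.31
```

Notice how much work the choice of scaling function does: the same neuron counts yield a roughly 390x ratio under linear scaling but only about 1.3x under logarithmic scaling.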

Arguments Connecting Neuron Counts to Moral Weight

There are several practical advantages to using neuron counts as a proxy for moral weight. Neuron counts are quantifiable, they are measurable in principle, they correlate at least to some extent with cognitive abilities that are plausibly relevant for moral standing, and they correlate to some extent with our intuitions about the moral status of different species. But these advantages show only that, if neuron counts were a good proxy for moral weight, that would be very convenient for us. We still need an argument for believing, in the first place, that neuron counts are in fact connected to moral weight in a reliable way.

There are very few explicit arguments explaining this connection. However, we believe the following possibilities are the strongest reasons in favor of connecting neuron counts to moral weight: (1) neuron counts are correlated with intelligence and intelligence is correlated with moral weight, (2) additional neurons result in “more consciousness” or “more valenced consciousness,” and (3) increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities. In a separate report in this sequence, we consider the possibility that (4) greater numbers of neurons lead to more “conscious subsystems” that have associated moral weight.

Neuron Counts as a Stand-In for Information-Processing Capacity

Before summarizing our reservations about these arguments, it’s important to flag a common assumption: that greater information-processing capacity is ultimately what matters for moral weight, with “brain size” or “neuron count” taken as indicators of that capacity. However, brain size measured by mass or volume turns out not to directly predict the number of neurons in an organism, both because neurons themselves can be different sizes and because brains can contain differing proportions of neurons and connective tissue. Moreover, different types of species face divergent evolutionary pressures that can influence neuron size. For example, avian species tend to have smaller neurons because they need to keep weight down in order to fly. Aquatic mammals, by contrast, face less pressure from the constraints of gravity than land mammals, and so can have larger brains and larger neurons without as significant an evolutionary cost.

Moreover, the raw number of neurons an organism possesses does not tell the full story about information-processing capacity. That’s because the number of computations that can be performed over a given amount of time in a brain also depends on many other factors, such as (1) the number of connections between neurons, (2) the distance between neurons (with shorter distances allowing faster communication), (3) the conduction velocity of neurons, and (4) the refractory period, which determines how much time must elapse before a given neuron can fire again. In some ways, these additional factors can actually favor smaller brains (Chittka 2009).
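To illustrate why these timing factors matter, here is a toy calculation (entirely our own, with invented parameter values; it is not a model from the report or from Chittka’s work):

```python
# Toy model: effective operations per second given neuron count and the
# timing factors above. All parameter values are invented for illustration.

def ops_per_second(n_neurons: float,
                   mean_connection_length_m: float,
                   conduction_velocity_m_s: float,
                   refractory_period_s: float) -> float:
    # A neuron's effective cycle time is its refractory period plus the
    # time for a signal to traverse a typical connection.
    transit_time = mean_connection_length_m / conduction_velocity_m_s
    return n_neurons / (refractory_period_s + transit_time)

# Hypothetical large brain: 10x the neurons, but longer, slower connections
# and a longer refractory period.
large = ops_per_second(1e9, 0.05, 5.0, 0.010)    # 5.0e10 ops/s
# Hypothetical small brain: fewer neurons, short fast connections.
small = ops_per_second(1e8, 0.001, 10.0, 0.001)  # ~9.1e10 ops/s
print(small > large)  # True: the smaller brain computes more per second here
```

Under these made-up numbers, the brain with a tenth of the neurons performs nearly twice as many operations per second.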

So neuron counts are not perfect predictors of information-processing capacity, and the exact extent to which they are predictive remains to be determined. Importantly, although one initial practical consideration in favor of neuron counts is that they are measurable in principle, overall information-processing capacity cannot currently be measured accurately across organisms, because measuring all of the relevant factors mentioned above for entire brains is not yet possible. With that in mind, let’s consider the three most promising arguments for using neuron counts as proxies for moral weights.

Neuron counts correlate with intelligence, and intelligence correlates with moral weight

Many people seem to believe, at least implicitly, that more intelligent animals have more moral weight. Thus they tend to think that humans have the most moral weight, that chimpanzees have more moral weight than most other animals, that dogs have more moral weight than goldfish, and so on.

However, as noted above, we might question how well neuron counts predict overall information-processing capacity and, thus, presumably, intelligence. We can, additionally, question whether intelligence truly influences moral weight.

Though there is not a large literature connecting neuron counts to sentience, welfare capacity, or valenced experience, there is a reasonably large scientific literature examining the connection of neuron counts to measures of intelligence in animals. The research is still ongoing and unsettled, but we can draw a few lessons from it.

First, it seems hard to deny that there’s one sense in which the increased processing power enabled by additional neurons correlates with moral weight, at least insofar as welfare-relevant abilities all seem to require some minimum number of neurons. Pains, for example, would seem to minimally require some representation of the body in space, some ability to quantify intensity, and some connections to behavioral responses, all of which require a certain degree of processing power. Like pains, each welfare-relevant functional capacity requires at least some minimum number of neurons.

But aside from needing a baseline number of neurons to cross certain morally relevant thresholds, things become less clear, at least on a hedonistic account of well-being where what matters is the intensity and duration of valenced experience. It seems conceptually possible to increase intelligence without increasing the intensity of experience, and likewise possible to imagine the intensity of experience increasing without a corresponding increase in intelligence. Furthermore, within our own species we do not tend to associate greater intelligence with greater moral weight: most people would not think it acceptable to dismiss the pains of children, the elderly, or the cognitively impaired in virtue of their scoring lower on intelligence tests.

Finally, it’s worth noting that some people have proposed precisely the opposite intuition: that intelligence can blunt the intensity of certain emotional states, particularly suffering. My intense pain feels less bad if I can tell myself that it will be over soon. According to this account, supported by evidence of top-down cognitive influences on pain, our intelligence can sometimes provide us with tools for blunting the impact of particularly intense experiences, while other less cognitively sophisticated animals may lack these abilities.

More Neurons = More Valenced Consciousness?

One might think that adding neurons increases the overall “amount” of consciousness. In the larger report, we consider the following empirical arguments in favor of this claim.

Some studies show that the volume of brain regions associated with valenced experience correlates with the intensity of that experience; for example, one study found that cortical thickness in a particular region increased along with pain sensitivity. Do these studies demonstrate that more neurons lead to more intense experiences? The problem with this interpretation is that other studies show correlations in the opposite direction, such as studies finding that increased pain is correlated with decreased brain volume in areas associated with pain. There is simply no reliable relationship between brain volume and intensity of experience.

Similarly, many brain imaging studies show that increased activation in particular brain regions is associated with increases in certain valenced states such as pleasure or pain, which might initially be taken as evidence that “more active neurons” = “more experience.” However, neuroimaging researchers are highly invested in finding an objective biomarker for pain and in predicting individual differences in pain responses, and they have performed hundreds of experiments looking at brain activation during pain. The current consensus is that there is not yet a reliable way of identifying pain across all relevant conditions merely by looking at brain scans, and no leading neuroscientist has suggested that the mere number of neurons active in a particular region predicts “how much pain” a person might experience. In general, it is the patterns of activation, and how those activations are connected to observable outputs, that matter most, rather than the raw number of neurons involved.

Increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities

As noted above, it certainly is true that various capacities linked to intelligent behavior and welfare require some minimal degree of information-processing capacity. So might neuron counts be considered a proxy for moral weight simply in virtue of being predictive of how many morally relevant thresholds an organism has crossed?

To see the difficulty with this, consider the extremely impressive body of literature studying the cognitive capacities of bees. Bees have a relatively small number of neurons, yet have been shown to possess sophisticated capacities that were previously thought to require large brains, including cognitive flexibility, cross-modal recognition of objects, and play behavior. There is, in general, no well-defined relationship between the number of neurons an organism has and which of these capacities it possesses.

But without such a correlation, it would be unwise to use neuron counts as a proxy for certain welfare-relevant capacities rather than simply testing whether animals in fact have those capacities. And there are many such proposed capacities. Varner (1998) has suggested that reversal learning may be an important capacity related to moral status. Colin Allen (2004) has suggested that trace conditioning might be an important marker of the capacity for conscious experience. Birch, Ginsburg, and Jablonka (2020) have raised the possibility that unlimited associative learning is the key indicator of consciousness. And Gallup (1970) famously proposed that mirror self-recognition is a necessary condition for self-awareness.

Each of these views should be considered and weighed on its own. If there’s a plausible argument connecting a capacity to moral weight, we see no reason to discard it in favor of a unitary moral-weight measure focused only on the number of neurons. As such, it would be far preferable to include measures of plausibly relevant proxies along with neuron counts rather than use neuron counts as the sole measure of moral weight.

Conclusion

To summarize, one primary attraction of neuron counts as a metric is their measurability. However, the more measurable a metric we choose, the less accurate it is; and the more we prioritize accuracy, the less we are currently able to measure. As such, the primary attraction of neuron counts is an illusion that vanishes once we attempt to reach out and grasp it.

And the three strongest arguments in favor of using neuron counts as a proxy all fail. We can question how well neuron counts are correlated with intelligence and also doubt that intelligence is correlated with moral weight. Arguments suggesting that additional neurons result in “more consciousness” or “more valenced consciousness” appear to be inconsistent with and unsupported by current views of how the brain works. And though it is true that increasing numbers of neurons are required to reach thresholds of minimal information capacity required for morally relevant cognitive abilities, this seems most compatible with using neuron counts in a combined measure rather than using them as a proxy for other capacities that can be independently measured.

Given this, we suggest that the best role for neuron counts in an assessment of moral weight is as a weighted contributor, one among many, to an overall estimation of moral weight. Neuron counts likely provide some useful insights about how much information can be processed at a particular time, but it seems unlikely that they would provide more useful information individually than a function that takes them into account along with other plausible markers of sentience and moral significance. Developing such a function has its own difficulties, but is preferable to relying solely on one metric which deviates from other measures of sentience and intelligence.
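For illustration, the kind of function we have in mind might look like the following sketch (the proxies, weights, normalization, and example values are placeholders of our own; choosing them well is precisely the open research problem):

```python
import math

HUMAN_NEURONS = 86_000_000_000  # normalization anchor (approximate average)

def combined_moral_weight(neuron_count: float,
                          capacity_markers: dict[str, bool],
                          neuron_weight: float = 0.3) -> float:
    # Log-normalize the neuron count so it contributes a bounded score
    # rather than dominating the total.
    neuron_score = math.log(neuron_count) / math.log(HUMAN_NEURONS)
    # Fraction of welfare-relevant capacities the species has been shown
    # to possess in independent testing.
    capacity_score = sum(capacity_markers.values()) / len(capacity_markers)
    return neuron_weight * neuron_score + (1 - neuron_weight) * capacity_score

# Hypothetical marker values for a chicken, for illustration only.
print(combined_moral_weight(
    220_000_000,
    {"reversal_learning": True, "trace_conditioning": True,
     "unlimited_associative_learning": False, "mirror_self_recognition": False},
))  # ~0.58 under these made-up weights
```

The numbers such a sketch produces mean nothing in themselves; the point is structural. Neuron counts enter as one bounded input among several, so that independently tested capacities, not brain size alone, drive the result.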

Acknowledgments

This research is a project of Rethink Priorities. It was written by Adam Shriver. Thanks to Bob Fischer, Michael St. Jules, Jason Schukraft, Marcus Davis, Meghan Barrett, Gavin Taylor, David Moss, Joseph Gottlieb, Mark Budolfson, and the audience at the 2022 Animal Minds Conference at UCSD for helpful feedback on the report. If you’re interested in RP’s work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.