TL;DR:
Reasons to think that “neuron count” correlates with “moral weight”:
1. Neuron counts correlate with our intuitions of moral weights.
2. “Pains, for example, would seem to minimally require at least some representation of the body in space, some ability to quantify intensity, and some connections to behavioral responses, all of which require a certain degree of processing power.”
3. “There are studies that show increased volume of brain regions correlated with valenced experience, such as a study showing that cortical thickness in a particular region increased along with pain sensitivity.” (But the opposite is also true; see 6 in the list of reasons against, below.)
Reasons to think that “neuron count” does NOT correlate with “moral weight”:
1. There’s more to information processing capacity than neuron count (see the toy sketch after this list). There’s also:
   a. Number of neural connections (synapses)
   b. Distance between neurons (more distance → more latency)
   c. Conduction velocity of neurons
   d. Neuron refractory period (“rest time” between neuron activations)
2. “There’s no consensus among people who study general intelligence across species that neuron counts correlate with intelligence.”
3. “It seems conceptually possible to increase intelligence without increasing the intensity of experience.”
4. Within humans, we don’t think that more intelligence implies more moral weight: we don’t generally give less moral weight to children, the elderly, or the cognitively impaired.
5. Top-down cognitive influences on pain suggest that intelligence may actually mitigate suffering.
6. There are “studies showing that increased pain is correlated with decreased brain volume in areas associated with pain.”
7. Hundreds of brain imaging experiments haven’t uncovered any simple relationship between the quantity of neurons firing and the “amount of pain.”
8. Bees have small brains, but show “cognitive flexibility, cross-modal recognition of objects, and play behavior.”
9. There are competing ideas for correlates of moral weight/consciousness/self-awareness:
   a. Reversal learning
   b. Trace conditioning
   c. Unlimited associative learning
   d. Mirror self-recognition
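To make point 1 in the reasons-against list concrete, here is a toy back-of-envelope sketch (my own illustration with made-up numbers, not taken from the report): two brains with identical neuron counts can differ in how far signals must travel, how fast they travel, and how soon neurons can fire again, so neuron count alone under-determines processing capacity.

```python
# Toy illustration only: all numbers are made-up assumptions, not data from the report.
# It shows that path length, conduction velocity, and refractory period bound how often
# a signal can traverse a neural path, independently of how many neurons the brain has.

def max_signal_cycles_per_second(path_length_m: float,
                                 conduction_velocity_m_s: float,
                                 refractory_period_s: float) -> float:
    """Crude upper bound on how many times per second a signal can cross a path
    and the neurons along it can recover enough to fire again."""
    latency = path_length_m / conduction_velocity_m_s  # travel time along the path
    return 1.0 / (latency + refractory_period_s)       # rate limited by latency + recovery time

# Two hypothetical brains with the SAME neuron count but different wiring:
compact_slow = max_signal_cycles_per_second(0.01, 10.0, 0.002)   # 1 cm paths, slower fibers
large_fast   = max_signal_cycles_per_second(0.50, 100.0, 0.002)  # 50 cm paths, fast myelinated fibers

print(f"compact brain:    ~{compact_slow:.0f} signal cycles per second")
print(f"spread-out brain: ~{large_fast:.0f} signal cycles per second")
```

Neuron count never appears in the calculation; wiring geometry and timing do the work, which is the point of item 1.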
Not sure I agree with the “TL” part haha, but this is a pretty good summary. However, I’d also add that there’s no consensus among people who study general intelligence across species that neuron counts correlate with intelligence (I guess this would go between 1d and 2) and also that I think the idea that more neurons are active during welfare-relevant experiences is a separate but related point to the idea that more brain volume is correlated with welfare-relevant experiences.
I’d also note that your TL;DR is a summary of the summary, but there are some additional arguments in the report that aren’t included in the summary. For example, here’s a more general argument against using neuron counts in the longer report: https://docs.google.com/document/d/1p50vw84-ry2taYmyOIl4B91j7wkCurlB/edit#bookmark=id.3mp7v7dyd88i
Thanks for the feedback.
Well, yeah. Maybe. It’s also about making the structure more legible.
Anything specific I should look at?
My link above was to a bookmark in the report, which includes an additional argument.
Thanks for doing this. The post is too long and could have been dot points. I want to see more TL;DRs like this.