But didn’t RP prove that cortical neuron counts are fake?
Hardly. They gave a bunch of reasons why we might be skeptical of neuron count (summarised here). But I think the reasons in favour of using cortical neuron count as a proxy for moral weight are stronger than the objections.
I don’t think the reasons in favour of using neuron counts provide much support for weighing by neuron counts or any function of them in practice. Rather, they primarily support using neuron counts to inform missing data about functions and capacities that do determine welfare ranges (EDIT: or moral weights), in models of how welfare ranges (EDIT: or moral weights) are determined by functions and capacities. There’s a general trend that animals with more neurons have more capacities and more sophisticated versions of some capacities.
However, most functions and capacities seem pretty irrelevant to welfare ranges, even if relevant for what welfare is realized in specific circumstances. If an animal can already experience excruciating pain, presumably near the extreme of their welfare range, what do humans have that would make excruciating pain far worse for us in general, or otherwise give us far wider welfare ranges? And why?
“If an animal can already experience excruciating pain, presumably near the extreme of their welfare range, what do humans have that would make excruciating pain far worse for us in general, or otherwise give us far wider welfare ranges? And why?”
We have a far more advanced consciousness and self awareness, that may make our experience of pain orders of magnitude worse (or at least different) than for many animals—or not.
I think there is far more uncertainty in this question than many acknowledge. RP acknowledge the uncertainty, but I don’t think they present it as clearly as they could. Extreme pain for humans could be a wildly different experience than it is for animals, or it could be quite similar. Even if we assume hedonism (which I don’t), we can oversimplify the concepts of “sentience” and “welfare ranges” to feel like we have more certainty over these numbers than we do.
I agree that that’s possible and worth including under uncertainty, but it doesn’t answer the “why”, so it’s hard to justify giving it much or disproportionate weight (relative to other accounts) without further argument. Why would self-awareness, say, make being in intense pain orders of magnitude worse?
And are we even much more self-aware than other animals when we are in intense pain? One of the functions of pain is to take our attention, and it does so more the more intense the pain. That might limit the use of our capacities for self-awareness: we’d be too focused on and distracted by the pain. Or, maybe our self-awareness or other advanced capacities distract us from the pain, making it less intense than in other animals.
(My own best guess is that at the extremes of excruciating pain, sophisticated self-awareness makes little difference to the intensity of suffering.)
By that logic, two chickens have the same moral weight as one chicken because they have the same functions and capacities, no?
They won’t be literally identical: they’ll differ in many ways, like physical details, cognitive expression and behavioural influence. They’re separate instantiations of the same broad class of functions or capacities.
I would say the number of times a function or capacity is realized in a brain can be relevant, but it seems pretty unlikely to me that a person can experience suffering hundreds of times simultaneously (and hundreds of times more than chickens, say). Rethink Priorities looked into these kinds of views here. (I’m a co-author on that article, but I don’t work at Rethink Priorities anymore, and I’m not speaking on their behalf.)
FWIW, I started very pro-neuron counts (I defended them here and here), and then others at RP, collaborators, and further investigation of my own moved me away from the view.
Oh, interesting. That moves my needle.
As I see it, we basically have a choice between:
simple methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (cortical neuron count)
complex methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (other stuff)
I much prefer the simple methodology, where we can clearly see what assumptions we’re making and how they propagate out.
There are other simple methodologies that make vaguely plausible guesses (under hedonism), like:
welfare ranges are generally similar or just individual-relative across species capable of suffering and pleasure (RP’s Equality Model),
the intensity categories of pain defined by the Welfare Footprint Project (or some other functionally defined categories) have similar ranges across the animals that have them; we then assign numerical weights to those categories, so that we weigh “disabling pain” similarly across animals, including humans,
pain intensity scales with the number of just-noticeable differences in pain intensity away from neutral, so we weigh individuals by their number of such differences (RP’s Just Noticeable Difference Model[1]).
In my view, 1, 2 and 3 are more plausible and defensible than views that would give you (cortical or similar function) neuron counts as a good approximation. I also think the right answer, if there is one (so excluding the individual-relative interpretation of 1), will look like 2, but more complex and with possibly different functions. RP explicitly considered 1 and 3 in its work. These three models give chickens >0.1x humans’ welfare ranges:
Model 1 would give the same welfare ranges across animals, including humans, conditional on capacity for suffering and pleasure.
Model 2 would give the same sentience-conditional welfare ranges across mammals (including humans) and birds, at least. My best guess is also the same across all vertebrates. I’m less sure that invertebrates can experience similarly intense pain even conditional on sentience, but it’s not extremely unlikely.
Model 3 would probably pretty generally give nonhuman animals welfare ranges at least ~0.1x humans’, conditional on sentience, according to RP.[2]
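To make the contrast concrete, here is a minimal sketch of how the chicken-to-human welfare-range ratio diverges across these models versus a neuron-count proxy. Every number in it is a hypothetical placeholder for illustration only (the neuron counts are rough orders of magnitude, and the JND counts are invented), not RP’s actual estimates:

```python
# Sketch: chicken/human welfare-range ratios under different simple models.
# All inputs are illustrative assumptions, not RP's figures.

HUMAN_CORTICAL_NEURONS = 16e9   # rough order of magnitude, for illustration
CHICKEN_PALLIAL_NEURONS = 2e8   # rough order of magnitude, for illustration

def neuron_count_model():
    """Welfare range proportional to cortical (or analogous-structure) neuron count."""
    return CHICKEN_PALLIAL_NEURONS / HUMAN_CORTICAL_NEURONS

def equality_model():
    """Model 1: equal welfare ranges, conditional on capacity for suffering and pleasure."""
    return 1.0

def pain_category_model():
    """Model 2: functionally defined pain categories (e.g. 'disabling pain')
    span similar ranges across the animals that have them."""
    return 1.0

def jnd_model(chicken_jnds=50, human_jnds=100):
    """Model 3: weigh by the number of just-noticeable differences from neutral.
    The JND counts here are made up for illustration."""
    return chicken_jnds / human_jnds

for name, model in [("neuron count", neuron_count_model),
                    ("1. equality", equality_model),
                    ("2. pain categories", pain_category_model),
                    ("3. JND", jnd_model)]:
    print(f"{name}: chicken/human ratio = {model():.3g}")
```

Under these placeholder inputs, models 1–3 all land at or above 0.1x, while the neuron-count proxy lands around 0.01x, which is the disagreement at stake.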
You can probably come up with some models that assign even lower welfare ranges to other animals, too, of course, including some relatively simple ones, although not simpler than 1.
Note that using cortical (or similar function) neuron counts also makes important assumptions about which neurons matter and when. Not all plausibly conscious animals have cortices, so you need to identify which structures have similar roles, or else, chauvinistically, rule these animals out entirely regardless of their capacities. So this approach is not that simple, either. Just counting all neurons would be simpler.
(I don’t work for RP anymore, and I’m not speaking on their behalf.)
Although we could use a different function of the number instead, for increasing or diminishing marginal returns to additional JNDs.
Maybe lower for some species RP didn’t model, e.g. nematodes, tiny arthropods?