By that logic, two chickens have the same moral weight as one chicken because they have the same functions and capacities, no?
They won’t be literally identical: they’ll differ in many ways, like physical details, cognitive expression and behavioural influence. They’re separate instantiations of the same broad class of functions or capacities.
I would say the number of times a function or capacity is realized in a brain can be relevant, but it seems pretty unlikely to me that a person can experience suffering hundreds of times simultaneously (and hundreds of times more than chickens, say). Rethink Priorities looked into these kinds of views here. (I’m a co-author on that article, but I don’t work at Rethink Priorities anymore, and I’m not speaking on their behalf.)
FWIW, I started out very pro-neuron counts (I defended them here and here), and then others at RP, collaborators, and further investigation of my own moved me away from the view.
Oh, interesting. That moves my needle.
As I see it, we basically have a choice between:
a simple methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (cortical neuron count)
a complex methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (other stuff)
I much prefer the simple methodology, where we can clearly see what assumptions we’re making and how they propagate out.
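Concretely, the simple methodology here is just a ratio. A minimal sketch, where the neuron counts are placeholder figures chosen for illustration rather than measured values, and the choice of which brain regions to count is itself an assumption:

```python
# Toy sketch of a cortical-neuron-count moral-weight model.
# The counts below are placeholder figures, not measured data.

CORTICAL_NEURON_COUNTS = {
    "human": 16_000_000_000,  # placeholder for human cortical neurons
    "chicken": 200_000_000,   # placeholder for chicken cortex-like (pallial) neurons
}

def welfare_weight(species: str, reference: str = "human") -> float:
    """Welfare-range weight as a straight ratio of cortex-like neuron counts."""
    return CORTICAL_NEURON_COUNTS[species] / CORTICAL_NEURON_COUNTS[reference]

print(welfare_weight("chicken"))  # 0.0125 with these placeholder numbers
```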
There are other simple methodologies that make vaguely plausible guesses (under hedonism), like:
1. welfare ranges are generally similar or just individual-relative across species capable of suffering and pleasure (RP’s Equality Model),
2. the intensity categories of pain defined by the Welfare Footprint Project (or some other functionally defined categories) have similar ranges across the animals that have them, and we assign numerical weights to those categories, so that we weigh “disabling pain” similarly across animals, including humans,
3. pain intensity scales with the number of just-noticeable differences in pain intensity away from neutral across individuals, so we just weigh by their number (RP’s Just Noticeable Difference Model[1]).
In my view, 1, 2 and 3 are more plausible and defensible than views that would give you (cortical or similar function) neuron counts as a good approximation. I also think the actual right answer, if there is one (so excluding the individual-relative interpretation of 1), will look like 2, but more complex and with possibly different functions. RP explicitly considered 1 and 3 in its work. These three models give chickens >0.1x humans’ welfare ranges (see the rough sketch after the model descriptions below):
Model 1 would give the same welfare ranges across animals, including humans, conditional on capacity for suffering and pleasure.
Model 2 would give the same sentience-conditional welfare ranges across mammals (including humans) and birds, at least. My best guess is that they’re also the same across all vertebrates. I’m less sure that invertebrates can experience similarly intense pain even conditional on sentience, but it’s not extremely unlikely.
Model 3 would probably pretty generally give nonhuman animals welfare ranges at least ~0.1x humans’, conditional on sentience, according to RP.[2]
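To make the comparison concrete, here is a rough sketch of the chicken-to-human multipliers these models imply, conditional on sentience. The JND counts are hypothetical numbers chosen only to show the arithmetic, not RP’s estimates:

```python
# Sketch of chicken:human welfare-range multipliers under Models 1-3,
# conditional on chicken sentience. The figures are illustrative assumptions,
# not Rethink Priorities' published estimates.

# Model 1 (Equality Model): equal welfare ranges across sentient species.
equality_weight = 1.0

# Model 2 (shared pain-intensity categories): "disabling pain" and the other
# categories get the same weight across species, so birds again come out ~1.
category_weight = 1.0

# Model 3 (Just Noticeable Differences): weight by the number of JNDs between
# neutral and maximal pain intensity; the counts below are hypothetical.
jnd_counts = {"human": 100, "chicken": 20}
jnd_weight = jnd_counts["chicken"] / jnd_counts["human"]

for name, weight in [("Model 1", equality_weight),
                     ("Model 2", category_weight),
                     ("Model 3", jnd_weight)]:
    print(f"{name}: {weight:.2f} (exceeds 0.1x human: {weight > 0.1})")
```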
You can probably come up with some models that assign even lower welfare ranges to other animals, too, of course, including some relatively simple ones, although not simpler than 1.
Note that using cortical (or similar function) neuron counts also makes important assumptions about which neurons matter and when. Not all plausibly conscious animals have cortices, so you need to identify which structures have similar roles, or else, chauvinistically, rule these animals out entirely regardless of their capacities. So this approach is not that simple, either. Just counting all neurons would be simpler.
(I don’t work for RP anymore, and I’m not speaking on their behalf.)
[1] Although we could use a different function of the number instead, to allow for increasing or diminishing marginal returns to additional JNDs.
[2] Maybe lower for some species RP didn’t model, e.g. nematodes, tiny arthropods?