All arguments based on behavioral similarity only prove that we all come from evolution.
Shrimps have a welfare range of 0.430 (= 0.426*1.01) under Rethink Priorities’s (RP’s) quantitative model, which does not rely on behavioral proxies.
This model aggregates several quantifiably characterizable physiological measurements related to activity in the pain-processing system. Many of these measures lacked data for certain species, so our numbers reflect the averages of those measures for which data were available for the different taxa. In some cases, we used surrogates when data for the specific species were not available. The advantage of this approach is that all of the results lend themselves to model construction and plausibly have some connection to welfare. However, many of these features are found in the peripheral nervous system and, as such, are not necessarily related to conscious experiences, which presumably take place in the central nervous system. This approach could also be thought to reflect the flaw of focusing only on what is easily measurable, at the expense of features that are likely to be more directly relevant.
The quantitative model relies on the following proxies (a toy aggregation sketch follows the list):
Nociceptive just noticeable differences (JNDs).
Maximum nociceptor response (spikes/s).
Nociceptor density (NF/mm or NF/mm²).
Substance P concentration (ng/g).
Change in stress hormone (% from control).
Change in heart rate (% from control).
Change in respiration rate (% from control).
Brain mass to body mass ratio.
Encephalization quotient.
Neuron packing density.
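For concreteness, here is a minimal sketch, in Python, of how a model of this shape might combine such proxies: score each measure relative to the human value, drop the measures with no data, and average what remains into a welfare-range estimate. This is only an illustration of the aggregation idea, not RP's actual code, and every number in it is a placeholder.

```python
from statistics import mean
from typing import Dict, Optional

# Hypothetical proxy scores, each expressed as a proportion of the human value
# (1.0 = human-level). None means no data were found for that species.
# All numbers below are placeholders for illustration, not RP's estimates.
shrimp_proxies: Dict[str, Optional[float]] = {
    "nociceptive_jnd": 0.9,
    "max_nociceptor_response": 0.8,
    "nociceptor_density": None,        # no data -> excluded from the average
    "substance_p_concentration": 0.7,
    "stress_hormone_change": 1.0,
    "heart_rate_change": 1.1,          # can exceed 1 if the response is larger than a human's
    "respiration_rate_change": None,
    "brain_mass_to_body_mass": 0.3,
    "encephalization_quotient": 0.05,
    "neuron_packing_density": 0.2,
}

def welfare_range_estimate(proxies: Dict[str, Optional[float]]) -> float:
    """Average the proxy scores for which data exist (human baseline = 1)."""
    scored = [v for v in proxies.values() if v is not None]
    return mean(scored)

print(f"Illustrative welfare range: {welfare_range_estimate(shrimp_proxies):.3f}")
```

Note how sensitive the output is to which proxies make it into the dictionary: averaging response measures that track human values closely pulls the estimate toward 1, while a neuron-count-style ratio would pull it toward zero, which is essentially the disagreement in the rest of this thread.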
To arrive at a super low welfare range for shrimp, one has to be confident not only that behavioral proxies do not matter for welfare, but also that a very specific structural criterion (e.g. number of neurons, which has major flaws as a proxy) is practically all that matters.
neuron count ratios (Shrimp=0.01% of human)
RP estimated shrimp have about 1*10^-6 times as many neurons as humans, not 0.01% (10^-4).
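As a rough sanity check on that figure, using commonly cited ballpark neuron counts rather than RP's exact inputs:

```python
# Ballpark neuron counts (illustrative, not RP's exact inputs):
shrimp_neurons = 1e5      # on the order of 100,000 neurons, a common figure for decapods
human_neurons = 8.6e10    # roughly 86 billion neurons
print(f"{shrimp_neurons / human_neurons:.1e}")  # ~1.2e-06, i.e. ~0.0001% of human, not 0.01%
```

So the 0.01% figure overstates the ratio by roughly two orders of magnitude.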
All those proxies tell us they have the wiring to feel pain. But what about the self? You need both the side that inflicts the penalty and the side of the self to have real pain: pain must be inflicted on a conscious mind.
With their ridiculously small brains, how likely is a self on the receiving side of the penalty?
Claiming the “quantitative model” doesn’t rely on behavioral proxies is technically true, but I think it misses the point.
The model looks mainly at physiological responses to stimuli. It seems similar to the behavioral model in that if some physiological measure like heart rate, respiration, or stress level changes in a similar way to a human’s, then the animal gets a similar score to a human.
I can see the idea behind behavioral proxies, but I struggle to see what this model really adds. They then seem to bend over backwards somewhat to add other neurological measures that are not neuron count and that will bump up the moral weight as much as possible. “Brain mass to body mass ratio” seems especially strange to me on this front.