I definitely do think welfare ranges can vary across beings, so I’m not thinking in binary terms.
~14 bees to 1 human is indeed after adjusting for the probability of sentience.
Neuron counts are plausibly worse than all of the other proxies precisely because of how large the gaps in welfare range they imply are. The justifications could be bad for most or all of the proxies, and maybe even worse for some of the others than for neuron counts (although I do think some of the proxies are far more justified), but neuron counts could introduce the most bias in the way they’re likely to be used. Even a proxy like whether the animal has a heart, or using no proxies at all, would give more plausible ranges than neuron counts, conditional on sentience (i.e. having a nonzero welfare range at all).
The kinds of proxies I’d use (proxies for the functions of valence and for how those functions vary with hedonic intensity) would probably give results more similar to any of the non-neuron-count models than to the neuron count model (or models with larger gaps). A decent approximation of (a lower bound on) the expected welfare range ratio relative to humans would be the probability that the animal has states of similar hedonic intensity to the most intense in humans, based on behavioural markers of intensity and whether it has the right kinds of cognitive mechanisms. And I can’t imagine assigning tiny probabilities to that, conditional on sentience, based on current evidence (which is mostly missing either way). Bees had an estimated 42.5% probability of sentience in this report, so a 16.7% chance of having similarly intense hedonic states conditional on sentience would give you 14 bees per human. I wouldn’t go lower than 1% or higher than 80% based on current evidence, so 16.7% wouldn’t be that badly off. (This is all assuming the expected number of conscious/valenced systems in any brain is close to 1 or lower, or that their correlation is very low, or that we can ignore that possibility for other reasons.)
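To make the arithmetic explicit, here’s a minimal sketch of the calculation I have in mind (my own illustration, not RP’s actual model); the 42.5% is the report’s sentience estimate for bees, and the 16.7% is the assumed conditional probability that makes the numbers come out to roughly 14 bees per human:

```python
# Rough arithmetic behind "~14 bees per human" (an illustration, not RP's actual model).
# Lower-bound approximation used above:
#   expected welfare range of a bee, as a fraction of a human's
#     ~= P(sentient) * P(hedonic states about as intense as humans' most intense | sentient)

p_sentient = 0.425           # estimated probability of sentience for bees (from the report)
p_similar_intensity = 0.167  # assumed conditional probability of similarly intense states

expected_welfare_ratio = p_sentient * p_similar_intensity  # ~0.071 of a human's welfare range
bees_per_human = 1 / expected_welfare_ratio                 # ~14 bees to 1 human

print(round(expected_welfare_ratio, 3), round(bees_per_human, 1))  # 0.071 14.1
```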
Regarding packets sent between servers: servers are designed to be very reliable, have buffers in case many or large packets arrive within a short period, and so on. I’d guess neural signals compete much more with each other, and at each neuron they reach there’s a non-tiny chance they aren’t passed along, so you get decaying signal strength. Many things don’t make it to your conscious awareness. On the other side, there may be multiple similar signals travelling through multiple paths in a brain, but that also means more competition between distinct signals. Similar signals being sent across multiple paths may also be partly because more neurons directly connected to the periphery are firing, not just because a few neurons each influence a superlinear number of neurons on average.
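As a toy illustration of the decay point (with a made-up per-hop transmission probability, not an empirical figure): if each neuron along a path has some chance of not passing a signal on, the probability the signal survives falls off geometrically with path length, unlike a chain of reliable, buffered servers:

```python
# Toy model only: assumes a fixed, hypothetical probability that each neuron along a
# path passes the signal on. Real neural transmission is far more complicated.

def survival_probability(p_per_hop: float, n_hops: int) -> float:
    """Probability a signal is still propagating after n unreliable hops."""
    return p_per_hop ** n_hops

for n_hops in (1, 5, 10, 20):
    print(n_hops, round(survival_probability(0.9, n_hops), 3))
# 1 0.9
# 5 0.59
# 10 0.349
# 20 0.122  -> most signals never make it to conscious awareness
```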
Neuron counts are plausibly worse than all of the other proxies precisely because of how large the gaps in welfare range they imply are.
If I’m reading this right, you are dismissing neuron counts because of your intuition. You correctly realize that intuitions trump all other considerations, and the game is just to pick proxies that agree with your intuitions and allow you to make them more systematic.
I agree with this approach but strongly disagree with your intuitions. “14 bees = 1 human, after adjusting for probability of sentience” is so CLEARLY wrong that it is almost insulting. That’s my intuition speaking. I’m doing the same thing you’re doing when you dismiss neuron counts, just with a different starting intuition than you.
I think it would be better if the OP was more upfront about this bias.
That’s not what I meant.
I’m not dismissing neuron counts because of my direct intuitions about welfare ranges across species; that would be circular, motivated reasoning and an exercise in curve fitting. I’m dismissing them because the basis for their use seems weak, for reasons explained in posts RP has written and based on my own (vague) understanding of what plausibly determines welfare ranges in functionalist terms. When RP started this project, and for most of the time I spent working on the conscious subsystems report, I actually thought we should use neuron counts by default. I didn’t change my mind about neuron counts because my direct intuitions about relative welfare ranges between specific species changed; I changed my mind because of the arguments against neuron counts.
What I meant in the part you quoted is that neuron counts seem especially biased, where the bias is measured relative to the results of quantitative models that roughly capture my current understanding of how consciousness and welfare ranges actually work, like the one I described in the comment you quoted from. Narrow-range proxies give less biased results (relative to my models’ results) than neuron counts, including proxies with little or no plausible connection to welfare ranges. But I’d just try to build my actual model directly.
How exactly are you thinking neuron counts contribute to hedonic welfare ranges, and how does this relate to your views on consciousness? What theories of consciousness seem closest to your views?
Why do you think 14 bees per human is so implausible?
(All this being said, the conscious subsystems hypothesis might still support the use of neuron counts as a proxy for expected welfare ranges, even if the hypothesis seems very unlikely to me. I’m not sure how unlikely; I have deep uncertainty.)