(I work at RP and reviewed parts of this work, but am not a co-author for this report and am not speaking for RP or the authors.)
That’s not any better! You’re merely hiding your intuitions behind a complex model. But the inputs to your model are no better than the inputs to mine! Your inputs are things like “bees play”, which I’m already inputting into my intuition (along with many other facts your model cannot take into account). You’re weighing all proxies equally; my intuition uses a very skewed weighted average of the proxies. But the uniform distribution is not special! It’s just as arbitrary!
I think this is partly why they considered so many different models, with different sets of proxies, as well as the grouped proxies model. This effectively represents a range of different possible weights, although it might not cover your views.
Would it help if I coded up a simple model which took in “bees play but don’t express love” and the rest of the list, and outputted 0.00000001? We surely both agree that I can do it. What makes you confident your model is more justified, if not your own intuitions?
I may be misunderstanding, but because it seems you’ve already decided on the answer (0.00000001), I’d worry about motivated reasoning. If you were going to make a model, I’d recommend a first pass without looking at how the criteria (or model outputs) differ between species, while trying to give plausible accounts for why particular criteria matter and why they matter as much as you think they should. Of course, the fact that bees can or can’t do something could also be evidence about the value of the criteria. For example, maybe some criterion you thought was strong evidence for a particular cognitive mechanism is met by bees, but you have substantial reason to believe bees lack this cognitive mechanism, so you could be justified in reducing the weight you give that proxy after learning bees meet the criterion (or just using the cognitive mechanism directly as a criterion). Ideally, you should be able to justify such changes to your weights with a better story for why they were wrong before than just “bees do it”, which I think would be motivated reasoning.
I would definitely be interested in hearing which proxies you’d include, how you’d weigh them and why. A model might be useful, although I think the reasoning would be the most useful part.
Personally, when I think of the intensity of suffering and pleasure, the kinds of (largely vague) accounts I have in mind depend on attention and prioritization, or can use those as proxies scaling roughly monotonically with intensity (similar to the Welfare Footprint Project’s definitions for levels of pain: annoying, hurtful, disabling and excruciating). Humans don’t seem particularly special here for excruciating pain and extreme fear (e.g. torture), i.e. I expect mammals and birds to respond with attention and prioritization similar to humans.
I’m not sure either way whether bees, while in their most intense states, would attend to and prioritize them to the same degree we do ours, so they might have narrower hedonic ranges if they don’t, but I don’t think we can rule it out. The mechanisms may be different, even very different, but those differences might not matter, and they won’t necessarily favour humans over bees rather than bees over humans. So, it just doesn’t seem extremely unlikely to me that bees have hedonic ranges similar to ours. Also, the extremes for humans might not even be very relevant, given that little (neartermist) EA funding seems to be targeted at them.
Two other ways bees might have substantially narrower (expected) hedonic ranges than humans are based on conscious subsystems (I’m a co-author on that post, and it can be roughly captured by the neuron count model here) and on the just noticeable differences model (assuming they do have fewer JNDs), or possibly some combination thereof, possibly under different scaling laws than here.
It’s hard for me to imagine why other things would matter much for hedonic ranges. At least, I haven’t come across any other plausible arguments that favoured humans by orders of magnitude.
I would definitely be interested in hearing which proxies you’d include, how you’d weigh them and why. A model might be useful, although I think the reasoning would be the most useful part.
Personally, when I think of the intensity of suffering and pleasure, the kinds of (largely vague) accounts I have in mind depend on attention and prioritization, or can use those as proxies scaling roughly monotonically with intensity (similar to the Welfare Footprint Project’s definitions for levels of pain: annoying, hurtful, disabling and excruciating). Humans don’t seem particularly special here for excruciating pain and extreme fear (e.g. torture), i.e. I expect mammals and birds to respond with attention and prioritization similar to humans.
Well, first of all, we should be very uncertain that species sufficiently far from us are even capable of suffering at all. I take it as self-evident that suffering is in the brain, a property of neurons; however, I also think that small machine learning models clearly don’t experience suffering, so “having something that’s kinda maybe like neurons” is not sufficient.
Bees have fewer than a million neurons and likely fewer “parameters” than even small LLMs like BERT-base. Moreover, a lot of those neurons are likely used for controlling the wings and for implementing hardcoded algorithms like “build a hex-tiled beehive”.
I don’t believe a single neuron, by itself, can experience pain. It’s an emergent phenomenon. But the fact that it’s emergent suggests it might even scale super-linearly with the number of neurons (e.g. perhaps quadratically, for the number of possible interactions between neurons, though that’s far from certain). I find the assumption of sub-linear scaling (or no scaling at all) to be particularly weird.
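To make the scaling question concrete, here is a minimal sketch in Python (illustrative only: the neuron counts are rough, commonly cited figures of roughly a million for a honeybee and 8.6e10 for a human, and the exponents are arbitrary examples) of how the implied bee-to-human ratio moves with the assumed scaling exponent on neuron counts:

```python
# Illustrative only: rough neuron counts and arbitrary example exponents.
BEE_NEURONS = 1e6        # honeybee, order of magnitude ("fewer than a million")
HUMAN_NEURONS = 8.6e10   # commonly cited estimate for the human brain

for label, k in [("sublinear (k=0.5)", 0.5), ("linear (k=1)", 1.0), ("quadratic (k=2)", 2.0)]:
    ratio = (BEE_NEURONS / HUMAN_NEURONS) ** k
    print(f"{label:17s} implied bee:human ratio ~ {ratio:.1e}")
```

Whether any of these exponents is the right one is, of course, exactly what is in dispute.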
Apart from the neuron count, there’s the issue that bees are so evolutionarily far away that they basically developed their intelligence independently of us. Our common ancestor with them was an early bilaterian -- a primitive worm with few organs. The split likely happened soon after the development of memory, which itself was soon-ish after the evolution of the brain. The worms in question were so primitive that one line started crawling backwards, eating out of its anus and pooping out of its mouth -- this flip actually defines the deuterostome vs. protostome split (we’re deuterostomes, bees are protostomes). It’s not clear that “has subjective feelings” dates back to such early brain designs. If it arose later, then bees are either not conscious or have independently-evolved consciousness, which would be nearly impossible to reason about.
I don’t see why “juvenile bees sometimes roll a ball” (which gets translated into “bees play”) should weigh so significantly in our considerations here. It’s kind of ridiculous. Either you think bees should be weighted highly, or you think they should be given little weight, but why is “juveniles roll a ball!” such strong evidence of the amount of pain or pleasure they feel?
Yet “roll a ball” (and a few similar proxies) is the only thing the OP is going by. There’s nothing else in the model!
Ah, I’d also recommend Bob’s Don’t Balk at Animal-friendly Results in this series.
Well, first of all, we should be very uncertain that species sufficiently far from us are even capable of suffering at all. I take it as self-evident that suffering is in the brain, a property of neurons; however, I also think that small machine learning models clearly don’t experience suffering, so “having something that’s kinda maybe like neurons” is not sufficient.
Bees have fewer than a million neurons and likely fewer “parameters” than even small LLMs like BERT-base. Moreover, a lot of those neurons are likely used for controlling the wings and for implementing hardcoded algorithms like “build a hex-tiled beehive”.
I agree with all of this, although I think this bears primarily on whether they’re sentient at all, not really on their hedonic welfare range conditional on sentience. I don’t think bees are extremely unlikely to be sentient based on the evidence I have seen and my intuitions about consciousness.
I don’t believe a single neuron, by itself, can experience pain. It’s an emergent phenomenon. But the fact that it’s emergent suggests it might even scale super-linearly with the number of neurons (e.g. perhaps quadratically, for the number of possible interactions between neurons, though that’s far from certain). I find the assumption of sub-linear scaling (or no scaling at all) to be particularly weird.
I used to believe that hedonic welfare ranges should very probably scale (sublinearly) with neuron counts, but I’ve become pretty skeptical that they should scale with neuron counts at all based on RP’s work:
Adam Shriver’s post on (mostly against) neuron counts.
Our post on (mostly against) conscious subsystems.
I’d recommend those posts. I also have some more thoughts against superlinear scaling in particular relative to sublinear scaling not covered directly in those two posts, but I’ll put them in a reply to this comment, which is already very long.
It’s not clear that “has subjective feelings” dates back to such early brain designs. If it arose later, then bees are either not conscious or have independently-evolved consciousness, which would be nearly impossible to reason about.
I would guess that it did arise independently after our last common ancestor if bees are conscious (similarly for cephalopods). I agree that it makes it much harder to reason about, but I don’t think this gives us more reason to believe that bees have (much) narrower ranges than that they have (much) larger ranges, conditional on their capacity to suffer or experience pleasure at all. Incomparability is another possibility.
Also, you can be guided by more general accounts of or intuitions about consciousness and suffering, or even rough candidate functionalist definitions of hedonic intensity, e.g. trying to generalize Welfare Footprint Project’s.
I don’t see why “juvenile bees sometimes roll a ball” (which gets translated into “bees play”) should weigh so significantly in our considerations here. It’s kind of ridiculous. Either you think bees should be weighted highly, or you think they should be given little weight, but why is “juveniles roll a ball!” such strong evidence of the amount of pain or pleasure they feel?
Yet “roll a ball” (and a few similar proxies) is the only thing the OP is going by. There’s nothing else in the model!
I’m pretty sympathetic to this, and I’m not very sympathetic to most of the models, other than the neuron count model, the JND model and the equality model. It’s hard for me to see why most of the proxies used would matter, conditional on sentience.
I could imagine “play behavior” mattering if we could code its intensity, either absolutely or relatively, similar to Welfare Footprint Project definitions for pain levels, but this isn’t really what the models here do, and I’d imagine displaying play behavior not really telling us much about intensity anyway. I think panic-like behavior could be a decent indicator for pretty intense suffering (disabling and maybe even excruciating according to WFP) conditional on sentience, so I’d probably give animals without it much narrower welfare ranges.
PTSD-like behavior could be another, but I’d give it less weight, since it seems more likely to be biased either way.
Thanks for your reply. I agree with much of what you write. Below are some disagreements.
I agree with all of this, although I think this bears primarily on whether they’re sentient at all, not really on their hedonic welfare range conditional on sentience. I don’t think bees are extremely unlikely to be sentient based on the evidence I have seen and my intuitions about consciousness.
This seems to be framing consciousness as a binary, a yes/no. That sounds wrong; many people view it as a sliding scale, and some of your links talk about “more valenced consciousness” etc.
In any event, I understood the 14 bees = 1 human to be after accounting for the low chance of bee sentience. Did I misunderstand? The summary figure lists various disclaimers, but it notably does NOT say “conditioned on sentience”.
I used to believe that hedonic welfare ranges should very probably scale (sublinearly) with neuron counts, but I’ve become pretty skeptical that they should scale with neuron counts at all based on RP’s work:
One could write equally convincing arguments against “do juveniles roll a ball” as a proxy. Neuron counts are bad and have many flaws; “do they roll a ball” is WORSE and has MORE flaws. That remains the case if you add 100 other subjective proxies, all wishy-washy things, all published by bee researchers eager to tell you bees are amazing.
It remains the case that neuron counts are more objective than any of the other proxies. It also is the case that NOT using neuron counts is a guarantee of not getting tiny estimates: there’s no clear way to combine 100 yes/no proxies and get an answer like “one-millionth of a human”. Only with neuron counts can you get this. I could tell you that even before looking at your results: your methodology eliminates a whole range of answers a priori.
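A hypothetical toy aggregation (not the report’s actual method) illustrates the point about the a priori floor: averaging k equally weighted yes/no proxies can never output anything below 1/k, while a raw neuron-count ratio can be orders of magnitude smaller.

```python
# Hypothetical toy model, not the report's method: an equal-weight average of
# binary proxies has a hard floor of 1/k, unlike a neuron-count ratio.
def proxy_score(hits: int, total: int = 100) -> float:
    """Fraction of yes/no proxies satisfied."""
    return hits / total

print(proxy_score(1))    # 0.01, the smallest nonzero value this model can produce
print(1e6 / 8.6e10)      # ~1.2e-05, a neuron-count ratio goes far lower
```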
(Also, the argument that bees are amazing is used in Shriver’s post, which makes this discussion circular: he doesn’t want to use neuron counts because it underestimates bees (according to intuition, I guess).)
Humans don’t seem to have many times more synapses per neuron than bees (1,000 to 7,000 in human brains vs ~1,000 in honeybee brains, based on data in [1] and [2]), so the number of direct connections between neurons is close-ish to proportional with neuron counts between humans and bees. We could have many times more indirect connections per neuron through paths of connections, but the influence from one neuron on another it’s only indirectly connected to should decrease with the lengths of paths from the first to the second, because the signal has to make it farther and compete with more signals. This doesn’t rule out superlinear scaling, but can limit it.
Compare: if each server on the internet is connected to only 10 other servers on average, does it hold that each user of the internet can only reach a constant number of websites?
No: the graph is an expander, and if there are n nodes, the distance between any two nodes may be as little as O(log n), even if the degree of each node is constant. Hence, via a few hops on the graph, a node may talk to many other nodes (potentially even all of them).
Neural networks (whether artificial or natural) can certainly cause interaction between far away neurons, and the O(log n) distance does not necessarily mean the signal dies. This is similar to how my words reach you despite the packets passing through many servers along the way.
I don’t know for sure that this is how the brain works. However, I find it plausible. I also note that the human brain achieves an incredible amount of intelligence, so certain impressive interactions between neurons are definitely taking place.
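A quick empirical sketch of the O(log n) claim, using networkx’s random regular graphs as a stand-in for an expander (the degree of 10 matches the hypothetical above):

```python
# Sketch: in a random 10-regular graph, the farthest node from a source is only a
# handful of hops away, growing roughly like log(n) / log(d - 1).
import math
import networkx as nx

for n in [1_000, 10_000, 100_000]:
    G = nx.random_regular_graph(10, n, seed=0)
    dists = nx.single_source_shortest_path_length(G, 0)  # BFS from node 0
    print(n, "max hops:", max(dists.values()), " log_9(n) ~", round(math.log(n, 9), 1))
```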
I definitely do think welfare ranges can vary across beings, so I’m not thinking in binary terms.
~14 bees to 1 human is indeed after adjusting for the probability of sentience.
Neuron counts are plausibly worse than all of the other proxies precisely because of how large the gaps in welfare range they imply are. The justifications could be bad for most or all proxies, and maybe even worse for some of the others than for neuron counts (although I do think some of the proxies are far more justified), but neuron counts could introduce the most bias the way they’re likely to be used. Using “whether or not they have a heart” as the only proxy, or literally no proxies at all, would give more plausible ranges than neuron counts, conditional on sentience (having a nonzero welfare range at all).
The kinds of proxies for the functions of valence and how they vary with hedonic intensity I’d use would probably give results more similar to any of the non-neuron count models than to the neuron count model (or models with larger gaps). A decent approximation (a lower bound) of the expected welfare range ratio over humans would be the probability that the animal has states of similar hedonic intensity to the most intense in humans, based on behavioural markers of intensity and whether they have the right kinds of cognitive mechanisms. And I can’t imagine assigning tiny probabilities to that, conditional on sentience, based on current evidence (which is mostly missing either way). For bees, they had an estimated 42.5% probability of sentience in this report, so a 16.7% chance of having similarly intense hedonic states conditional on sentience would give you 14 bees per human. I wouldn’t go lower than 1% or higher than 80% based on current evidence, so 16.7% wouldn’t be that badly off. (This is all assuming the expected number of conscious/valenced systems in any brain is close to 1 or lower, or their correlation is very low or we can ignore that possibility for other reasons.)
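For concreteness, here is the arithmetic implied by the numbers above (the probabilities are the ones stated in this comment, not independent estimates): the expected welfare range relative to humans is roughly P(sentience) times P(similarly intense states given sentience), and its reciprocal gives the number of bees per human.

```python
# Arithmetic check of the figures quoted above; probabilities are those stated
# in the comment, not independent estimates.
p_sentience = 0.425                    # bee sentience probability from the report, per the comment
for p_intense in (0.01, 1 / 6, 0.80):  # stated lower bound, ~16.7% central value, upper bound
    expected_range = p_sentience * p_intense
    print(f"P(similar intensity | sentient) = {p_intense:.3f} -> ~{1 / expected_range:.0f} bees per human")
# Prints roughly 235, 14 and 3 bees per human for the three values above.
```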
With respect to packets sent along servers: servers are designed to be very reliable, have buffers in case of multiple or large packets received within a short period, and so on. I’d guess neural signals would compete much more with each other, and at each neuron they reach they’d have a non-tiny chance of not being passed along, so you get decaying signal strength. Many things don’t make it to your conscious awareness. On the other hand, there may be multiple similar signals through multiple paths in a brain, but that means more competition between distinct signals, too. Similar signals being sent across multiple paths may also be in part because more neurons directly connected to the periphery are firing, not just because a few neurons each influence a superlinear number of neurons on average.
Neuron counts are plausibly worse than all of the other proxies precisely because of how large the gaps in welfare range they imply are.
If I’m reading this right, you are dismissing neuron counts because of your intuition. You correctly realize that intuitions trump all other considerations, and the game is just to pick proxies that agree with your intuitions and allow you to make them more systematic.
I agree with this approach but strongly disagree with your intuitions. “14 bees = 1 human, after adjusting for probability of sentience” is so CLEARLY wrong that it is almost insulting. That’s my intuition speaking. I’m doing the same thing you’re doing when you dismiss neuron counts, just with a different starting intuition than you.
I think it would be better if the OP was more upfront about this bias.
That’s not what I meant.
I’m not dismissing neuron counts because of my direct intuitions about welfare ranges across species. That would be circular, motivated reasoning and an exercise in curve fitting. I’m dismissing them because the basis for their use seems weak, for reasons explained in posts RP has written and my own (vague) understanding of what plausibly determines welfare ranges in functionalist terms. When RP started this project and for most of the time I spent working on the conscious subsystems report, I actually thought we should use neuron counts by default. I didn’t change my mind about neuron counts because my direct intuitions about relative welfare ranges between specific species changed; I changed my mind because of the arguments against neuron counts.
What I meant in what you quoted is that neuron counts seem especially biased, where the biases are measured relative to the results of quantitative models roughly capturing my current understanding of how consciousness and welfare ranges actually work, like the one I described in the comment you quoted from. Narrow range proxies give less biased results (relative to my models’ results) than neuron counts, including such proxies with little or no plausible connection to welfare ranges. But I’d just try to build my actual model directly.
How exactly are you thinking neuron counts contribute to hedonic welfare ranges, and how does this relate to your views on consciousness? What theories of consciousness seem closest to your views?
Why do you think 14 bees per human is so implausible?
(All this being said, the conscious subsystems hypothesis might still support the use of neuron counts as a proxy for expected welfare ranges, even if the hypothesis seems very unlikely to me. I’m not sure how unlikely; I have deep uncertainty.)
Some thoughts against superlinear scaling in particular relative to sublinear scaling not covered directly in those two posts:
If we count multiple conscious subsystems in a brain, even allowing substantial overlap between them, in order to get superlinear scaling (that is, scaling substantially faster than linear), that seems likely to imply “double counting” valenced experiences, and my guess is that this would get badly out of hand, e.g. into exponential territory, which would also have counterintuitive implications. I discuss this here.
Humans don’t seem to have many times more synapses per neuron than bees (1,000 to 7,000 in human brains vs ~1,000 in honeybee brains, based on data in [1] and [2]), so the number of direct connections between neurons is close-ish to proportional with neuron counts between humans and bees. We could have many times more indirect connections per neuron through paths of connections, but the influence from one neuron on another it’s only indirectly connected to should decrease with the lengths of paths from the first to the second, because the signal has to make it farther and compete with more signals. This doesn’t rule out superlinear scaling, but can limit it.
A brain duplication thought experiment here.
Multiple other arguments here.
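Finally, a rough numerical illustration of the synapses-per-neuron point above, using round numbers from the ranges quoted there (assumptions for illustration, not measurements):

```python
# Round numbers taken from the ranges quoted above (assumptions, not data):
# bees ~1e6 neurons at ~1,000 synapses each; humans ~8.6e10 neurons at up to ~7,000.
bee_synapses = 1e6 * 1e3
human_synapses = 8.6e10 * 7e3    # upper end of the 1,000-7,000 range

print("neuron-count ratio: ", 8.6e10 / 1e6)                   # ~8.6e4
print("synapse-count ratio:", human_synapses / bee_synapses)  # ~6.0e5, only ~7x the neuron ratio
```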