That’s not what I meant.
I’m not dismissing neuron counts because of my direct intuitions about welfare ranges across species. That would be circular, motivated reasoning, and an exercise in curve fitting. I’m dismissing them because the basis for their use seems weak, for reasons explained in posts RP has written and in my own (vague) understanding of what plausibly determines welfare ranges in functionalist terms. When RP started this project, and for most of the time I spent working on the conscious subsystems report, I actually thought we should use neuron counts by default. I didn’t change my mind about neuron counts because my direct intuitions about relative welfare ranges between specific species changed; I changed my mind because of the arguments against neuron counts.
What I meant in the passage you quoted is that neuron counts seem especially biased, where bias is measured relative to the results of quantitative models that roughly capture my current understanding of how consciousness and welfare ranges actually work, like the one I described in the comment you quoted from. Narrow-range proxies give less biased results (relative to my models’ results) than neuron counts do, including proxies with little or no plausible connection to welfare ranges. But I’d just try to build my actual model directly.
How exactly are you thinking neuron counts contribute to hedonic welfare ranges, and how does this relate to your views on consciousness? What theories of consciousness seem closest to your views?
Why do you think 14 bees per human is so implausible?
(All this being said, the conscious subsystems hypothesis might still support the use of neuron counts as a proxy for expected welfare ranges, even if the hypothesis itself seems very unlikely to me. I’m not sure how unlikely; I have deep uncertainty about it.)