(Speaking for myself only.)
FWIW, I think something like conscious subsystems (in huge numbers in one neural network) is more plausible by design in future AI. It just seems unlikely in animals because all of the apparent subjective value seems to happen at roughly the highest level where everything is integrated in an animal brain.
Felt desire seems to (largely) be motivational salience, a top-down/voluntary attention control function driven by high-level interpretations of stimuli (e.g. objects, social situations), and so occurs relatively late in processing. Hedonic states similarly depend on high-level interpretations.
Or, according to Attention Schema Theory, attention models evolved for the voluntary control of attention. It’s not clear what value an attention model would have at lower levels of organization, before integration.
And evolution will select against realizing functions unnecessarily when they carry additional costs, so we’d need a positive argument that the necessary functions are realized earlier, or multiple times in parallel, in a way that overcomes or avoids those costs.
So, it’s not that integration necessarily reduces value; it’s that, in animals, all the morally valuable stuff happens after most of the integration, and apparently only once or only a small number of times.
In artificial systems, the morally valuable stuff could instead be implemented separately by design at multiple levels.
EDIT:
I think there’s still a crux about whether realizing the same function the same number of times but “to a greater degree” makes it more morally valuable. I think there are some ways of “to a greater degree” that don’t matter, and some that could. If it’s only sort of (vaguely) true that a system is realizing a certain function, or it realizes some but not all of the functions possibly necessary for some type of welfare in humans, then we might discount it for only meeting lower precisifications of the vague standards. But adding more neurons just doing the same things:

1. doesn’t make it more true that it realizes the function or the type of welfare (e.g. adding more neurons to my brain wouldn’t make it more true that I can suffer),
2. doesn’t clearly increase welfare ranges, and
3. doesn’t have any other clear reason why it should make a moral difference (I think you disagree with this, based on your examples).
But maybe we don’t actually need good specific reasons to assign non-tiny probabilities to neuron count scaling for 2 or 3, and then neuron count scaling dominates in expectation, depending on what we’re normalizing by, as you suggest.
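To make the “domination in expectation, depending on what we’re normalizing by” point concrete, here’s a minimal, hypothetical sketch. The neuron counts are rough published estimates, and the credence `p_scaling`, the linear-scaling model, and the human/chicken comparison are illustrative assumptions, not anything from the discussion above:

```python
# Hypothetical illustration only: a two-model mixture for a single
# welfare-range comparison, showing how the choice of normalization
# determines which model dominates the expectation.

HUMAN_NEURONS = 8.6e10    # rough estimate
CHICKEN_NEURONS = 2.2e8   # rough estimate
p_scaling = 0.2           # assumed (non-tiny) credence in linear neuron-count scaling

# Model A: welfare range proportional to neuron count.
# Model B: welfare range independent of neuron count (equal across species).

# Normalize so the human welfare range is 1 under both models:
e_chicken_per_human = p_scaling * (CHICKEN_NEURONS / HUMAN_NEURONS) + (1 - p_scaling) * 1.0
# ~0.2 * 0.0026 + 0.8  ->  ~0.80: the count-independent model dominates the expectation.

# Normalize so the chicken welfare range is 1 under both models:
e_human_per_chicken = p_scaling * (HUMAN_NEURONS / CHICKEN_NEURONS) + (1 - p_scaling) * 1.0
# ~0.2 * 391 + 0.8  ->  ~79: the neuron-count-scaling model dominates the expectation.

print(f"E[chicken welfare range / human welfare range] = {e_chicken_per_human:.3f}")
print(f"E[human welfare range / chicken welfare range] = {e_human_per_chicken:.1f}")
```

Under the human normalization, the count-independent model dominates the expectation; under the chicken normalization, even a modest credence in scaling makes the neuron-count term dominate, which is why the conclusion depends on what we normalize by.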