Hi Luke, thanks for your comment!
I agree with you about overlap and individuation. We decided to stick with this presentation for simplicity and brevity.
Some thoughts, speaking only for myself and not my co-authors:
I would treat the indeterminacy, and the question of what kind of overlap to allow, as partly normative, and therefore partly a matter of normative intuition and subject to normative uncertainty. If you assign weights to different ways of counting subsystems that give precise estimates (including precisifications of imprecise approaches), you can use a method for handling normative uncertainty to guide action (e.g. one of the methods discussed at https://www.moraluncertainty.com/ ).
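To make that concrete, here's a minimal sketch in Python of expected-value-style aggregation over counting rules. This is just one of the approaches to normative uncertainty discussed at that link, and the counting rules, normalizations and credences below are hypothetical placeholders of my own, not estimates from the report.

```python
# Minimal sketch of one way to act under normative uncertainty about how to
# count conscious subsystems: give each precise counting rule a credence and
# take the credence-weighted expectation. The rules and credences below are
# hypothetical placeholders, and expectation-taking is only one of several
# approaches to normative uncertainty.

def one_per_brain(n_neurons):
    return 1.0                        # a single conscious subsystem per brain

def neuron_proportional(n_neurons):
    return n_neurons / 86e9           # scales with neurons, human-normalized

def unconstrained_overlap(n_neurons):
    return 2.0 ** (n_neurons / 1e10)  # toy rule that explodes without constraints

credences = {one_per_brain: 0.3, neuron_proportional: 0.6, unconstrained_overlap: 0.1}

def expected_subsystem_count(n_neurons):
    """Credence-weighted expected number of conscious subsystems."""
    return sum(c * rule(n_neurons) for rule, c in credences.items())

print(expected_subsystem_count(86e9))   # roughly human-scale brain
print(expected_subsystem_count(257e9))  # roughly elephant-scale brain
```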
While I actually expect some overlap to be allowed, I think reasonable constraints that rule out what looks to me like counterintuitive double counting will give you something that scales at most roughly proportionally with the number of neurons, if you pick the largest number of conscious subsystems of a brain you can get while respecting those constraints. Conditional on a set of precise constraints and this rule of picking the largest number, this leaves no indeterminacy (other than more standard empirical or logical uncertainty), but you can still have normative uncertainty about the constraints and/or the rule. As one potential constraint, if you have A1, A2 and A1+A2, you could count any two of them, but not all three together. Or you could cluster the candidate subsystems by degree of overlap, with some arbitrary sharp cutoff, and pick one representative from each cluster. Or you could pick non-overlapping subsets of neurons to individuate the conscious subsystems, so that each neuron helps individuate at most one conscious subsystem, but each neuron can still be part of and contribute to multiple conscious subsystems. You could also have function-specific constraints.
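As a toy illustration of the "largest number consistent with the constraints" rule, here's a sketch using my own precisification of the first constraint (don't count a subsystem together with disjoint counted parts that exactly compose it); the candidate subsystems and neuron labels are made up.

```python
# Toy sketch (my own precisification, not from the post) of "pick the largest
# number of conscious subsystems consistent with a constraint". The constraint
# coded here: don't count a candidate subsystem together with disjoint counted
# parts whose union is exactly that subsystem (e.g. A1, A2 and A1+A2 can't all
# count at once).

from itertools import combinations

# Hypothetical candidate subsystems, each a set of toy neuron labels.
candidates = {
    "A1":    {1, 2, 3},
    "A2":    {4, 5, 6},
    "A1+A2": {1, 2, 3, 4, 5, 6},
}

def no_whole_plus_parts(collection):
    """Reject collections containing a subsystem alongside disjoint
    subsystems whose union is exactly that subsystem."""
    sets = [candidates[name] for name in collection]
    for i, whole in enumerate(sets):
        others = [s for j, s in enumerate(sets) if j != i]
        for r in range(2, len(others) + 1):
            for parts in combinations(others, r):
                union = set().union(*parts)
                disjoint = sum(len(p) for p in parts) == len(union)
                if disjoint and union == whole:
                    return False
    return True

def largest_admissible_collection():
    """Largest set of candidates satisfying the constraint."""
    for size in range(len(candidates), 0, -1):
        for collection in combinations(candidates, size):
            if no_whole_plus_parts(collection):
                return collection
    return ()

print(largest_admissible_collection())  # ('A1', 'A2'): any two of the three, never all three
```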
Furthermore, without such constraints, you may end up with huge and predictable differences in expected welfare ranges even among typically developed humans, and possibly with whale and elephant interventions beating human-targeted interventions in the near term (because those animals have far more neurons per individual), despite how few whales and elephants would be affected per $ on average. This seems very morally counterintuitive to me, though that reaction is largely based on intuitions that presupposed there not being such huge differences in the number of conscious subsystems in the first place.
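To illustrate the kind of flip I have in mind, here's some purely illustrative arithmetic: the neuron counts are rough published figures, but the individuals-helped-per-dollar numbers and the unconstrained counting rule are made-up placeholders, not estimates.

```python
# Purely illustrative arithmetic with placeholder numbers, comparing near-term
# cost-effectiveness when welfare range scales at most roughly proportionally
# with neurons (constrained) versus blowing up with unconstrained overlapping
# subsystems.

NEURONS = {"human": 86e9, "elephant": 257e9}           # rough published figures
helped_per_dollar = {"human": 1e-3, "elephant": 1e-5}  # hypothetical placeholders

def welfare_range(species, constrained=True):
    if constrained:
        # At most roughly proportional to neuron count (human-normalized).
        return NEURONS[species] / NEURONS["human"]
    # Toy unconstrained rule: explodes with neuron count (purely illustrative).
    return 2.0 ** (NEURONS[species] / 1e10)

for constrained in (True, False):
    scores = {s: helped_per_dollar[s] * welfare_range(s, constrained)
              for s in NEURONS}
    best = max(scores, key=scores.get)
    print(f"constrained={constrained}: {scores} -> prioritize {best}")
```

With these placeholder numbers, the constrained (roughly neuron-proportional) version still favors the human-targeted intervention, while the unconstrained version flips to the elephant intervention despite far fewer individuals helped per dollar.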
On the two-hemispheres case, we have a report on phenomenal unity coming out soon that will discuss it. In this context, I'll just say that one or two extra conscious subsystems, or even a doubling or tripling of the number (in case there would otherwise already be many), wouldn't make much difference to prioritization between species on the basis of the number of conscious subsystems alone, and we wanted to focus on cases where individuals have suggested very large gaps between species.