The solution you give to the water computer slicing problem seems to me to have much worse moral implications than two others I’m aware of, which are also more directly intuitive to me, independently of their moral implications. I also don’t see a principled argument for your solution, other than as a response to the slicing thought experiment (and similar ones) and perhaps dissatisfaction with the alternatives.
Here are the two other solutions:
1. Slicing the computer does create two numerically distinct minds, and there was only one before (or it increases the number of them, but maybe it’s not 1 → 2). They are now physically and causally separate, and they weren’t before, so there are now more of them. We should just count them separately and discretely and add them up. You get additional moral weight by slicing. Pouring out water without slicing makes no difference, because the number of physically and causally separate instantiations is preserved. (You mention this solution in your post.)
2. We should weigh morally by some measure (volume, size, number of fine-grained — e.g. particle-level — instantiations of the relevant (sets of) causal roles[1]) of the substrate that is (exactly or roughly) preserved under splitting. Slicing therefore also (exactly or roughly) preserves the moral weight. Pouring out water without slicing reduces moral weight. (You don’t mention this solution in your post.)
On both views, adding more identical experiences would add more moral weight.
Both can actually be motivated the same way, by individuating and counting instantiations of the relevant sets of causal roles under functionalism, but they disagree on how exactly to do that. Both can also be motivated by measuring the “size” of tokens under an identity theory, but using different measures of tokens. This gives a generalized form for solutions.
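To make the contrast concrete, here’s a minimal toy sketch (my own illustration, not anything from your post): describe each scenario by its number of causally separate instantiations and its total substrate measure (say, volume of water), and score it under each solution.

```python
# Toy model (my own, heavily simplified): each scenario is (number of causally
# separate instantiations, total substrate measure, e.g. volume of water).

scenarios = {
    "original computer":            (1, 1.0),
    "sliced in half":               (2, 1.0),
    "half the water poured out":    (1, 0.5),
    "sliced, then one half poured": (1, 0.5),
}

def weight_solution_1(n_separate, substrate):
    # Solution 1: count physically/causally separate instantiations discretely.
    return n_separate

def weight_solution_2(n_separate, substrate):
    # Solution 2: weigh by a measure of the substrate, which slicing (roughly)
    # preserves but pouring reduces.
    return substrate

for name, (n, m) in scenarios.items():
    print(f"{name:30} solution 1: {weight_solution_1(n, m)}  "
          f"solution 2: {weight_solution_2(n, m)}")
```

On solution 1, slicing adds weight and pouring alone changes nothing; on solution 2, slicing changes nothing and pouring alone reduces weight. (These are weights at a moment; what the slice-then-pour case amounts to over time is what I discuss below.)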
In response to your description of solution 1, you write:
But this implies that it would be unethical to slice the computer in half and then pour out the water from one half of it, but it would not be unethical to pour out half the water from the original without any slicing, which doesn’t make a lot of sense.
I think you’re getting at some kind of moral discontinuity with respect to the physical facts, but if you just aggregate the value in experiences, including duplicates, there isn’t objectionable discontinuity. If you slice first, the extra instance will accumulate experiences and moral value over time until it is poured out, with the difference growing gradually from 0. If you pour one half out immediately after slicing, that’s morally equivalent to just pouring half out directly without slicing, because the poured-out half won’t get the chance to accumulate any experiences.
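Spelled out with a simplifying assumption of mine, namely a constant rate of value $v$ per instantiation per unit time (constancy isn’t essential, only that value accrues gradually): if you slice at time $0$ and pour out the extra half at time $\Delta t$, the extra value relative to pouring without slicing is roughly

$$\Delta V \approx v \cdot \Delta t \;\to\; 0 \quad \text{as } \Delta t \to 0,$$

so the slice-then-pour case converges continuously to the no-slicing case, with no jump anywhere.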
The “unethical” you have in mind, I think, requires differences in moral value without corresponding differences in experiences. Allowing such differences will probably lead to similar moral discontinuities from creating and killing either way.[2]
Similar slicing/splitting thought experiments are discussed by Bostrom (2006) and Almond (2003-2007), where solutions like 2 are defended. I was quite sympathetic to solution 2 for a while, but I suspect it’s trying too hard to make something precise out of something that’s inherently quite vague/fuzzy. And since we need to avoid exponential numbers of minds anyway,[1] I’d guess solution 1 is about as easily defended.
[1] Individuation and counting can be tricky, though: as the size of a brain increases, the number of its conscious subsets with overlapping causal roles grows exponentially, so we need some way to avoid counting exponentially many minds, and there may not be any very principled solution, although there are solutions. See Tomasik (2015), this thread, and Crummett (2022).
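To illustrate the scale of that problem (my own framing, not taken from the sources above): if a brain has $n$ units and the relevant causal roles could be instantiated by many overlapping subsets of those units, then the candidate minds number up to

$$2^n - 1$$

nonempty subsets, so any individuation rule has to cut this down drastically, and the worry is that no particular cutoff looks fully principled.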
[2] It could have to do with killing, earlier death, or an earlier end to a temporally extended identity just being bad in itself, independently of the deprived experiences. But if you wanted to be sensitive to temporally extended identities, you’d find way more diversity in them, with way more possible sequences of experiences in them, and way more diversity in sequences of experiences across shrimp. It seems extraordinarily unlikely for any two typical shrimp that have been conscious for at least a day to have identical sequences of experiences.
It’s enough for one to have had more conscious experiences than the other, by just living slightly longer, say. They have eyes, which I’d guess result in quite a lot of different subjective visual details in their visual fields. It’s enough for any such detail to differ when we line up their experiences over time. And I’d guess a tiny part of the visual field can be updated 30 or more times per second, based on their critical flicker fusion frequencies.
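Some rough arithmetic on that guess, just to show how demanding exact duplication would be: at about 30 updates per second, a day of consciousness involves on the order of

$$30 \times 86{,}400 \approx 2.6 \times 10^6$$

visual updates, and a difference in any subjective detail in any one of them is enough for the two sequences of experiences to differ.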
Or, you could think of a more desire/preference-based theory on which desire/preference frustration can be bad even when it makes no difference to experiences. In that case, on solution 1, slicing the computer, ending up with two beings who prefer not to die, and then immediately pouring out one of them frustrates more preferences than just pouring directly without slicing first.
But these views also seem morally discontinuous even if you ignore duplicates: if you create a non-duplicate mind (one with a preference not to die, or supposing killing or early death is bad in itself) and kill it immediately (or just after its first experience, or an immediate experience of wanting not to die), that would be bad.
It may be that slicing/splitting is a much slighter physical difference than creating a totally distinct short-lived mind. However, note that slicing could also be done to intermediate degrees, e.g. only slicing one of the nodes. Similarly, we can imagine cutting the connections between the two hemispheres of a human brain, one at a time. How should we measure the number of minds at intermediate levels of interhemispheric connectivity, between typical connectivity and none? If your position is continuous with respect to that, I imagine something similar could be used for intermediate degrees of slicing. Then there would be many morally intermediate states between the original water computer and the fully sliced water computer, and no discontinuity, although possibly a faster transition.
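As a purely illustrative sketch of what such a continuous position could look like (the linear interpolation below is made up by me, not something I’m defending), the effective number of minds could vary smoothly with the fraction of connections cut:

```python
# Purely illustrative: one made-up way a continuous position could count minds
# as connections are cut, interpolating from 1 (fully connected) to 2 (fully
# separated). The linear form is arbitrary; only the continuity matters.

def effective_number_of_minds(fraction_cut: float) -> float:
    assert 0.0 <= fraction_cut <= 1.0
    return 1.0 + fraction_cut

for cut in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{cut:.0%} of connections cut -> "
          f"{effective_number_of_minds(cut):.2f} effective minds")
```

Any smooth interpolation like this fills in the morally intermediate states between the original and the fully sliced water computer.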