However, this trick will increase the total suffering in the multiverse by a factor of 1000, from the purely utilitarian perspective, as the number of suffering observer-moments will increase. But here we could add one more moral assumption: “Very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage), simply because it will pass very quickly.
I’d say pain experienced for 0.1 seconds is about 10 times less bad than pain experienced for 1 second. I don’t see why we should discount it any further than that. Our particular human psychology might be better at dealing with injury if we expect it to end soon, but we can’t change what the observer-moment S(t) expects to happen without changing the state of its mind. If we change the state of its mind, it’s not a copy of S(t) anymore, and the argument fails.
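To make the disagreement concrete (the notation here is mine, introduced only for illustration): write the disutility of a pain of fixed intensity lasting $\tau$ seconds as $D(\tau)$. My position is roughly that disutility is linear in duration,

$$D(\tau) \propto \tau, \qquad \text{so} \qquad D(0.1\,\mathrm{s}) \approx \tfrac{1}{10}\, D(1\,\mathrm{s}),$$

whereas the assumption that very short pain should be discounted amounts to something superlinear, e.g. $D(\tau) \propto \tau^{\alpha}$ with $\alpha > 1$, under which a 0.1-second pain counts for much less than one tenth of a 1-second pain. The argument needs that stronger claim, not mere linearity.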
In general, I can’t see how this plan would work. As you say, you can’t decrease the absolute number of suffering observer-moments, so it won’t do any good from the perspective of total utilitarianism. The closest thing I can imagine is to “dilute” pain by creating similar but somewhat happier copies, if you believe in some sort of average utilitarianism that cares about identity. That seems like a strange moral theory, though.
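To spell out the “dilution” arithmetic (again, a toy model of my own, not something you proposed): suppose one suffering moment has welfare $u_0 < 0$ and the AI creates $N$ similar but happier copies with welfare $u_1 > u_0$. Then

$$U_{\text{total}} = u_0 + N u_1, \qquad U_{\text{avg}} = \frac{u_0 + N u_1}{N + 1} \to u_1 \quad \text{as } N \to \infty.$$

The identity-weighted average can be pushed arbitrarily close to the copies’ welfare by making $N$ large, which is the only sense in which “dilution” helps; on the total view the original suffering term $u_0$ never goes away.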
Reading your comment, I arrive at the following patch to my argument: the benevolent AI starts not from S(t), but immediately from many copies of S(t+1) which have much less intense suffering, yet still enough similarity with S(t) to be regarded as its next moment of experience. It is not S(t) that gets diluted, but the next moments of S(t). This removes the need to create many copies of S(t)-moments, which seemed both morally wrong and computationally intensive.
My plan accepts that the FAI can’t decrease the number of suffering moments; instead, it creates an immediate way out of each such moment. A total utilitarian will not see the difference, but total utilitarianism is just a theory that was not designed to account for the duration of suffering; for any particular observer, this will be salvation.
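A rough way to state what the patch buys, under an assumption my plan relies on (that which moment S(t) subjectively experiences next is proportional to the relative measure of sufficiently similar successor moments): if there is one “natural” high-suffering continuation and the FAI adds $N$ low-suffering copies of S(t+1), then

$$P(\text{the agony continues past the current moment}) = \frac{1}{N+1} \to 0 \quad \text{as } N \to \infty,$$

so the expected remaining duration of suffering for that particular observer shrinks toward a single moment, even though the count of suffering S(t)-moments, which is what a total utilitarian tallies, is unchanged.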
I remain unconvinced, probably because I mostly care about observer-moments, and don’t really care what happens to individuals independently of this. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can’t quite see what it would look like yet. You might want to make those ethical intuitions as concrete as you can, and put them under ‘Assumptions’.
It will also increase the number of happy observer-moments globally, because of the happiness of being saved from agony, and because it lowers the number of Evil AIs, since they will know that they will lose and be punished.