Reading your comment, I arrive at the following patch to my argument: the benevolent AI starts not from S(t), but immediately from many copies of those S(t+1) states which involve much less intense suffering, yet are still similar enough to S(t) to be regarded as its next moment of experience. It is not S(t) that gets diluted, but the next moments of S(t). This removes the need to create many copies of the S(t)-moment itself, which seemed both morally wrong and computationally expensive.
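To make the dilution claim concrete, here is a minimal sketch, assuming (my assumption, not an established result) that an observer's subjective probability of continuation is proportional to the measure of candidate successor moments, with each copy weighted equally. If the benevolent AI creates $N$ low-suffering copies of $S(t+1)$ alongside the single "natural" agonizing successor, then

$$P(\text{next moment is a rescued copy}) = \frac{N}{N+1}, \qquad P(\text{agony continues}) = \frac{1}{N+1} \xrightarrow[N \to \infty]{} 0.$$

On this assumption, the expected remaining duration of agony for that particular observer shrinks as $N$ grows, even though the total count of suffering moments is unchanged.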
On this plan, the FAI cannot decrease the number of suffering moments, but it can create an immediate way out of each such moment. A total utilitarian will not feel the difference, but that is just a theory which was not designed to account for the duration of suffering; for any particular observer, this will be salvation.
I remain unconvinced, probably because I mostly care about observer-moments and don't much care what happens to individuals beyond that. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can't quite see what it would look like yet. You might want to make those ethical intuitions as concrete as you can and put them under 'Assumptions'.
It will also increase the number of happy observer-moments globally, both through the happiness of being saved from agony and by lowering the number of Evil AIs, since they will know they will lose and will be punished.