I remain unconvinced, probably because I mostly care about observer-moments, and don’t really care what happens to individuals independently of this. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can’t yet see what it would look like. You might want to make those ethical intuitions as concrete as you can, and put them under ‘Assumptions’.
It would also increase the number of happy observer-moments globally: there is the happiness of being saved from agony, plus a reduction in the number of Evil AIs, since they would know they will lose and be punished.