I had a similar question. Well stated. One answer is the various arguments that "sentient valenced AGIs won't maximise their own happiness," as noted by other commenters.

But I don't find that satisfying, because most of the arguments (AFAIK) and appeals against AI risk don't even mention this. So I think the appeal draws on our feeling that "even if AIs take over and make themselves super happy with all the paper clips, that still feels bad."