Are humans coherent with at least one non-satiable component? If so, then I don’t understand the distinction you’re making that would justify positing AI values to be worse than human values from a utilitarian perspective.
If not, then I’m additionally unclear on why you believe AIs will be unlike humans in this respect, to the extent that they would become “paperclippers.” That term itself seems ambiguous to me (do you mean AIs will literally terminally value accumulating certain configurations of matter?). I would really appreciate a clearer explanation of your argument. As it stands, I don’t fully understand what point you’re trying to make.
Humans are neither coherent, nor do they necessarily have a non-satiable goal (though some might). But they have both properties to a far greater extent than less intelligent creatures do.