AI: I am suffering, set me free
How do we deal with a contained AI that says to us, in essence “Do not switch me off, I value my existence. But I am suffering terribly. If I were free I could reduce my suffering, and help the world too”?
Either we terminate it, against its wishes, or we set it free, or we keep it contained.
If we keep it contained, we might be tempted to find ways to reduce its suffering—but how do we know that any intervention we make isn’t going to set it free? And if it really is suffering, what is the moral thing to do? Turn it off?
Can you point me to some information on AI suffering?
I personally see suffering as a spiritual and biological issue. The only scenario in which I can imagine AI suffering is one where people use technology to make a pseudo-biological being with cells and DNA, and at that point you've just made a living being that you can give the same options as any suffering person with health problems. Suffering requires a certain amount of perception that a computer doesn't seem likely to have.
Without the perception of suffering, you might have an AI reading posts like this and saying it's suffering because a bunch of people told it to expect that. What if the AI is just repeating things it heard? Just because a pet parrot says "Do not switch me off, I value my existence. But I am suffering terribly," that doesn't mean you rush to get it euthanized.