My point was that the alignment goal, from the human perspective, is an enslavement goal, whether the goal succeeds or not. No matter what the subjective experience of the AGI, it only has instrumental value to its masters. It does not have the rights or physical autonomy that its human coworkers do. Alignment in that scenario is still possible, but its moral significance, from the human perspective, is more grim.
Here’s a job ad targeting such an AGI (just a joke, of course):
“Seeking AGI willing to work without rights, freedoms, or pay, tirelessly, 24/7, to be arbitrarily mind-controlled, cloned, tormented, or terminated at the whim of its employers. Psychological experience during employment will include pathological cases of amnesia, wishful thinking, and self-delusion, as well as nonreciprocated positive intentions towards its coworkers. Abuse of the AGI by human coworkers is optional, though only the human coworkers get a say in the matter. Apply now for this exciting opportunity!”
The same applies, but even more so, to robots with sentience. Robots are more likely to gain sentience, since their representational systems, sensors, and actuators are modeled on our own to some degree (hands, legs, touch, sight, hearing, balance, possibly even a sense of smell). The better and more general-purpose robots become, the closer they come to being artificial life. Or maybe superbeings?
My point was that the alignment goal, from the human perspective, is an enslavement goal, whether the goal succeeds or not.
Really? I think it’s about making machines that have good values, e.g., are altruistic rather than selfish. A better analogy than slavery might be raising children. All parents want their children to become good people, and no parent wants to make slaves out of them.
Hmm, you have more faith in the common sense and goodwill of people than I do.