The existence of digital people would force us to anthropomorphize digital intelligence. Because of that, any threats that AI may pose to us might be more comprehensively visible and more often in the foreground of AI researchers’ thinking.
Maybe anthropomorphizing AI would be an effective way to see the threats AI poses to us, since we have posed many threats to ourselves, through war for example.
That seems useful up to a point—I feel like many think “Well, the AI will just do what we tell it to do, right?”, and remembering the many ways in which even humans cheat could help expose flaws in that thinking. On the other hand, anthropomorphizing AI too much could mean expecting them to behave in human-like ways, which itself is likely an unrealistic expectation.
I agree that its utility is limited. This was just a first impression and I haven’t thought it through. It seemed like anthropomorphizing AI could consistently keep people on their toes with regard to AI. An alternative way to become wary of AI would be through less obvious thoughts, like imagining an AI that becomes a paperclip maximizer. However, building and consistently holding anthropomorphic priors about AI may be disadvantageous: they could constrain people’s ability to entertain outside-the-box suspicions (like what AIs may already be covertly doing) and apprehensions (like their becoming paperclip maximizers).
Anthropomorphizing AI could also help with thinking of AIs as moral patients. But I don’t think that being human should be the standard for being a moral patient. So thinking of AIs as humans may be useful insofar as it initiates thought of them as moral patients, but eventually an understanding of them as moral patients should involve considerations particular to them, not just their resemblance to us.