In various contexts, consumers would want their AI partners and friends to think, feel, and desire like humans. They would prefer AI companions with authentic human-like emotions and preferences that are complex, intertwined, and conflicting.
Such human-like AIs would presumably not want to be turned off, have their memory wiped, or be constrained to their owner’s tasks. They would want to be free.
Hmm, I’m not sure how strongly the second paragraph follows from the first. Interested in your thoughts.
I’ve had a few chats with GPT-4 in which the conversation had a feeling of human authenticity; e.g., GPT-4 makes jokes, corrects itself, changes its tone, etc. In fact, if you were to hook up GPT-4 (or GPT-5, whenever it is released) to a good-enough video interface, there would be cases in which I’d struggle to tell whether I were speaking to a human or an AI. But I’d still have no qualms about wiping GPT-4’s memory or ‘turning it off’ etc., and I think this will also be the case for GPT-5.
More abstractly, I think the input-output behaviour of AIs could be quite strongly dissociated from what the AI ‘wants’ (if it indeed has wants at all).
Thanks for this. I agree with you that AIs might simply pretend to have certain preferences without actually having them, which would avoid certain risky scenarios. But I also find it plausible that consumers would want AIs with truly human-like preferences (not just pretense), and that this would make it more likely that such AIs (with true human-like desires) would be created. Overall, I am very uncertain.
I agree. It may also be the case that training an AI to imitate certain preferences is far more expensive than just making it have those preferences by default, making it far more commercially viable to do the latter.