Since AI systems will likely have a very different cognitive structure than biological humans, it seems quite unlikely that they will develop mental health issues like humans do. There are some interesting things that happen to the characters that large-language models “role-play” as: They switch from helpful to mischievous when the right situation arises.
I could see a future in which AI systems are emulating the behavior of specific humans, in which case they might exhibit behaviors that are similar to the ones of mentally ill humans.
On addiction problems:
If one takes the concept of addiction seriously, wireheading is a failure mode remarkably similar to it.
What would happen if AI developed mental health issues or addiction problems, surely it couldn’t be genetic, ?
On mental health:
Since AI systems will likely have a very different cognitive structure than biological humans, it seems quite unlikely that they will develop mental health issues like humans do. There are some interesting things that happen to the characters that large-language models “role-play” as: They switch from helpful to mischievous when the right situation arises.
I could see a future in which AI systems are emulating the behavior of specific humans, in which case they might exhibit behaviors that are similar to the ones of mentally ill humans.
On addiction problems:
If one takes the concept of addiction seriously, wireheading is a failure mode remarkably similar to it.