Not an expert on this/haven’t read all the prior discourse, but AIs acting like they’re humans seems to be a major cause of “LLM psychosis” today, and it’s implicated in several longer-term worries: it makes people trust AIs more than they should (relevant to AI-takeover concerns), it connects to fears of AIs turning evil via the depictions of scheming AIs in their training text, and it raises the risk of handing over the future to AIs that aren’t actually moral patients. This work might make AIs act more human, or at least be useful for people who want to do that.