Yeah, it’s definitely plausible to me that current LLMs are generally less capable than they are impressive (by some measures of both), and/or that people overestimate their capabilities. It’s also plausible to me that people anthropomorphize LLMs in ways that aren’t warranted. (By “people”, I guess I mean the median Twitter user or the median EA, maybe not the median AI safety or ML researcher.)
Bing definitely “helps” people to over-anthropomorphize it by actively corroborating that it has emotions (via self-report and overuse of emojis), consciousness, etc.
On anti-riddles, I found the Inverse Scaling Prize winners pretty interesting—seems related.