I think the attitudes described are a reaction, perhaps an overreaction, to people being overly impressed by mundane behaviors of ChatGPT.
For example, I’ve seen people on Twitter who are impressed that ChatGPT can “solve the Monty Hall problem”. This is actually a fairly mundane achievement, given that there are probably thousands of detailed walkthroughs of the Monty Hall problem out on the internet. This is a good example of parrot behavior being mistaken for complex thought.
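(For reference, the standard result those walkthroughs converge on is easy to verify yourself; here's a quick Monte Carlo sketch of the textbook game, nothing from ChatGPT's output:)

```python
import random

def play(switch: bool) -> bool:
    """One round of the classic Monty Hall game; returns True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that hides a goat and isn't the player's pick.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        # Take the one remaining closed door.
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Staying wins ~1/3 of the time; switching wins ~2/3.
```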
On the other hand, passing the “dumb Monty Hall” problem, which is phrased the same as the old one except that the doors are transparent, would sound less impressive to the layman, but would actually be a significantly greater achievement (that is, until the problem got absorbed into its training set). I’m curious to see how long it takes before LLMs can reliably solve these kinds of anti-riddles.
Interestingly, I asked ChatGPT the original riddle and then the anti-riddle, and it detected the change and tried (but failed) to respond accordingly:
[CHATGPT:] If you can see that the car is behind door 1, then the host would not have opened door 3, as doing so would reveal a goat and spoil the game. Therefore, this scenario is not possible, and the answer would be that the game show scenario described is not valid.
However, if we assume that the host made a mistake and opened door 3 even though the car was behind door 1, then it would not make a difference whether you switch to door 2 or stick with your original choice of door 1. This is because you already know that door 1 has the car behind it, so switching to door 2 or staying with door 1 would both result in you winning the car.
In this case it’s doing more than just parroting, but it still fails to produce an accurate answer.
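(To spell out the failure: in the transparent-doors scenario ChatGPT itself entertains, with the car visibly behind door 1, the player on door 1, and the host opening door 3, staying and switching are not equivalent. A minimal check, using my own encoding of that scenario:)

```python
# ChatGPT's hypothetical: transparent doors, car visibly behind door 1,
# player picked door 1, host opened door 3.
car, pick, opened = 1, 1, 3
stay = pick
switch = next(d for d in (1, 2, 3) if d not in (pick, opened))  # door 2
print("stay wins:", stay == car)      # True: you keep the visible car
print("switch wins:", switch == car)  # False: door 2 hides a goat
```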
Yeah, it’s definitely plausible to me that current LLMs are generally less capable than they are impressive (by some measure of each), and/or that people overestimate their capabilities. It’s also plausible to me that people anthropomorphize LLMs in ways that definitely aren’t warranted. (By “people”, I guess I mean the median Twitter user or the median EA, maybe not the median AI safety or ML researcher.)
On anti-riddles, I found the Inverse Scaling Prize winners pretty interesting; they seem related.

Bing definitely “helps” people to over-anthropomorphize it by actively corroborating that it has emotions (via self-report and overuse of emojis), consciousness, etc.