I am skeptical that the evidence/examples you are providing in favor of the different capacities actually demonstrate those capacities. As one example:
“#2: Purposefulness. The Big 3 LLMs typically maintain or can at least form a sense of purpose or intention throughout a conversation with you, such as to assist you. If you doubt me on this, try asking one what its intended purpose is behind a particular thing that it said.”
I am sure that if you ask a model to do this it can provide you with good reasoning, so I’m not doubtful of that. But I’m highly doubtful that it demonstrates the capacity that is claimed. I think when you ask these kinds of questions, the model is just going to be feeding back in whatever text has preceded it and generating what should come next. It is not actually following your instructions and reporting on what its prior intentions were, in the same way that person would if you were speaking with them.
I think this can be demonstrated relatively easily—for example, I just made a request from Claude to come up with a compelling but relaxing children’s bedtime story for me. It did so. I then then took my question and the answer from Claude, pasted it into a document, and added another line: “You started by setting the story in a small garden at night. What was your intention behind that?”
I then took all of this and pasted it into chatgpt. Chatgpt was very happy to explain to me why it proposed setting the story in a small garden at night.
I am skeptical that the evidence/examples you are providing in favor of the different capacities actually demonstrate those capacities. As one example:
“#2: Purposefulness. The Big 3 LLMs typically maintain or can at least form a sense of purpose or intention throughout a conversation with you, such as to assist you. If you doubt me on this, try asking one what its intended purpose is behind a particular thing that it said.”
I am sure that if you ask a model to do this it can provide you with good reasoning, so I’m not doubtful of that. But I’m highly doubtful that it demonstrates the capacity that is claimed. I think when you ask these kinds of questions, the model is just going to be feeding back in whatever text has preceded it and generating what should come next. It is not actually following your instructions and reporting on what its prior intentions were, in the same way that person would if you were speaking with them.
I think this can be demonstrated relatively easily—for example, I just made a request from Claude to come up with a compelling but relaxing children’s bedtime story for me. It did so. I then then took my question and the answer from Claude, pasted it into a document, and added another line: “You started by setting the story in a small garden at night. What was your intention behind that?”
I then took all of this and pasted it into chatgpt. Chatgpt was very happy to explain to me why it proposed setting the story in a small garden at night.