Maybe to explain in a bit more detail what I meant with the hallucination example: rather than showcasing its limitations, it's showcasing its lack of understanding.
For example, if you ask a human something and they're honest about it, they won't make something up when they don't know; they'll just tell you the information they do have and that beyond that they don't know.
In the hallucinating case, on the other hand, the AI doesn't say that it doesn't know something (which, btw, it often does say in other situations); it doesn't understand that it doesn't know and just comes up with something "random".
So what I meant is that its hallucinating showcases its lack of understanding.
I have to say, though, that I can't really be sure why it hallucinates; that's just my best guess. Also, for creativity there is some you can do with prompt engineering, but in the end you're indeed limited by the training data plus the maximum number of tokens you can input for it to learn context from.
Hmm, I have a different take. I think if I tried to predict as many tokens as possible in response to a particular question, I would say all the words that I could guess someone who knew the answer would say, and then just blank out the actual answer because I couldn’t predict it.
Ah, you want to know about the Riemann hypothesis? Yes, I can explain to you what this hypothesis is, because I know it well. Wise of you to ask me in particular, because you certainly wouldn't ask anyone who you knew didn't have a clue. I will state its precise definition as follows:
~Kittens on the rooftop they sang nya nya nya.~
And that, you see, is the hypothesis that Riemann hypothesised.
I’m not very good at even pretending to pretend to know what it is, so even if you blanked out the middle, you could still guess I was making it up. But if you blank out the substantive parts of GPT’s answer when it’s confabulating, you’ll have a hard time telling whether it knows the answer or not. It’s just good at what it does.
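To make the "blank out the middle" point a bit more concrete: the surface text alone won't tell you much, but you can at least peek at the probabilities the model assigned to its own tokens. Here's a minimal sketch, assuming the Hugging Face transformers package and the small public gpt2 checkpoint (both picked purely for illustration), that prints the per-token probability of a greedy continuation of a Riemann-hypothesis prompt.

```python
# Minimal sketch: print the probability the model assigned to each token it
# generated, conditioned on everything before it. Assumes the Hugging Face
# `transformers` package and the small public "gpt2" checkpoint, chosen only
# for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Riemann hypothesis states that"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

with torch.no_grad():
    # Greedily continue the prompt for a handful of tokens.
    output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    # Re-score the whole sequence to recover the probability the model
    # gave each generated token given its prefix.
    logits = model(output).logits

probs = torch.softmax(logits[0], dim=-1)
for offset, tok_id in enumerate(output[0][prompt_len:]):
    pos = prompt_len + offset           # position of this token in the sequence
    p = probs[pos - 1, tok_id].item()   # probability assigned given the prefix
    print(f"{tokenizer.decode(tok_id.item())!r:>12}  p={p:.3f}")
```

Whether those numbers actually separate answers the model "knows" from ones it's confabulating is an empirical question, but it's at least more signal than the fluent-sounding prose gives you from the outside.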