Being intelligent and being error-prone are not mutually exclusive. Humans are highly intelligent, and yet they make mistakes constantly. I believe AGI will have mental flaws and make errors as well.
ChatGPT is very far from human-level intelligence. All it’s trying to do is predict text based on gargantuan amounts of training data. So if there are lots of examples online of the thing you’re trying to do, such as writing a cover letter, it can learn to do it very well; but if your task is highly specific, it will be more likely to make errors.
It’s still highly impressive though: speaking natural language used to be highly difficult for AI, and ChatGPT nails it on this front. It can do things in terms of adapting to prompts and basic reasoning that surprised me.
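To make the “predicting text” point concrete, here is a toy sketch of next-word prediction. This is nothing like ChatGPT’s actual architecture (which is a large neural network trained on vastly more data); it’s just a bigram counter, but the objective is the same in spirit: given what came before, output the most likely continuation seen in training.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" (made up for this example).
corpus = (
    "i am writing to apply for the role . "
    "i am writing to express my interest . "
    "i am happy to help ."
).split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most frequently seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("am"))       # 'writing' (seen twice, vs 'happy' once)
print(predict_next("writing"))  # 'to'
```

This also illustrates the failure mode above: ask it to continue a cover-letter phrase it has seen many times and it does fine, but ask it about a word that was rare (or absent) in training and it has little to go on.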
Dear friend @titotal,
Many, many thanks for your measured response, as well as for the link to your article, which I found very enlightening. I think I agree with your assessment that the transition to an AGI, or something close to it, will not take place overnight, and that it may never arrive at all; or at least that there won’t be such an AGI existential threat as many prominent commentators, even in this community, assume.
However, as you may see from my own (ok, admittedly a bit polemical) linked post (which, I now notice, I hadn’t managed to turn into a hyperlink), I’m a bit worried by us humans making AI (or computability, anyway) the yardstick of our intelligence, and then being surprised when we fall short of it, or find something better at it, rather than naming that thing as something different from intelligence. A sort of negative performativity in action there.
So, in summary: ok, nailing responses to linguistic prompts, fine, good, excellent. But let’s not reduce what we humans believe makes us lords of the universe (intelligence; this is a bit tongue-in-cheek, as I also believe that animals have civilisations and intelligences of their own) to responding to prompts, when we can do so much more. I believe intelligence also entails emotions, artistic behaviour, cooking behaviour, empathy behaviour, and other behaviour not reducible to ‘responding to prompts’.
Apologies if I was waffling a bit above; I’d be delighted to hear your thoughts!
Best wishes,
Haris
PS: The edit is just changing the link to the article into a hyperlink :)