Dear friend @titotal
Many thanks for your measured response, and for the link to your article, which I found very enlightening. I think I agree with your assessment that the transition to AGI, or something close to it, will not happen overnight, that it may never arrive at all, or at least that there won't be the kind of AGI existential threat that many prominent commentators, even in this community, assume.
However, as you may see from my own (admittedly a bit polemical) linked post (which, I now notice, I hadn't managed to turn into a hyperlink), I'm a bit worried about us humans making AI (or computability, anyway) the yardstick of our intelligence, and then being surprised when we fail at it, or when something else turns out to be better at it, rather than naming that thing as something different from intelligence. A sort of negative performativity in action there.
So, in summary: excelling at responding to linguistic prompts is fine, good, excellent even. But let's not reduce what we believe makes us lords of the universe (intelligence; this is a bit tongue-in-cheek, as I also believe that animals have civilisations and intelligences of their own) to responding to prompts, when we can do so much more. I believe intelligence also entails emotion, artistic behaviour, cooking, empathy, and other behaviours not reducible to 'responding to prompts'.
Best Wishes
Apologies if I was waffling a bit above; I'd be delighted to hear your thoughts!
Haris
PS: The edit is just changing the link to the article into a hyperlink :)