Sections 1 and 2 of the Superintelligence FAQ may go some way toward answering your question.[1]
The FAQ doesn’t discuss GPT systems since it’s a few years old, but I still think it’s the best non-technical intro to “Why will AI surpass humans, and why might this be dangerous?”. (The first two sections address the “Why will AI surpass humans?” question.)
Thanks for your reply. I’ve read it, but it doesn’t explain why we expect AI to keep improving at an explosive rate. Even though computing speed (FLOPs) is increasing in line with Moore’s law, does faster computing mean an AGI will be able to do most things on earth better than humans?