Oh, and my apologies for letting questions dangle. I think human intelligence is very limited, in the sense that it is built hyper-redundantly against injury, so its architecture must be much larger to achieve the same task. The latest upgrade to language models, DeepMind's RETRO architecture, achieves the same performance as GPT-3 (which is to say, it can write convincing poetry) while using only 1/25th the parameters. GPT-3 was only 1% of a human brain's connectivity, so RETRO is literally 1/2,500th of a human brain, with human-level performance on that task. I think narrow super-intelligences will dominate, being more efficient than AGI or us.
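To make that back-of-envelope arithmetic explicit (a minimal sketch; the brain synapse count here is just the number implied by the "GPT-3 = 1% of brain connectivity" claim above, not an independently measured figure):

```python
# Back-of-envelope scale comparison; all figures are rough assumptions.
gpt3_params = 175e9                     # GPT-3's parameter count
human_synapses = gpt3_params / 0.01     # implied by "GPT-3 = 1% of brain connectivity"
retro_fraction = 1 / 25                 # RETRO matches GPT-3 with ~1/25th the parameters

retro_vs_brain = (gpt3_params / human_synapses) * retro_fraction
print(f"RETRO is ~1/{1 / retro_vs_brain:,.0f} of a human brain")  # -> ~1/2,500
```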
In regard to overall algorithmic efficiency: in only five years we've seen multiple improvements to training and architecture, where what once took a million examples now needs ten, or even generalizes to unseen data. Meanwhile, Lottery Ticket pruning can make a network 10x smaller while boosting performance. There was even a supercomputer simulation that neural networks sped up 2 BILLION-fold... which is insane. I expect more jumps in the math ahead, but I don't think we have many of those leaps left before our intelligence-algorithms are just "as good as it gets". Do you see a FOOM-event capable of 10x, 100x, or larger gains left to be found? I would bet there is a 100x waiting, but finding it might become tricky and take successively more resources, asymptotically...
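For concreteness, here is a minimal sketch (my own illustration, not the original Lottery Ticket code) of the magnitude-pruning step behind that 10x result; the actual procedure prunes iteratively and rewinds the surviving weights to their original initialization before retraining:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping the top (1 - sparsity).

    One-shot sketch of the pruning step from the Lottery Ticket Hypothesis;
    the full method prunes over several rounds, rewinding kept weights to
    their initial values each time.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask  # the "winning ticket": 10x smaller at 90% sparsity

# Example: prune a random layer to 90% sparsity.
rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 256))
ticket = magnitude_prune(layer, sparsity=0.9)
print(f"surviving weights: {np.count_nonzero(ticket) / ticket.size:.1%}")  # ~10.0%
```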
I think AGI would easily be capable of FOOM-ing 100x+ across the board. And as for AGI being developed, it seems like we are getting ever closer with each new breakthrough in ML (and there doesn’t seem to be anything fundamentally required that can be said to be “decades away” with high conviction).