For what it's worth, I've mostly not been interested in AI safety/alignment (and am still mostly not), but this also seems like a pretty big deal to me. I haven't actually read the details, but this is basically not "narrow" AI anymore, right?
I guess the expressions "narrow" and "general" are a bit unfortunate, since I don't really want to call this either. I would want to reserve the term AGI for AI that can do at least this, but can also reason generally and abstractly, and excels at one-shot learning (although there are specific networks designed for one-shot learning, like Siamese networks. Actually, why aren't similar networks used more often, even as subnetworks?).
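For readers unfamiliar with the Siamese networks mentioned above: their defining idea is weight sharing, i.e. both inputs pass through the *same* embedding function, and the network is trained so that same-class pairs embed close together, which is what enables one-shot comparison against a single example. Here is a minimal untrained sketch of that structure (the toy linear-plus-tanh embedding and all names are my own illustration, not from any particular implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared parameters: the key property of a Siamese network is that
# every input is embedded by the SAME function with the SAME weights.
W = rng.normal(size=(4, 8))  # toy embedding: 8-dim input -> 4-dim

def embed(x):
    # One shared embedding function applied to both branches.
    return np.tanh(W @ x)

def distance(x1, x2):
    # Compare the two embeddings; training would shape W so that
    # matching pairs land close and non-matching pairs land far apart.
    return np.linalg.norm(embed(x1) - embed(x2))

a = rng.normal(size=8)
b = a + 0.01 * rng.normal(size=8)  # near-duplicate of a
c = rng.normal(size=8)             # unrelated input

# Even untrained, a near-duplicate of an input should embed much
# closer to it than an unrelated input does.
print(distance(a, b), distance(a, c))
```

In a real one-shot setting you would train `W` with a contrastive or triplet loss, then classify a new example by its embedding distance to a single stored exemplar per class.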
My take is that indeed, we now have AGI, but it's really shitty AGI, not even close to human-level. (GPT-3 was another example of this; pretty general, but not human-level.) It seems that we now have the know-how to train a system that combines all the abilities and knowledge of GPT-3 with all the abilities and knowledge of these game-playing agents. Such a system would qualify as AGI, but not human-level AGI. The question is how long it'll take, and how much money (to make it bigger and train it longer), to get to human-level, or at least something dangerously powerful.