For what it’s worth, I’ve mostly not been interested in AI safety/alignment (and am still mostly not), but this also seems like a pretty big deal to me. I haven’t actually read the details, but this is basically not “narrow” AI anymore, right?
I guess the terms “narrow” and “general” are a bit unfortunate, since I don’t really want to call this either. I would reserve the term AGI for AI that can do at least this, but can also reason generally and abstractly, and excels at one-shot learning (although there are specific architectures designed for one-shot learning, like Siamese networks. Actually, why aren’t similar networks used more often, even as subnetworks?).
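For readers unfamiliar with the Siamese-network idea mentioned above, here is a minimal sketch: two inputs pass through the *same* embedding function, and similarity is judged by distance in embedding space, which is what makes one-shot comparison against a single stored example possible. The weights, dimensions, and function names below are purely illustrative (a fixed random linear map stands in for a trained network), not any particular published model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # shared embedding weights (toy stand-in for a trained net)

def embed(x):
    """The shared 'tower' applied to every input -- the defining Siamese trait."""
    return np.tanh(W @ x)

def distance(a, b):
    """Euclidean distance between embeddings; small means 'probably same class'."""
    return float(np.linalg.norm(embed(a) - embed(b)))

x = rng.normal(size=16)
y = x + 0.01 * rng.normal(size=16)  # a slightly perturbed copy of x
z = rng.normal(size=16)             # an unrelated input

assert distance(x, x) == 0.0
assert distance(x, y) < distance(x, z)  # near-duplicates embed closer together
```

In a real Siamese setup the embedding is learned (e.g. with a contrastive or triplet loss) so that this distance reflects semantic similarity, but the weight-sharing structure is the same.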
My take is that indeed, we now have AGI—but it’s really shitty AGI, not even close to human-level. (GPT-3 was another example of this: pretty general, but not human-level.) It seems that we now have the know-how to train a system that combines all the abilities and knowledge of GPT-3 with all the abilities and knowledge of these game-playing agents. Such a system would qualify as AGI, but not human-level AGI. The question is how long, and how much money (to scale it up and train it longer), it’ll take to get to human-level, or at least to something dangerously powerful.