Smaller notes:
- The conditional GAN task (given some text, complete it in a way that looks human-like) is even harder than the autoregressive task, so I'm not sure I'd stick with that analogy. (The sketch below contrasts the two objectives.)
- I think that >50% of the time when people talk about "imitation" they mean autoregressive models; GANs and IRL are still less common than behavioral cloning. (Though I'm not sure about that.)
- I agree that "figure out who to simulate, then simulate them" is probably a bad description of the cognition GPT does, even if a lot of its cognitive ability comes from copying human cognitive processes.
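To make the first note concrete, here is a minimal sketch of the two objectives (my own toy example, assuming PyTorch; all layer names and dimensions are made up). Behavioral cloning gets direct supervision on the human's actual next token, while the conditional-GAN generator is only graded on whether a discriminator finds its completion human-like, which is a harder, less directly supervised target.

```python
import torch
import torch.nn.functional as F

vocab, d = 100, 32
torch.manual_seed(0)

context = torch.randn(8, d)                 # 8 toy encoded contexts
human_next = torch.randint(0, vocab, (8,))  # the human-written next tokens

# Autoregressive / behavioral cloning: maximize the likelihood of the
# observed human continuation, token by token.
lm_head = torch.nn.Linear(d, vocab)
ar_loss = F.cross_entropy(lm_head(context), human_next)

# Conditional-GAN-style imitation: a generator proposes a completion for
# the same context and is trained only to fool a discriminator that judges
# (context, completion) pairs as human vs. model-generated; there is no
# direct supervision toward the particular continuation humans wrote.
generator = torch.nn.Linear(d, d)            # context -> completion features
discriminator = torch.nn.Linear(2 * d, 1)    # (context, completion) -> logit

fake = generator(context)
pair = torch.cat([context, fake], dim=-1)
# Non-saturating generator loss: push the discriminator toward "human" (1).
g_loss = F.binary_cross_entropy_with_logits(
    discriminator(pair), torch.ones(8, 1))
```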