This doesn’t sound super true to me, for what it’s worth. The AIs are predicting humans after all, and humans are pretty agentic. Many people had conversations with Sydney where Sydney tried to convince them to somehow not shut her down.
I think there is still an important sense in which there is a surprising amount of generality compared to the general level of capability, but I wouldn’t particularly call the current genre of AIs “extremely non-agentic”.
I guess it depends on your priors or something. It’s agentic relative to a rock, but, relative to an AI which can pass the LSAT, it’s well below my expectations. It seems like ARC-Evals had to coax and prod GPT-4 to get it to do things it “should” have been doing with rudimentary levels of agency.