My take is that both were fairly wrong.[1] AI is much more generally intelligent and single systems are useful for many more things than Holden and the tool AI camp would have predicted. But they are also extremely non-agentic.
(To me this is actually rather surprising. I would have expected agency to be necessary to get this much general capability.)
I'm tempted to call it a wash. But rereading Holden's writing in the linked post, it's pretty narrowly arguing against AI as necessarily being agentic, which seems to have predicted the current world (though note there's still plenty of time for AIs to get agentic, and I still roughly believe the arguments that they probably will).
This seems unsurprising, tbh. I think everyone now should be pretty uncertain about how AI will go in the future.
This doesn't sound super true to me, for what it's worth. The AIs are predicting humans, after all, and humans are pretty agentic. Many people had conversations with Sydney where Sydney tried to convince them to somehow not shut her down.
I think there is still an important sense in which there is a surprising amount of generality compared to the general level of capability, but I wouldn't particularly call the current genre of AIs "extremely non-agentic".
I guess it depends on your priors or something. It's agentic relative to a rock, but relative to an AI which can pass the LSAT, it's well below my expectations. It seems like ARC Evals had to coax and prod GPT-4 to get it to do things it "should" have been doing with rudimentary levels of agency.