[Question] Can we evaluate the “tool versus agent” AGI prediction?

In 2012, Holden Karnofsky[1] critiqued MIRI (then SI), saying that “SI appears to neglect the potentially important distinction between ‘tool’ and ‘agent’ AI.” In particular, he claimed:

Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will work.

I understand this to be the first introduction of the “tool versus agent” ontology, and it is a helpful, relatively concrete prediction. Eliezer replied here, making (among others) the following points, summarized:

  1. Tool AI is nontrivial

  2. Tool AI is not obviously the way AGI should or will be developed

Gwern replied more directly, saying:

AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems.

Eleven years later, can we evaluate the accuracy of these predictions?

  1. ^

    Some Bayes points go to LW commenter shminux for saying that this Holden kid seems like he’s going places.
