Fair point. It seems that the central property of AI systems this argument rests on is their speed, or the time until you get feedback. I agree it seems likely that AI training time (plus the ability to evaluate performance on withheld test data or similar) will be shorter in wall-clock time than feedback loops for humans (e.g. education reforms, genetic engineering, …).
However, some ways in which this could fail to enable rapid self-improvement:
The speed advantage could be offset by other differences, e.g. even less interpretable “thinking processes”.
Performance at certain tasks may be bottlenecked by feedback from slow real-world interactions (e.g. if sim2real transfer doesn’t work well for those tasks).