I suspect you are more broadly underestimating the extent to which people used “insect-level intelligence” as a generic stand-in for “pretty dumb,” though I haven’t looked at the discussion in Mind Children and Moravec may be making a stronger claim.
I think that’s good push-back and a fair suggestion: I’m not sure how seriously the statement in Nick’s paper was meant to be taken. I hadn’t considered that it might be almost entirely a quip. (I may ask him about this.)
Moravec’s discussion in Mind Children is similarly brief: He presents a graph of the computing power of different animals’ brains and states that “lab computers are roughly equal in power to the nervous systems of insects.” He also characterizes current AI behaviors as “insectlike” and writes: “I believe that robots with human intelligence will be common within fifty years. By comparison, the best of today’s machines have minds more like those of insects than humans. Yet this performance itself represents a giant leap forward in just a few decades.” I don’t think he’s just being quippy, but there’s also no suggestion that he means anything very rigorous/specific by the comparison.
Rodney Brooks, I think, did mean for his comparisons to insect intelligence to be taken very seriously. The idea of his “nouvelle AI program” was to create AI systems that match insect intelligence, then use that as a jumping-off point for trying to produce human-like intelligence. I think walking and obstacle navigation, with several legs, was used as the main dimension of comparison. The Brooks case is a little different, though, since (IIRC) he only claimed that his robots exhibited important aspects of insect intelligence, or fell just short of insect intelligence, rather than directly claiming that they actually matched insect intelligence. On the other hand, he apparently felt he had gotten close enough to transition to the stage of the project that was meant to go from insect-level stuff to human-level stuff.
A plausible reaction to these cases, then, might be:
“OK, Rodney Brooks did make a similar comparison, and was a major figure at the time, but his stuff was pretty transparently flawed. Moravec’s and Bostrom’s comments were at best fairly off-hand, suggesting casual impressions more than they suggest outcomes of rigorous analysis. The more recent ‘insect-level intelligence’ claim is pretty different, since it’s built on top of much more detailed analysis than anything Moravec/Bostrom did, and it’s less obviously flawed than Brooks’ analysis. The likelihood that it reflects an erroneous impression is, therefore, a lot lower. The previous cases shouldn’t actually do much to raise our suspicion levels.”
I think there’s something to this reaction, particularly if there’s now more rigorous work being done to operationalize and test the “insect-level intelligence” claim. I hadn’t yet seen the recent post you linked to, which, at first glance, seems like a good and clear piece of work. The more that rigorous work is done to flesh out the argument, the less inclined I am to treat the Bostrom/Moravec/Brooks cases as part of an epistemically relevant reference class.
My impression a few years ago was that the claim wasn’t yet backed by any really clear/careful analysis. At least, the version that filtered down to me seemed to be substantially based on fuzzy analogies between RL agent behavior and insect behavior, without anyone yet knowing much about insect behavior. (Although maybe this was a misimpression.) So I probably do stand by the reference class being relevant back then.
Overall, to sum up, my position here is something like: “The Bostrom/Moravec/Brooks cases do suggest that it might be easy to see roughly insect-level intelligence, if that’s what you expect to see and you’re relying on fuzzy impressions, paying special attention to stuff AI systems can already do, or not really operationalizing your claims. This should make us more suspicious of modern claims that we’ve recently achieved ‘insect-level intelligence,’ unless they’re accompanied by transparent and pretty obviously robust reasoning. Insofar as this work is being done, though, the Bostrom/Moravec/Brooks cases become weaker grounds for suspicion.”
I do think my main impression of insect <-> simulated robot parity comes from very fuzzy evaluations of insect motor control vs. simulated robot motor control (rather than from any careful analysis; I’m a bit more skeptical of the careful analysis, though I do think it’s a relevant indicator that we are at least trying to actually figure out the answer here in a way that wasn’t true historically). And I do have only a passing knowledge of insect behavior, from watching YouTube videos and reading some book chapters about insect learning. So I don’t think it’s unfair to put it in the same reference class as Rodney Brooks’ evaluations, to the extent that his was intended as a serious evaluation.
Yeah, FWIW I haven’t found any recent claims about insect comparisons particularly rigorous.