I agree that the properties are somewhat simplified, but a key problem here is that the intuition and knowledge we have about how to make software better fail for deep learning. Current procedures for developing and debugging software work less well for neural networks doing text prediction than psychology does. And at that point, when it comes to actually interacting with these systems, it seems worse to group AI with software than to group AI with humans. Obviously, calling current AI humanlike is still mostly wrong. But that just shows that we shouldn't be using these categories at all!