Really interesting. I think there are connections to the extended mind thesis—the view that mental processes are partly constituted by the body and the external world, such that cognition can’t be neatly circumscribed to the brain. If so, the relevant system is deeper than what is modelled by ‘computation done by neurons + training data’.
Another complication is the relationship between ecological or environmental complexity and the evolution of intelligence. Peter Godfrey-Smith’s ‘Environmental Complexity and the Evolution of Cognition’ is a good read on this. Other comments on this post point to video game worlds and getting interaction by copying the evolving agent—but I think this may drastically understate the complexity of co-evolving sets of organisms in the real world.
I think it’s unlikely that developing artificial intelligence requires these wrinkles of mind extension or environmental complexity. But I interpret the evolutionary anchor argument as a generous upper bound based on what we know evolution did at least once. For that purpose, our model should probably defer to evolution’s wrinkles rather than assume they’re irrelevant.