Thanks for another fascinating comment. Although we haven’t been framing the subject in this way (the Braitenberg reference is new to me), we’ve been thinking about similar issues for a long time. At an early stage of the project we had a spreadsheet that attempted to judge the extent to which a handful of robots and AI programs exhibited the 53 features we investigated for invertebrates. We de-prioritized the spreadsheet because filling it in required too many subjective judgment calls and we worried that the methodology we used to investigate invertebrate sentience wouldn’t be applicable to non-biological organisms. Ultimately, this is a question we hope to return to. There is ample material to explore: functionalism (and its denial) in philosophy of mind, graded states of consciousness, “evolution” in artificial reinforcement learning, the analogy between nonhuman animals and robots, and many others.
Thanks for your enriching comment, Gavin. Just wanted to add to Jason’s response that, unfortunately, there is no consensus on whether the various features potentially indicative of consciousness would be adaptive for every conscious individual, independent of a species’ evolutionary history and adaptive needs.
Complicating things even further, we do not even have such a thing as a ‘universal’ instrument for measuring intelligence in humans: cultural differences in intelligence shape results country by country. This points to the need for more research on two fronts: criteria for identifying which features are more robust indicators of consciousness, and forms of measurement that are sensitive to relevant differences between different groups of individuals.