I mentioned Braitenberg vehicles in a reply to one of Jason's other posts and then realized I hadn't seen them mentioned elsewhere in relation to invertebrate sentience (or in EA really), so I thought it would be worth raising them here, as the concept may provide some interesting perspectives. Essentially, the vehicles are a thought experiment by Braitenberg (a neuroscientist) on intelligence, building up from something simple that moves faster when it doesn't like where it is (vehicle 1) to a vehicle that is practically human (vehicle 14). The book explores at what point we can agree that a vehicle is intelligent, even though the mystery of biological intelligence isn't present because we built it (actually, this is almost exactly analogous to the Mesh:Hero experiment described in the 2017 Consciousness report). Strangely, the work seems to be better known by roboticists than by neuroscientists.
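(For anyone who hasn't come across them, here's a minimal Python sketch of the vehicle-1 idea: the sensor reading directly sets the motor speed, so the vehicle hurries through places it 'dislikes' and lingers where the stimulus is weak. The inverse-square falloff and all constants are my own illustrative choices, not Braitenberg's.)

```python
import math

def vehicle_1_step(x, y, heading, source, dt=0.1):
    """One step of a Braitenberg vehicle 1: a single sensor wired to a
    single motor, so the vehicle runs fast where the stimulus is strong
    (and so spends little time there, as if it dislikes it) and slows
    down where the stimulus is weak."""
    sx, sy = source
    intensity = 1.0 / (1.0 + (x - sx) ** 2 + (y - sy) ** 2)
    speed = intensity  # motor output is just the sensor reading
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt)

# Drive the vehicle for a while; it never steers, only modulates speed.
x, y, heading = 0.0, 0.0, 0.3
for _ in range(100):
    x, y = vehicle_1_step(x, y, heading, source=(1.0, 1.0))
print(round(x, 3), round(y, 3))
```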
I think Braitenberg vehicles could be a useful reference for this project, as the vehicles were all based on biological concepts of different levels of intelligence, and they may already have been discussed by philosophers of mind as to what level constitutes the threshold for an intelligent (probably analogous to conscious) entity. Indeed, the vehicles could also provide inspiration for something analogous to the sentience score requested by Sammy, as each vehicle was intended to represent a 'step up' in intelligence. So one could take the maximum or average level that a taxon reaches on such a scale as its score.
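As a toy illustration of what I mean by taking the max or average level as a score (the taxa, levels, and numbers below are entirely made up):

```python
# Hypothetical toy data: the highest 'vehicle-like' level each taxon
# has been judged to reach on a few behavioural tests (invented numbers).
levels_reached = {
    "honey bees": [5, 7, 6],
    "fruit flies": [3, 4, 4],
}

def sentience_score(levels, method="max"):
    """Collapse a taxon's per-test levels into one score: either the
    highest level it ever reaches, or its average across tests."""
    return max(levels) if method == "max" else sum(levels) / len(levels)

for taxon, levels in levels_reached.items():
    print(taxon, sentience_score(levels), sentience_score(levels, "mean"))
```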
Hey Gavin!
Thanks for another fascinating comment. Although we haven’t been framing the subject in this way (the Braitenberg reference is new to me), we’ve been thinking about similar issues for a long time. At an early stage of the project we had a spreadsheet that attempted to judge the extent to which a handful of robots and AI programs exhibited the 53 features we investigated for invertebrates. We de-prioritized the spreadsheet because filling it in required too many subjective judgment calls and we worried that the methodology we used to investigate invertebrate sentience wouldn’t be applicable to non-biological organisms. Ultimately, this is a question we hope to return to. There is ample material to explore: functionalism (and its denial) in philosophy of mind, graded states of consciousness, “evolution” in artificial reinforcement learning, the analogy between nonhuman animals and robots, and many others.
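To give a sense of the shape of that spreadsheet, here is a minimal sketch; the entities, features, and numbers are invented for illustration (the real spreadsheet covered all 53 features), and the naive average is exactly the kind of subjective aggregation that made us de-prioritize it:

```python
# Invented rows and numbers purely for illustration: each cell is a
# subjective judgment in [0, 1] of how strongly an entity exhibits a
# feature, mirroring the structure of the spreadsheet described above.
features = ["nociception-like sensing", "protective behaviour", "learned avoidance"]
judgments = {
    "robot vacuum": [0.0, 0.3, 0.1],
    "RL game agent": [0.0, 0.1, 0.7],
}

for entity, row in judgments.items():
    avg = sum(row) / len(row)  # a naive aggregate across the features
    print(f"{entity}: naive feature average = {avg:.2f}")
```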
Thanks for your enriching comment, Gavin. Just wanted to add to Jason's response that, unfortunately, there is no consensus on whether various features potentially indicative of consciousness would be adaptive for any conscious individual, regardless of its species' evolutionary history and adaptive needs.
Complicating things even further, we do not even have such a thing as a 'universal' intelligence-measuring instrument for humans: cultural differences influence results from country to country. This points to the need for more research, both into criteria for understanding which features might be more robust for detecting consciousness, and into forms of measurement that are sensitive to relevant differences between different groups of individuals.