I’m puzzled by Mallatt’s response to the last question about consciousness in computer systems. It seems to me that he and Feinberg are applying a double standard when judging the consciousness of computer programs. I don’t know exactly what he has in mind when he talks about the enormous complexity of consciousness, but other parts of the interview show some of the diagnostic criteria Mallatt uses to judge consciousness in practice. These include behavioral tests, such as returning to places where an animal previously saw food, tending wounds, and hiding when injured, as well as structural tests, such as multiple levels of intermediate processing between sensory input and motor output. Existing AIs already pass the structural test I listed, and I believe they could pass the behavioral tests given a simple virtual environment and reward function. I don’t see a principled way of including the simplest types of animal consciousness while excluding any form of computer consciousness.
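To make that last claim concrete, here is a minimal sketch (my own illustration, not anything from the interview) of the kind of “simple virtual environment and reward function” I have in mind: a tabular Q-learning agent in a tiny grid world with food at a fixed cell. After training, the greedy policy reliably navigates back to the cell where the agent previously found food, which is at least superficially analogous to the “returns to remembered food sites” behavioral criterion. All names and parameters here are illustrative assumptions.

```python
# Minimal sketch: Q-learning agent that learns to return to a remembered food site.
import random

GRID = 5                 # 5x5 grid world
FOOD = (4, 4)            # fixed food location
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action, clamp to the grid, and return (next_state, reward, done)."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    if nxt == FOOD:
        return nxt, 1.0, True   # reward for reaching the food cell
    return nxt, -0.01, False    # small step cost to encourage direct routes

Q = {}  # Q[(state, action_index)] -> estimated action value

def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, epsilon = 0.5, 0.95, 0.1

for episode in range(2000):
    s = (0, 0)
    for _ in range(100):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q(s, i))
        nxt, r, done = step(s, ACTIONS[a])
        best_next = max(q(nxt, i) for i in range(len(ACTIONS)))
        Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
        s = nxt
        if done:
            break

# After training, the greedy policy heads straight back to the food location.
s = (0, 0)
path = [s]
while s != FOOD and len(path) < 20:
    a = max(range(len(ACTIONS)), key=lambda i: q(s, i))
    s, _, _ = step(s, ACTIONS[a])
    path.append(s)
print(path)
```

Whether this behavior counts as evidence of anything consciousness-related is of course the very question at issue; the point is only that the behavioral criterion, as stated, is easy to satisfy in software.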
Yeah, I think this is a worry for his view. I also personally assign a somewhat higher likelihood to invertebrate consciousness than to modern AI consciousness, because of evolutionary relatedness, greater structural homology, and because invertebrates probably satisfy more of the criteria for consciousness that I would use.
You might be interested in my next interview on this subject, which will be with someone who discusses findings from modern AI and robotics in the context of invertebrate consciousness and comes to a more sceptical conclusion on that basis.