They’ve got leading advocates of two major consciousness theories (global workspace theory and integrated information theory).
Thanks for sharing! This sounds like a promising start. I’m skeptical that things like this could fully resolve the disagreements, but they could make progress that would be helpful in evaluating AIs.
I do think that there is a tension between taking a strong view that AI is not conscious (and will not be for a long time) and assuming that animals with very different brain structures do have conscious experience.
If animals with very different brains are conscious, then I’m sympathetic to the thought that we could probably make conscious systems if we really tried. Modern AI systems look a bit Chinese-room-ish, so it may still be that the incentives aren’t there to put in the effort to make genuinely conscious systems.
“I do think that there is a tension between taking a strong view that AI is not conscious (and will not be for a long time) and assuming that animals with very different brain structures do have conscious experience.”
“If animals with very different brains are conscious, then I’m sympathetic to the thought that we could probably make conscious systems if we really tried.”
Currently, as I heard from someone who works in a lab that researches the perception of pain with no apparent cause using brain scans (EEG and MRI), it is challenging even to understand how the brain works, let alone how consciousness emerges. There are also other ways to assess animal consciousness, such as evolutionary comparisons and behavioral observation. So it does not follow that if we find (with high probability) that different animals are conscious, we would likely be able to make conscious systems.
There are also different types of consciousness,[1] including those related to sensing, processing, and perceiving. So, depending on your definition, AI could already be considered conscious, since it takes in and processes inputs.
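To make that last point concrete: on a bare "senses and processes inputs" definition, even a thermostat qualifies. Here is a deliberately trivial sketch (a hypothetical Python illustration of my own, not anyone's proposed model of consciousness):

```python
# A minimal "takes and processes inputs" system: a thermostat.
# Hypothetical illustration only. The point is that an input-processing
# definition of consciousness is satisfied by even the simplest programs,
# so the definition alone cannot settle whether AI is conscious.

def thermostat(reading_celsius: float, setpoint_celsius: float = 21.0) -> str:
    """Sense an input (temperature), process it against a setpoint, respond."""
    if reading_celsius < setpoint_celsius:
        return "heat on"
    return "heat off"

print(thermostat(18.5))  # -> heat on
print(thermostat(23.0))  # -> heat off
```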
[1] Anil Seth, Being You: A New Science of Consciousness (2021).