‘The structure of academia rewards people for developing one theory and sticking to it. There are few academic incentives for reaching a consensus or even hashing out the relative probabilities of different views.’
I agree with this, but I wanted to link to a new paradigm that I think makes good headway on this problem (and also highlights some ongoing consciousness research). This is being funded by the Templeton Foundation (a large philanthropic science funder), and essentially they’ve got prominent advocates of two leading consciousness theories (global workspace theory and integrated information theory; see here) to go head-to-head in a kind of structured adversarial collaboration. That is, they’ve jointly developed a series of experiments and agreed beforehand that ‘if the results go [x] way, this supports [x] theory’. As far as I know, the results haven’t been published yet.
Disclaimer that in a previous life I was a comparative psychologist, so I am nerdily interested in consciousness. But I do think that there is a tension between taking a strong view that AI is not conscious / will not be conscious for a long time, versus assuming that animals with very different brain structures do have conscious experience. (A debate I have seen play out in comparative cognition research, e.g. are animals just performing ‘Chinese room’-type computations?) Perhaps the strong view will turn out to be justified (e.g. maybe consciousness is an inherent property of living systems and not of non-living ones), but I am a little skeptical that it’s that simple.
they’ve got prominent advocates of two leading consciousness theories (global workspace theory and integrated information theory)
Thanks for sharing! This sounds like a promising start. I’m skeptical that things like this could fully resolve the disagreements, but they could make progress that would be helpful in evaluating AIs.
I do think that there is a tension between taking a strong view that AI is not conscious / will not be conscious for a long time, versus assuming that animals with very different brain structures do have conscious experience.
If animals with very different brains are conscious, then I’m sympathetic with the thought that we could probably make conscious systems if we really tried. Modern AI systems look a bit Chinese roomish, so it might still be that the incentives aren’t there to put in the effort to make really conscious systems.
“I do think that there is a tension between taking a strong view that AI is not conscious / will not be conscious for a long time, versus assuming that animals with very different brain structures do have conscious experience.”
If animals with very different brains are conscious, then I’m sympathetic with the thought that we could probably make conscious systems if we really tried.
Currently, as I heard from someone who works in a lab that researches the perception of pain with no apparent cause using brain scans (EEG and MRI), it is challenging enough to come up with an understanding of how the brain works, let alone how consciousness emerges. There are other ways to assess animal consciousness, such as evolutionary comparisons and observation. So it does not follow that if we find (with high probability) that different animals are conscious, we would likely be able to make conscious systems.
There are also different types of consciousness,[1] including those related to sensing, processing, and perceiving. So, depending on your definition, AI could already be considered conscious, since it takes in and processes inputs.
[1] Anil Seth, Being You: A New Science of Consciousness (2021).