I think the greater potential concern is false positives on consciousness, not false negatives.
This is definitely a serious worry, but it seems much less likely to me.
One way this could happen is if we build large numbers of general purpose AI systems that we don’t realize are conscious and/or can suffer. However, I think that suffering is a pretty specialized cognitive state that was designed by natural selection for a role specific to our cognitive limitations, and not one we are likely to encounter by accident while building artificial systems. (It seems more likely to me that digital minds won’t suffer, but will have morally relevant states that we don’t recognize as morally relevant because we’re so focused on suffering.)
Another way this could happen is if we artificially simulate large numbers of biological minds in detail. However, it seems very unlikely to me that we will ever run those simulations, and very unlikely that we would miss the potential for accidental suffering if we do. At least in the short term, I expect most plausible digital minds will be intentionally designed to be conscious, which I think makes the risk of mistakenly believing they’re conscious more of a worry.
That said, I’m wary of trying to adjudicate which is more concerning for topics that are still so speculative.
I kinda like “z-risk”, for similar reasons.