Thanks for the post—I’ve encountered this “consciousness must arise from an analog substrate” view before in places like this conversation with Magnus Vinding and David Pearce, and am interested in understanding it better.
I don’t think I really follow the argument for this view, but even granting that consciousness requires an analog substrate, would that change our priorities? It seems as though those who want to create artificial sentience (including conscious uploads of human minds) would just use analog computers instead. I suppose if you’re imagining a future in which artificial sentience might arise and have transformative effects before other forms of transformative AI, this consideration could be important, since developing analog computers competitive with digital ones would take time and may be intractable for now.
But assuming high-fidelity digital mind emulations are essentially p-zombies and aren’t behaviourally distinguishable from conscious minds, I think there are only a few ways your argument would have strategic relevance for us, none of which seem super compelling to me.
It could be that we should expect people to be comfortable being uploaded as digital minds in worlds where digital minds are in fact conscious, but not comfortable with this otherwise. I don’t think the public is good enough at philosophy of mind that this would hold!
We could be concerned that the first few uploads, created before we develop a better understanding of consciousness, were made on the mistaken assumption that they would have subjective experience, and are not having the (hopefully happy) lives we wanted them to have. But this seems pretty low-stakes to me.
There might be path-dependencies in what human-originating civilization winds up valuing, and unless we adopt the view that consciousness requires analog substrates ahead of creating supposedly-conscious digital minds, we are at greater risk of ending up with the “wrong” moral valuation.
Maybe it is important for us to have a very clear understanding of consciousness, and this is a key component of that. (But I would be wary about backfire risk: I expect in the current moment advancing our understanding of consciousness is slightly negative for the reasons discussed here.)