The idea of the intention-action gap is really interesting. I would imagine that the personal utility lost by closing this gap is also a significant factor. That is, if I recognize that this AI is sentient, what can I no longer do with/to it? If the sacrifice is too inconvenient, we might not be in such a hurry to act on our intuitions, even when we suspect they're right.