Second, the kind of mind required to operate as an intelligent agent in the real world likely demands sophisticated cognitive abilities for perception and long-term planning—abilities that appear sufficient to give rise to many morally relevant forms of consciousness.
A problem is that sophisticated cognitive abilities could quite possibly be present without any conscious experience. Some AIs might be p-zombies of a sort, and without a working theory of consciousness we cannot tell at this point.
If AIs are p-zombies of a sort, then it could be a moral mistake to treat them as having moral value: preferences without consciousness might not matter intrinsically, whereas there is a more intuitive case for conscious pleasant and unpleasant experiences mattering in themselves.
I would be curious about the following question: given our uncertainty about consciousness in AIs, what should we do so that things turn out robustly good? It’s not clear that giving AIs more autonomy is robustly good: it may increase the chance of disempowerment (peaceful or violent, as you say), and if AIs have no moral value because they are not conscious, granting them autonomy could lead to quite bad outcomes.