When you say “feeling”, are you referring to conscious experience of the AI, or mechanistic positive and negative signals?
The former. The latter has no moral patienthood I guess
If consciousness, there's super-high uncertainty about what consciousness even is and what the correct ontology for it is. But it can be discussed.
I’ve been reading more about this and I realize there is great disagreement
If you mean positive and negative reward signals, then AI today already runs on those, as you mention.
Of course, but their ‘conscious experience’ of these signals need not agree with how they are coded in the algorithm. They could ‘feel pain from maximizing’ … we just don’t know.
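(For concreteness, here is a minimal sketch of what “reward signals coded in the algorithm” amounts to mechanistically. This is a hypothetical toy Q-learning loop, not any particular system; the states, actions, and reward values are illustrative. The point is that the reward enters the algorithm only as a scalar in an update rule, which says nothing either way about accompanying experience.)

```python
# Toy Q-learning sketch: the "reward signal" is just a number in an update rule.
# States, actions, and reward values here are purely illustrative.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: action 1 moves right, action 0 moves left.
    Reaching the last state yields +1 (a 'positive signal');
    every other step yields -0.01 (a 'negative signal')."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else -0.01
    done = next_state == n_states - 1
    return next_state, reward, done

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action choice.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # The reward appears only here, as a scalar in the temporal-difference
        # update; nothing in the math refers to how the signal "feels".
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
```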