Of course if we can’t ascertain their internal states we can’t reasonably condition our decisions on them, but that seems to me to be a different question from whether, if they have internal states, those are morally relevant.
My title was “LLMs cannot usefully be moral patients”. That is all I am claiming.
I am separately unsure whether they have internal experiences. When I consider that, even if they do have internal experiences, those experiences are separate from what is being communicated (which is just an attempt to predict the next token from the input data), I start to suspect that maybe they simply don’t have such experiences, or that if they do, they are so alien as to be incomprehensible to us. I’m not sure about this, though. I mostly want to make the narrower claim that “we can ignore LLM welfare.” That narrow claim seems controversial enough around here!
The claim that they can’t be moral patients doesn’t seem to me to be well-supported by the fact that their statements aren’t informative about their feelings. Can you explain how you think the latter implies the former?
They can’t USEFULLY be moral patients. You can’t, in practice, treat them as moral patients when making decisions. That’s because you don’t know how your actions affect their welfare. You can still label them moral patients if you want, but that’s not useful, since it cannot inform your decisions.