This doesn’t matter if we cannot tell whether the shoggoth is happy or sad, nor what would make it happier or sadder. My point is not that LLMs aren’t conscious; my point is that it does not matter whether they are, because you cannot incorporate their welfare into your decision-making without some way of gauging what that welfare is.
Here’s what I wrote in the post:

It is not possible to make decisions that further LLM welfare if you do not know what furthers LLM welfare. Since you cannot know this, it is safe to ignore their welfare. I mean, sure, maybe you’re causing them suffering. Equally likely, you’re causing them joy. There’s just no way to tell one way or the other; no way for two disagreeing people to ever come to an agreement. Might as well wonder about whether electrons suffer: it can be fun as idle speculation, but it’s not something you want to base decisions around.
Of course, if we can’t ascertain their internal states, we can’t reasonably condition our decisions on them; but that seems to me to be a different question from whether, if they have internal states, those states are morally relevant.
My title was “LLMs cannot usefully be moral patients”. That is all I am claiming.
I am separately unsure whether they have internal experiences. When I meditate on the fact that, if they do have internal experiences, those are separate from what’s being communicated (which is just an attempt to predict the next token based on the input data), I start to suspect that maybe they just don’t have such experiences, or that if they do, they are so alien as to be incomprehensible to us. I’m not sure about this, though. I mostly want to make the narrower claim of “we can ignore LLM welfare”. That narrow claim seems controversial enough around here!
The claim that they can’t be moral patients doesn’t seem to me to be well-supported by the fact that their statements aren’t informative about their feelings. Can you explain how you think the latter implies the former?
They can’t USEFULLY be moral patients. You can’t, in practice, treat them as moral patients when making decisions. That’s because you don’t know how your actions affect their welfare. You can still label them moral patients if you want, but that’s not useful, since it cannot inform your decisions.