I don’t think current LLMs (or other current AIs) have much, if any, moral value, but shrimp have a decent chance of having moral value. LLMs are designed and trained to mimic human outputs (and fine-tuned with RL). You could train a human who is incapable of feeling pain to act like they’re in pain (say, in circumstances under which a typical person would be in pain). This wouldn’t make them feel pain; they’d still be missing crucial internal functional roles. The same goes for training LLMs to act like they care about things: their words don’t actually indicate that they care about the things they talk about, or about anything at all.
LLMs might care about things anyway, e.g. about avoiding states that would typically be negatively reinforced, if they had internalized the reinforcement signal. But I don’t think current LLMs have done this.
In my view, shrimp are probably more like us than current AIs in the ways that matter intrinsically.
I do think conscious AI is possible, and could come soon, though.
Some other relevant discussion, assessing LLM consciousness according to various theories of consciousness:
Butlin et al. (2023) assess current AI systems against indicator properties of consciousness derived from several scientific theories of consciousness.
Wilterson and Graziano (2021) argue that a specific artificial agent they designed is kind of conscious — both conscious and not conscious — according to Attention Schema Theory.
Thank you, Michael, for your insightful comment and very interesting source material! If you are willing, I’d love to hear your take on this comment thread on the same subject: https://www.lesswrong.com/posts/RaS97GGeBXZDnFi2L/llms-are-likely-not-conscious?commentId=KHJgAQs4wRSb289NN