Thanks for writing this!
In my view, everything is sentient in expectation, in the sense that everything has a positive expected moral weight for the reasons described by Brian Tomasik here. So I think the relevant question is not whether LLMs are sentient (they are in expectation), but rather:
What is the expected moral weight of LLMs (as a function of their properties, such as number of parameters)?
What can we do (if anything) to increase the wellbeing of LLMs?
These are obviously super hard questions, but they are so important and neglected that research on them may well be cost-effective.
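To spell out the "in expectation" claim a bit (this decomposition is my own gloss, not something taken from Tomasik's post): for any system $X$ with credence $p = P(X \text{ is sentient}) > 0$,

$$\mathbb{E}[W(X)] = p \cdot \mathbb{E}[W(X) \mid X \text{ is sentient}] + (1 - p) \cdot 0 > 0,$$

so the expected moral weight is positive however small $p$ is, and the interesting question becomes its magnitude, which is what the two questions above are asking.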
The Brian Tomasik post you link to considers the view that fundamental physical operations may have moral weight (call this view "Physics Sentience").
[Edit: see Tomasik's comment below. What I say below is true of a different sort of Physics Sentience view, like constitutive micropsychism, but not necessarily of Brian's own view, which has somewhat different motivations and implications.]
But even if true, Physics Sentience [at least many versions of it, though not necessarily Tomasik's] doesn't have straightforward implications about which high-level systems, like organisms and AI systems, also constitute sentient subjects of experience. Consider: on Physics Sentience, a human being touching a stove is experiencing pain, but a pan touching a stove is not. On Physics Sentience, the pan is made up of sentient matter, but this doesn't mean that the pan qua pan is also a moral patient, another subject of experience that will suffer if it touches the stove.
To apply this to the LLM case:
1. Physics Sentience will hold that the hardware on which LLMs run is sentient; after all, it's a bunch of fundamental physical operations.
2. But Physics Sentience will also hold that the hardware on which a giant lookup table is running is sentient, to the same extent and for the same reason.
3. Physics Sentience is silent on whether there's a difference between (1) and (2), in the way that there's a difference between the human and the pan. (A toy sketch of the contrast between (1) and (2) follows below.)
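To make the contrast between (1) and (2) concrete, here is a toy sketch (entirely my own illustration; the function names, weights, and two-layer setup are made up and come from nothing in the post): a lookup table and a tiny neural network can have identical input-output behavior while doing very different internal computation, and Physics Sentience, as characterized above, doesn't by itself say whether that internal difference matters.

```python
import math

# (1) A tiny neural-network-style computation: structured internal processing.
def tiny_net(x1, x2):
    # One hidden layer with fixed, hand-picked weights (illustrative values only).
    h1 = math.tanh(2.0 * x1 - 1.0 * x2)
    h2 = math.tanh(-1.0 * x1 + 2.0 * x2)
    score = 1.5 * h1 + 1.5 * h2
    return "A" if score > 0 else "B"

# (2) A "giant" (here, tiny) lookup table with exactly the same input-output behavior.
lookup_table = {(x1, x2): tiny_net(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}

def table_model(x1, x2):
    return lookup_table[(x1, x2)]

# Identical behavior, very different internal computation.
for x in ((0, 0), (0, 1), (1, 0), (1, 1)):
    assert tiny_net(*x) == table_model(*x)

# Physics Sentience says the hardware running either one is sentient;
# it is silent on whether the structural difference between them matters.
```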
The same thing holds for other panpsychist views of consciousness, fwiw. Panpsychist views that hold that fundamental matter is conscious don't tell us anything, by themselves, about which animals or AI systems are sentient. They just say that those systems are made of conscious (or proto-conscious) matter.
Thanks for the clarification!
I linked to Brian Tomasik's post to provide useful context, but I wanted to point to a more general argument: we do not understand sentience/consciousness well enough to claim that LLMs (or anything else) have zero expected moral weight.
Ah, thanks! Well, even if it wasn't appropriately directed at your claim, I appreciate the opportunity to rant about how panpsychism (and related views) don't entail AI sentience :)
Unlike the version of panpsychism that has become fashionable in philosophy in recent years, my version of panpsychism is based on the fuzziness of the concept of consciousness. My view involves attributing consciousness to all physical systems (including higher-level ones like organisms and AIs) to the degree they show various properties that we think are important for consciousness, such as perhaps a global workspace, higher-order reflection, learning and memory, intelligence, etc. I'm a panpsychist because I think at least some attributes of consciousness can be seen even in fundamental physics to a non-zero degree. However, I personally would attribute much more consciousness to an LLM than to a rock with the same mass as the machines running the LLM. I think it's less obvious whether an LLM is more sentient than a collection of computers doing an equal number of more banal computations, such as database queries or video-game graphics.
Hi Brian! Thanks for your reply. I think you're quite right to distinguish between your flavor of panpsychism and the flavor I was saying doesn't entail much about LLMs. I'm going to update my comment above to make that clearer, and sorry for running together your view with those others.
No worries. :) The update looks good.