The Brian Tomasik post you link to considers the view that fundamental physical operations may have moral weight (call this view “Physics Sentience”).
[Edit: see Tomasik’s comment below. What I say below is true of a different sort of Physics Sentience view like constitutive micropsychism, but not necessarily of Brian’s own view, which has somewhat different motivations and implications]
But even if true, [many versions of] Physics Sentience [but not necessarily Tomasik’s] doesn’t have straightforward implications about which high-level systems, like organisms and AI systems, also comprise a sentient subject of experience. Consider: on Physics Sentience, a human being touching a stove is experiencing pain, but a pan touching a stove is not. The pan is made up of sentient matter, but this doesn’t mean that the pan qua pan is also a moral patient, another subject of experience that will suffer if it touches the stove.
To apply this to the LLM case:

1. Physics Sentience will hold that the hardware on which an LLM runs is sentient—after all, it’s a bunch of fundamental physical operations.

2. But Physics Sentience will also hold that the hardware on which a giant lookup table runs is sentient, to the same extent and for the same reason.

3. Physics Sentience is silent on whether there’s a difference between (1) and (2), in the way that there’s a difference between the human and the pan (a toy sketch of the contrast follows this list).
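To make the contrast in (1) and (2) concrete, here is a toy sketch in Python (my own illustration, not anything from the thread; all names and examples are made up): two systems with identical input/output behavior, one of which merely retrieves its answers while the other computes them. Both run as ordinary physical operations on ordinary hardware, which is all Physics Sentience speaks to.

```python
# Toy contrast between (1) and (2) above. Purely illustrative:
# a "giant lookup table" stores its answers; the stand-in "model"
# computes them. Same input/output behavior either way.

# (2)-style system: answers are retrieved, not computed.
LOOKUP_TABLE = {
    "2 + 2 = ?": "4",
    "3 * 3 = ?": "9",
}

def lookup_answer(prompt: str) -> str:
    """Pure retrieval: no processing of the prompt's content."""
    return LOOKUP_TABLE[prompt]

# (1)-style system: a trivial computation standing in for an LLM's
# forward pass (obviously nothing like a real transformer).
def computed_answer(prompt: str) -> str:
    left, op, right, _, _ = prompt.split()
    a, b = int(left), int(right)
    return str(a + b) if op == "+" else str(a * b)

# Behaviorally indistinguishable, internally very different. Whether
# that internal difference matters morally is exactly the question
# Physics Sentience leaves open.
for p in LOOKUP_TABLE:
    assert lookup_answer(p) == computed_answer(p)
```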
The same thing holds for other panpsychist views of consciousness, fwiw. Panpsychist views that hold that fundamental matter is conscious don’t tell us anything, by themselves, about which animals or AI systems are sentient. They just say that those systems are made of conscious (or proto-conscious) matter.
Thanks for the clarification!
I linked to Brian Tomasik’s post to provide useful context, but I wanted to point to a more general argument: we do not understand sentience/consciousness well enough to claim LLMs (or whatever) have null expected moral weight.
Ah, thanks! Well, even if it wasn’t appropriately directed at your claim, I appreciate the opportunity to rant about how panpsychism (and related views) don’t entail AI sentience :)
Unlike the version of panpsychism that has become fashionable in philosophy in recent years, my version of panpsychism is based on the fuzziness of the concept of consciousness. My view involves attributing consciousness to all physical systems (including higher-level ones like organisms and AIs) to the degree that they show various properties we think are important for consciousness, such as, perhaps, a global workspace, higher-order reflection, learning and memory, intelligence, etc. I’m a panpsychist because I think at least some attributes of consciousness can be seen, to a non-zero degree, even in fundamental physics. However, I personally would attribute much more consciousness to an LLM than to a rock with the same mass as the machines running the LLM. I think it’s less obvious whether an LLM is more sentient than a collection of computers doing an equal number of more banal computations, such as database queries or video-game graphics.
Hi Brian! Thanks for your reply. I think you’re quite right to distinguish between your flavor of panpsychism and the flavor I was saying doesn’t entail much about LLMs. I’m going to update my comment above to make that clearer, and sorry for running your view together with those others.
No worries. :) The update looks good.