The original article by Kristin Andrews distinguishes between the question of the distribution of consciousness (which animals are conscious?) and the question of the dimension of consciousness in a given animal (how is the animal conscious? What does its mental life look like?). It then argues that we should, as a kind of working hypothesis, assume that all animals are conscious and study the dimension question instead, and only then develop a theory of consciousness capable of answering the distribution question.
In my view, you should ‘go all the way’ with this and conclude directly that the ‘distribution question’ is ill-formed (or redundant), and that only questions about how mental lives are structured are meaningful. Any process or activity can semantically be labeled a ‘mental life’, and the ‘dimension question’ is then ultimately a proxy for how similar you think these mental lives are to human mental lives (or to some idealized extrapolation of human mental life, the question then of course becoming what this extrapolation should look like). So if you want, you can assign processes (e.g. behavioral and neural processes) that are structured very differently from human mental lives a low ‘consciousness’ score, and processes very similar to human mental lives a very high score, without supposing an additional on-off property of sentience. In that sense, answering the ‘dimension of consciousness’ question also answers the reformulated ‘distribution’ question, if you then label processes with a very low score as ‘unconscious’.
In my view, ‘sentience’ in the sense of the ‘hard problem’ is a red herring (and a philosophically ill-formed concept). The ethically relevant question is whether a mental life is structured to include interests that our moral values demand we respect.
As an aside, the summary rightfully notes that unreflectively assuming that something is unconscious can lead to ethical concerns. It is therefore a bit jarring that the text seemingly states as fact that modern LLMs are unconscious without giving any reasoning for this (though the text might also be read as assuming this for the sake of answering a hypothetical objection).
Thanks for the great point, PSR! Strongly upvoted, and agreed.