I think I have a similar question to Will: if there can be preferences or welfare without consciousness, wouldn’t that also apply to plants (+ bacteria etc)? (and maybe the conclusion is that it does! but I don’t see people discussing that very much, despite the fact that unlike for AI it’s not a hypothetical situation) It’s certainly the case “that their lives could go better or worse, or their concerns and interests could be more or less respected”.
Along those lines, this quote seemed relevant: “our concepts were pinned down in a situation where there weren’t a lot of ambiguous cases, where we had relatively sharp distinctions between, say, humans, nonhuman animals, and inanimate objects” [emphasis not mine] Maybe so, but there’s a big gap between nonhuman animals and inanimate objects!
It’s an excellent question! There are two ways to go here:
1. Keep the liberal notion of preferences/desires, one that seems like it would apply to plants and bacteria, and conclude that moral patienthood is very widespread indeed. As you note, few people go for this view (I don’t either). But you can find people bumping up against it:
Korsgaard: “The difference between the plant’s tropic responses and the animal’s action might even, ultimately, be a matter of degree. In that case, plants would be, in a very elementary sense, agents, and so might be said to have a final good.” (quoted in this lecture on moral patienthood by Peter Godfrey-Smith)
2. Think that what patienthood requires is a more demanding notion of “preference”, such that plants don’t satisfy it but dogs and people do. And there are ways of making “preference” more demanding besides “conscious preference”. You might think that morally-relevant preferences/desires have to have some kind of complexity, or some kind of rational structure, or something like that. That’s of course quite hand-wavy—I don’t think anyone has a really satisfying account.
Here’s a remark from Francois Kammerer, who thinks that moral status cannot be about consciousness (which he thinks does not exist), who argues that it should instead be about desire, and who nicely lays out the ‘scale’ of desires at various levels of demandingness:
On the one extreme, we can think of the most basic way of desiring: a creature can value negatively or positively certain state of affairs, grasped in the roughest way through some basic sensing system. On some views, entities as simple as bacteria can do that (Lyon & Kuchling, 2021). On the other hand, we can think of the most sophisticated ways of desiring. Creatures such as, at least, humans, can desire for a thing to thrive in what they take to be its own proper way to thrive and at the same time desire their own desire for this thing to thrive to persist – an attitude close to what Harry Frankfurt called “caring” (Frankfurt, 1988). Between the two, we intuitively admit that there is some kind of progressive and multidimensional scale of desires, which is normatively relevant – states of caring matter more than the most basic desires. When moving towards an ethic without sentience, we would be wise to ground our ethical system on concepts that we will treat as complex and degreed, and even more as “complexifiable” as the study of human, animal and artificial minds progresses.
Here’s a remark from Francois Kammerer, who thinks that moral status cannot be about consciousness (which he thinks does not exist)
Nitpick: Kammerer (probably) does not think consciousness does not exist. He’s an illusionist, so he thinks consciousness is not phenomenal, and hence that phenomenal consciousness, specifically, does not exist. That just means he thinks the characterization of consciousness as phenomenal is mistaken. He could still believe moral status should be about consciousness, just not phenomenal consciousness.
True, I should have been more precise—by consciousness I meant phenomenal consciousness. On your (correct) point about Kammerer being open to consciousness more generally, here’s Kammerer (I’m sure he’s made this point elsewhere too):
Illusionists are not committed to the view that our introspective states (such as the phenomenal judgment “I am in pain”) do not reliably track any real and important psychological property. They simply deny that such properties are phenomenal, and that there is something it is like to instantiate them. Frankish suggests calling such properties “quasi-phenomenal properties” (Frankish 2016, p. 15)—purely physico-functional and non-phenomenal properties which are reliably tracked (but mischaracterized as phenomenal) by our introspective mechanisms. For the same reason (Frankish 2016, p. 21), illusionists are not committed to the view that a mature psychological science will not mention any form of consciousness beyond, for example, access-consciousness. After all, quasi-phenomenal consciousness may very well happen to have interesting distinctive features from the point of view of a psychologist.
But on your last sentence:
He could still believe moral status should be about consciousness, just not phenomenal consciousness.
While that position is possible, Kammerer does make it clear that he does not hold it, and that he thinks it untenable for reasons similar to those he gives against grounding moral status in phenomenal consciousness (cf. p. 8).
Hmm, I think any account of desire as moral grounds, which Kammerer suggests as an alternative, is going to face indeterminacy and justification objections like those Kammerer raises against (quasi-)phenomenal consciousness as moral grounds.
Indeterminacy: Kammerer talks about a multidimensional scale of desires. Why isn’t desire just indeterminate, too? Or, we can think of (quasi-)phenomenality as a multidimensional scale, too.[1]
Justification: Our own desires also appear to us to be phenomenal and important (probably in large part) because of their apparent phenomenality (tied to feelings, like fear, hunger and physical attraction, or otherwise conscious states, e.g. goals or moral views of which we are conscious). If and because they appear important due to their apparent phenomenality, they would also be undermined as normative grounds.[2] Kammerer talks about us finding “unconscious pains” to not matter intrinsically (or not much, anyway), but we would find the same of “unconscious desires”.[3]
For each creature, and even more for each species, there will be differences (sometimes slight, sometimes big) in the kinds of broadcasting of information in global workspaces, or in the kind of higher-order representation, etc., that they instantiate. The processes they instantiate will be similar in some respects to the processes constituting phenomenal consciousness, but also dissimilar in others; and there will be dissimilarities at various levels of abstractions (from the most abstract – the overall functional structure implemented – to the most concrete – the details of the implementation). Therefore, what these creatures will have is something that somewhat resembles (to various degrees) the “real thing out there” present in our case. Will the resemblance be such that the corresponding state also counts as phenomenally conscious, or not – will it be enough for the global broadcasting, the higher-order representation, etc., to be of the right kind – the kind that constitutes phenomenal consciousness? It is hard to see how there could always be a fact of the matter here.
The reason why we were so strongly inclined to see sentience as a normative magic bullet in the first place (and then used it as a normative black box) was that the value of some phenomenal states seemed particularly obvious and beyond doubt. While normative skepticism seemed a credible threat in all kinds of non-phenomenal cases, with valenced phenomenal states – most typically, pain – it seemed that we were on sure grounds. Of course, feeling pain is bad – just focus on it and you will see for yourself! So, in spite of persisting ignorance regarding so many aspects of phenomenal consciousness, it seemed that we knew that it had this sort of particularly significant intrinsic value that made it able to be our normative magic bullet, because we could introspectively grasp this value in the most secure way. However, if reductive materialism/weak illusionism is true, our introspective grasp of phenomenal consciousness is, to a great extent, illusory: phenomenal consciousness really exists, but it does not exist in the way in which we introspectively grasp and characterize it. This undercuts our reason to believe that certain phenomenal states have a certain value: if introspection of phenomenal states is illusory – if phenomenal states are not as they seem to be – then it means that the conclusions of phenomenal introspection must be treated with great care and a high degree of suspicion, which entails that our introspective grasp of the value of phenomenal states cannot be highly trusted.
That phenomenal states seem of particular significance compared to neighboring non-phenomenal states manifests itself in the fact that we draw a series of stark normative contrasts. For example, we draw a stark normative contrast between phenomenal states and their closest non-phenomenal equivalent. We care a lot about the intense pain that one might phenomenally experience during a medical procedure – arguably, because such pain seems really bad. On the other hand, if, thanks to anesthesia, a patient does not experience phenomenally conscious pain during surgery, their brain might still enter in nonphenomenally conscious states that are the non-phenomenal states closest to phenomenal pain (something like “subliminal pain” or “unconscious pain”) – but we will probably not worry too much. If indeed we fully believe these states to be non-phenomenal – to have no associated subjective experience, “nothing it’s like” to be in them – we will probably judge that they have little intrinsic moral relevance – if at all – and we will not do much to avoid them. They will be a matter of curiosity, not of deep worry.
While that position is possible, Kammerer does make it clear that he does not hold it, and that he thinks it untenable for reasons similar to those he gives against grounding moral status in phenomenal consciousness (cf. p. 8).
Interesting. I guess he would think desires, understood functionally, are not necessarily quasi-phenomenal. I suspect desires should be understood as quasi-phenomenal, or even as phenomenal illusions themselves.
If unpleasantness, in phenomenal terms, were just (a type or instance of the property of) phenomenal badness, then under illusionism, unpleasantness could be an appearance of badness, understood functionally, and so the quasi-phenomenal counterpart of, or an illusion of, phenomenal badness.
I also think of desires (and hedonic states and moral beliefs, and some others) as appearances of normative reasons, i.e. things seeming good, bad, better or worse. This can be understood functionally or representationally. Here’s a pointer to some more discussion. These appearances could themselves be illusions, e.g. by misrepresenting things as mattering or as having phenomenal badness/goodness/betterness/worseness. Or, they could dispose beings that introspect on them in certain ways to falsely believe in some stance-independent moral facts, like that pleasure is good, suffering is bad, or that it’s better that desires be satisfied. But there are no stance-independent moral facts, and those beliefs are illusions. Or they dispose those who introspect on them to believe in phenomenal badness/goodness/betterness/worseness.
Thanks! - super helpful and interesting, much appreciated.
I suppose my takeaway, still setting consciousness aside, is along these lines: (a) ‘having preferences’ is not a sufficient indicator for what we’re trying to figure out; (b) we are unlikely to converge on a satisfying/convincing single dimension or line in the sand; (c) moral patienthood is therefore almost certainly a matter of degree (although we may feel like we can assign 0 or 1 at the extremes), which fits my view of almost everything in the world; (d) empirically coming up with concrete numbers for those interior values is going to be very hard, and reasonable people will disagree, so everyone should be cautious about making strong or universal claims; and (e) this all applies to plants just as much as to AI, so they deserve a bit more consideration in the discussion.
When is Plant Welfare Debate Week??