Which animals realize which types of subjective welfare?

Summary

In a previous piece, I defined and discussed four potential types of subjective welfare: hedonic states, felt desires, belief-like preferences and choice-based preferences. Setting aside the moral question of which types of welfare matter intrinsically, humans are of course capable of all four. Many other animals seem capable of (conscious) hedonic states and felt desires, and plenty has been written on the topic (e.g. Muehlhauser, 2017 and Waldhorn et al., 2019), but this is not entirely uncontroversial. I will not focus on these. Instead, my focus and key takeaways here are the following:

  1. It doesn’t seem too unlikely that belief-like preferences are also available to animals with conscious hedonic states and conscious felt desires, and likelier still in mammals (and probably birds) in general, with these conscious hedonic states and felt desires being or grounding the belief-like preferences. However, more sophisticated capacities could be required (more).

  2. I make a probably controversial case for global belief-like preferences — e.g. belief-like preferences about the individual’s own life as a whole — in other animals, as potentially degenerate or limiting cases: global belief-like preferences could take into account only the immediate present, and so conscious hedonic states or felt desires could qualify as or ground them (more).

  3. It’s unclear if choice-based preferences are also available to animals with conscious hedonic states and conscious felt desires; this could be fairly sensitive to our characterization of choice-based preferences (more).

Acknowledgements

Thanks to Brian Tomasik, Derek Shiller and Bob Fischer for feedback on earlier drafts. All errors are my own.

Absence of evidence is a weak argument

Before commenting further on specifics, we should be careful not to treat absence of evidence for a given capacity in an animal as strong evidence for its absence. No one may have set up the right experiment or observed the animal under the right conditions to find evidence for or against a given capacity. The number of cognitive capacities recognized in other animals has also grown substantially over the past decades, although there may also be false positives, overinterpretation and publication bias. This could warrant assigning at least non-tiny probabilities, like 10%, to the possibility that an animal has a given capacity, unless there are specific strong arguments against it.

Belief-like preferences in other animals

A thinking rat (Credit: ChatGPT’s image generator, DALL·E)

Recall my definition and illustrations from a previous piece:

Belief-like preferences: judgements or beliefs about subjective value, worth, need, desirability, good, bad, betterness or worseness. If I judge it to be better to have a 10% chance of dying tomorrow than to live the rest of my life with severe chronic pain — in line with the standard gamble approach to estimating quality-adjusted life years (QALYs) — then I prefer it on this account. Or, I might judge my life to be going well, and am satisfied with my life as a whole — a kind of global preference (Parfit, 1984, Plant, 2020) —, because it’s going well in work, family and other aspects important to me. Other examples include:

  1. our everyday explicitly conscious goals, like finishing a task or going to the gym today,

  2. more important explicit goals or projects, like getting married, raising children, helping people,

  3. moral beliefs, like that pain is bad, knowledge is good, utilitarianism, against harming others, about fair treatment, etc.,

  4. general beliefs about how important things are to us, other honestly reported preferences, overall judgements about possible states of the world.
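As an aside, the standard gamble mentioned in the definition above can be made explicit with a small worked example (my illustration, not part of the quoted definition), using the usual convention that death = 0 and full health = 1. Preferring a gamble with a 10% chance of death tomorrow (and a 90% chance of continued full health) over a life with severe chronic pain implies

$$
u(\text{severe chronic pain}) < 0.9 \times u(\text{full health}) + 0.1 \times u(\text{death}) = 0.9 \times 1 + 0.1 \times 0 = 0.9,
$$

and the probability $p^*$ at which one is exactly indifferent would pin down the utility exactly: $u(\text{severe chronic pain}) = p^*$.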

Howe (1994), Berridge (2009), and Berridge and Robinson (2016) use the term cognitive desires for a similar concept. Howe (1994) also uses desiderative beliefs, and Berridge (2009), ordinary wanting.

It seems likely (e.g. with 80% to ~100% probability) to me that many other animals have (capacities worth calling) beliefs, including even many insects and also probably some existing AI systems.[1] It also seems reasonable to assign conscious animals a substantial probability of having beliefs about value, with some such beliefs being conscious or at least roughly represented by conscious states, and so qualifying as belief-like preferences. In particular, as discussed in a previous piece, (conscious) hedonic states and (conscious) felt desires (if any), as subjective appearances of value or reasons, may themselves already qualify as or generate them. For example, if believing something is bad means representing it in some kind of negative light, being motivated to avoid, stop or prevent it, or otherwise acting as if it’s bad (see Schwitzgebel, 2023, section 1 for various accounts of belief), then hedonic states and aversive felt desires seem to do these things[2] and should qualify. Conscious hedonic states and felt desires could play several of the (functional) roles by which we would normally characterize beliefs, concepts and judgements. Belief-like preferences could therefore be quite common across animals, realized by all animals with conscious hedonic states or conscious felt desires.

The hedonic states and felt desires of animals could also qualify as the conscious recognition and application of their concepts of good and bad: when they believe something is bad, they feel bad. And this feeling is also a conscious judgement that something is bad.

This presents a basic case, but we could require more for belief-like preferences. I will discuss some possibilities, their plausibility and some relevant evidence in other animals.

Requiring conscious thought

If we expect belief-like preferences to be conscious (or that they could be conscious), we might require conscious thought, e.g. the individual must think to themselves that something is bad, good, better, worse, etc. What would that mean?

Conscious thoughts seem to be a special case of experiencing sensations caused by our beliefs and desires (Carruthers, 2013). There seems to be a lot humans can do with them that’s unavailable to or very limited in other animals.

We can reason and deliberate in conscious thought. However, whether or not other animals can do these, I wouldn’t require them in my account of belief-like preferences, because some thoughts are basic and not the result of explicit reasoning or deliberation at all. It’s usually immediately obvious when something hurts, and there are simple and direct ways to represent that in thought, like thinking “Ow!” or “That hurts!”. We don’t need to explicitly weigh the evidence for and against in thought before deciding that something hurts.

Given this possible simplicity, whatever causal, functional or representational role conscious thought is meant to play here between belief or desire and conscious representation, it’s plausible to me that conscious hedonic states and felt desires can play that role on their own. Conscious unpleasantness or aversion is just like a conscious expression that something is bad. The mode of representation could be conscious unpleasantness or aversion rather than a sensory one, like inner speech.

There also appear to be otherwise basically normal humans who have no or extremely limited conscious mental imagery, including no visual imagery (aphantasia) or no inner speech (Austin & Jess, 2020, Quiet Mind Inside, 2020a, Quiet Mind Inside, 2020b, Krempel, 2023). So either they wouldn’t have belief-like preferences, or, much more plausibly and less arbitrarily, we should also count their thinking done with external aids, like speaking out loud, drawing, writing or sign language. If so, the fact that thinking happens internally doesn’t seem to matter morally, and saying “Ow!” or “That hurts!” out loud should count, too.

And besides through conscious hedonic states and felt desires, conscious animals can also be indirectly conscious of their beliefs and desires in general by experiencing the sensations caused by them, e.g. by hearing their own vocalizations or sensing their own body movements. Many other animals have their own ways of saying “That hurts!”, whether by vocalization or gesture, of which they can also be conscious.

So, it doesn’t seem like conscious thought should draw an important qualitative line here between humans and other animals.

Requiring more from judgement

Perhaps mere consciousness of some consequences of their beliefs or desires isn’t enough, and instead, they must consciously judge something to be the case, e.g. consciously judge a situation to be bad. What does that take?

As already mentioned, conscious unpleasantness and aversion seem like conscious judgements that something is worse or bad. Even without pinning down exactly what’s required of conscious judgement, this may already be enough.

Or, we might require some metacognition. Perhaps, they must be conscious of their degree of belief or confidence. There’s evidence of varying strength for metacognition across a wide variety of species, including some invertebrates.[3] But this could be unconscious. Still, some animals may in fact have specifically epistemic emotions and metacognitive feelings like curiosity, surprise, uncertainty, confidence or feeling of knowing, or appraisals (felt evaluations) of the value of cognitive effort (Carruthers and Williams, 2022, Carruthers, 2021, Goupil and Proust, 2023, Vogl et al., 2021) that track epistemic states and confidence, and could indeed be conscious. And they would presumably be about things that matter subjectively to the animal, e.g. to choose between options with differing attractiveness or unattractiveness, because it doesn’t seem like there’d be any other adaptive use for them.

Another possible requirement could be the (conscious?) self-attribution of a belief, goal, desire or preference, e.g. “I believe p”, which would presumably require a concept of a belief, goal, desire or preference, a concept of self and one’s own mind. This may not include many species, only those with enough theory of mind and capacity for higher-order consciousness.

But when someone says or thinks “That’s bad”, that seems to be a judgement. Where exactly is the self-attribution of belief? It’s either unconscious or not linguistic. If it’s conscious but not linguistic, how is it experienced? And do other animals have experiences that can play similar roles? And if it’s unconscious, why does it matter?

Blasimme and Bortolotti (2010, p.84) make a more general objection to beliefs requiring the concept of belief:

We don’t normally think that in order to have something we need to possess the concept of that something

For example, a dog doesn’t need to have a concept of a stomach to have a stomach. Requiring the self-attribution of something would go even further. However, here, we are considering the possibility that judging requires a concept of belief and its self-attribution, not that belief requires these. Judging could be more demanding than belief.

Requiring more from normative concepts or reasons

As discussed above, hedonic states and felt desires may already be consciously felt applications of normative concepts.

However, there is also another sense in which individuals could potentially recognize and apply a concept of bad: if they can generalizably discriminate subjective badness in particular from its absence, a kind of higher-order discrimination. Rats and pigs seem able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[4]

I could not find any similar studies in other animals. Still, I would guess the discrimination of anxiety would apply generally across mammals and birds. I expect a rough upper bound on the requirements for similar behaviour to be: having working memory and voluntary/top-down attention control, with the feeling of anxiety entering working memory and so being available as a discriminative stimulus (although unconscious discrimination of unpleasantness or aversion is also possible). According to the review Nieder, 2022, there is good evidence for working memory and the voluntary/top-down control of attention at least across mammals and birds, some suggestive evidence for both in some fish, and evidence of working memory in honeybees. On the other hand, Nieder (2022) cited only one negative result for any species, specifically for working memory in archerfish.

I also couldn’t find any studies even on the generalization between anxiety or fear and other unpleasant or aversive states, like physical pain, disgust, anger, hunger or frustration, independently of any anxiety or fear those states involve. Such generalizability between all unpleasant and/or aversive states, perhaps weaker the less similar the states, doesn’t seem very unlikely, if there’s some common feeling to unpleasantness or aversion regardless of its cause.

Putting these results into perspective, humans often believe that what we find unpleasant or aversive is bad, including often the unpleasantness or aversion themselves. Our concepts of bad and some other normative concepts could roughly be elaborations of and generalizations from other animals’. Our moral concepts may also be based in part on our moral emotions (Walsh, 2021), some of which also seem to appear in other animals. Even concerns for fairness and altruistic concern for others are found in some other animals, as inequity aversion and emotional empathy/contagion that motivates helping behaviour. See Pérez-Manrique & Gomila, 2022 for a comprehensive review finding evidence for emotional contagion and/or helping behaviour across a fairly wide variety of species. In particular, there is good evidence for emotional contagion driving helping behaviour in rats, who are generally well-studied.[5] Furthermore, the apparently selective emotional contagion of hens to the distress of their chicks (Edgar et al., 2011) and not of familiar adult conspecifics (Edgar et al., 2012) also seems best explained evolutionarily as motivating protective responses. Anger as a moral emotion also seems available to animals who aggressively protect their offspring but are less aggressive without offspring. Some other moral emotions also seem plausible in other animals.[6]

Still, we could require more. It’s unclear if what rats do, say, extends to a more general concept of a reason, on which Silverstein (2017)’s account of normative self-governance is based.[7] At first, this may seem far too abstract for nonlinguistic animals, but, again, we should consider what they themselves care about and whether they have a concept that serves a similar functional role in decision-making. Do they have concepts capturing pleasure and unpleasantness together, and appetitive desire and aversion together, or all of them together under a general concept of a reason? As far as I know, there’s no evidence either way. Generalization between all hedonic states doesn’t seem very likely to me, because pleasure and unpleasantness just feel different.[8] However, the animal’s own mechanism of motivational salience — which drives attention to stimuli and so, in one way, makes them subjectively important — could be one about which an animal could more plausibly have beliefs and so form concepts, but this would be a minimal theory of (own) mind, a relatively sophisticated capacity.

Even if they have reasons and those reasons are weighed together, e.g. via attention, motivational salience or even, say, by alternatingly attending to different desires and reasons, they may not use any concept of reason to do this. A further possibility is that they have a concept of a reason, but rather than act on the basis of these concepts, they act instead directly on the basis of the reasons themselves. It’s not clear if they’d form belief-like preferences in this case.

It’s not obvious either way that we should require them to have a general concept of a reason if they still have concepts for kinds of reasons, e.g. of bad, or even of a specific bad part of an experience, like anxiety, as rats apparently do.

We could further require that the preferences be based on (normative) principles or standards, like Korsgaard (2010)’s account of normative self-government. However, her account seems too strict to apply even to many humans in many instances, and so, in my view, implausible as grounds for belief-like preferences. Moral particularists, virtue ethicists (Haidt & Joseph, 2004[9], Athanassoulis, 2010[10], Frankish, 2021[11]) and some others reject principles or standards, and are guided primarily by direct intuition, “gut” or feelings, or at least decide on their basis after reflection, e.g. Do the math, then burn the math and go with your gut. Few or no other animals may “do the math” or otherwise use principles or standards, but many seem guided by their feelings, guts or intuitions. The process could be essentially the same, with other animals just being the limiting or degenerate case where no formal principles or standards are applied at all. And again, humans often apply no principles or standards at all before judging. So, requirements like Korsgaard’s normative self-government seem too demanding to be very likely to me.

Requiring beliefs to be less ambiguous

One issue with treating (conscious) hedonic states or felt desires like actual beliefs, or as grounding or representing corresponding beliefs, is that it can lead to apparent conflicts in beliefs. Someone could believe going to work is good, in the typical cognitive sense, and find it pleasant or be attracted to it in the attentional sense, but they could instead find it unpleasant or aversive. Their feelings can also change over time. Can someone believe something is both good and not good (or even bad) simultaneously? In general, can someone believe both p and not p simultaneously?

There are multiple responses to this.

  1. The beliefs are actually about (slightly) different things. Work (or an aspect of work) is good in one way, and not good or bad in another way. Work could be bad for someone in the moment based on their feelings about it while or before working, but good for them in the long term. Or, parts of their job could be bad — unpleasant or aversive —, but not the job as a whole.

  2. The beliefs come from separate systems or processes, so any disagreement is more like interpersonal disagreement instead of one individual holding contradictory beliefs.

  3. We can just have directly inconsistent beliefs. In general, we can have logically inconsistent beliefs, and directly holding p and not p is a special case. This can happen in practice due to framing, e.g. asking someone a yes-or-no question in two different ways but with identical meanings.

  4. Hedonic states and felt desires aren’t beliefs and don’t directly ground or represent beliefs.

  5. Instead, hedonic states and felt desires could just be or could just ground appearances. Appearances can conflict, i.e. things can seem multiple ways, and beliefs may (typically) result from the weighing of appearances. Beliefs may themselves also be types of appearances, but not all appearances are beliefs.

  6. Other animals could still have beliefs about their hedonic states and felt desires or that otherwise take them into account.

  7. Why doesn’t the problem apply symmetrically to the typical cognitive beliefs? We’re saying cognitive beliefs and feelings can’t both be (or directly ground or represent) beliefs because they can disagree in an individual. Why not deny that cognitive beliefs are (or directly ground or represent) beliefs instead?

Assigning credences

Taken together, considering both my uncertainty about the standards for defining belief-like preferences and the capacities of various animals, it doesn’t seem very unlikely that other animals — and especially mammals and birds — have belief-like preferences. Assigning credences seems fairly arbitrary at this point, given my uncertainty not just about animals’ capacities but also about the target itself. I’ll make an attempt anyway, with fairly wide ranges.

As I mentioned above, it seems likely (e.g. with 80% to ~100% probability) to me that many other animals have (capacities worth calling) beliefs, including even many insects. But belief-like preferences aren’t just beliefs and might not be beliefs at all. They are beliefs — or perhaps just appearances — of value or reasons. Plausibly, we should also require them to be consciously experienced by the animal, and perhaps specifically as conscious judgements. Conscious hedonic states and felt desires could already qualify as conscious judgements and conscious representations of beliefs about value or reasons, and are themselves appearances of value or reasons. This was the basic case.

I find it reasonably likely that the basic case is enough, but it also seems reasonably likely that it misses something important in what it means to judge, e.g. metacognition. It’s also possible that more general concepts of good and bad, or of a reason, are required and must be used in the right way. In that case, beliefs in general, desires in general, and hedonic states and felt desires together wouldn’t be enough.

I’d guess there’s a 30-80% chance that conscious hedonic states and felt desires (and simpler/​more common capacities like beliefs and desires) are enough for belief-like preferences, i.e. that (almost) any animal with conscious hedonic states and felt desires has belief-like preferences.

For example, I’m quite confident that mammals generally have conscious hedonic states and felt desires, e.g. 80-95%. Treating the probabilities as independent, this gives a lower bound range of 80%×30%=24% to 95%×80%=76%.
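To make the interval arithmetic explicit, write $p_c$ for the probability that mammals have conscious hedonic states and felt desires and $p_s$ for the probability that these suffice for belief-like preferences (the symbols are just mine, for illustration). Treating the two as independent and multiplying the matching endpoints:

$$
p_c \in [0.80,\, 0.95], \quad p_s \in [0.30,\, 0.80] \;\Rightarrow\; p_c \, p_s \in [0.80 \times 0.30,\; 0.95 \times 0.80] = [0.24,\, 0.76].
$$

This is a lower bound on the probability of belief-like preferences in mammals, since routes more demanding than conscious hedonic states and felt desires could still deliver them, as discussed next.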

However, mammals seem more impressive in multiple ways than just having (conscious) hedonic states and felt desires, including rats, who, I’d guess, would be near the lower end of sophistication among mammals, given their relatively small brains. Rats seem able to generalizably discriminate anxiety from its absence. Still, it’s not clear that rats do so for unpleasantness or aversion in general, and it seems unlikely that they have a concept of a reason that they would generalizably discriminate similarly. I’d also expect many mammals to have epistemic emotions or metacognitive feelings, and would expect some in rats. So, I’d tentatively guess mammals generally have belief-like preferences with probability 40-90%.

Birds don’t seem too far behind. They don’t seem much less likely to be conscious at all or to have conscious hedonic states and felt desires. I also expect them to generalizably discriminate anxiety from its absence. Perhaps 30-80% for belief-like preferences across birds.

The probabilities would be lower in reptiles, amphibians, fishes, cephalopods, arthropods and other invertebrates, but at least around 30-80% of their probabilities of having conscious hedonic states and felt desires at all. For some estimates of the probabilities of consciousness/​sentience across species, see Muehlhauser, 2017, section 4.2 and Duffy, 2023.

Global belief-like preferences

On the other hand, other animals may seem especially unlikely to have global belief-like preferences, i.e. belief-like preferences about their lives as a whole (Parfit, 1984), like life satisfaction (Plant, 2020), quality-adjusted life years (QALYs) and generalizations of QALYs to include extrinsic goals (Hazen, 2007), or other preferences about the state of the world as a whole or all-things-considered, like an individual’s moral beliefs, their utility function or their preferences over outcomes, prospects or actions. These contrast with local preferences, which are narrower in scope, like preferences just about your job in general, your current project, that a meeting goes well, or scoring a goal in a soccer match.

About global preferences in other animals, Plant (2020) writes and then further defends the following:

Many sentient entities seem incapable of making these sorts of judgements, such as non-human animals or humans with cognitive disabilities. Such evaluations require complicated cognitive machinery that these beings conspicuously lack; to make an overall evaluation, one must decide which standard(s) you’re going to use to evaluate your life and then, taking all the various bits and pieces of your life in aggregate, make a judgement. We might, for instance, believe that dogs can feel pleasure and pain, have beliefs and desires, likes and dislikes, but nevertheless doubt they are capable of deciding how satisfied they are with their lives as a whole.

He claims self-awareness is required, and proposes mirror self-recognition as a minimal bar to pass, which few species do. And if global preferences are as Plant describes them above, I’d set the bar even higher: narrative identity, mental time travel and/or counterfactual thinking, or, more broadly, autonoetic consciousness, allowing them to recall and judge their pasts and to imagine and judge possible futures.

However, there are at least a few things we can say in defense of global preferences in other animals, to which Plant (2020) has not already replied. I think it’s possible that global (belief-like) preferences — or at least things worthy of being called global preferences and treated similarly — don’t require all of what Plant expects of them or the mental faculties I just listed.

First, other animals may have global (belief-like) preferences that happen to take into account only their immediate concerns. These could just be their hedonic states or overall felt desires in response to what’s currently happening to and around them, or related beliefs. It’s coherent and possible in principle — even if not very psychologically plausible in practice — for a human to report their life satisfaction based just on how they’re feeling at the moment and believe it. Perhaps more plausibly, if a human had severe amnesia, both retrograde and anterograde, forgetting their past and being unable to form many new memories, they might judge their life so far on the basis of how they’re feeling at the time. They could still have global preferences about their futures beyond the immediate, but life satisfaction is a judgement about life so far (and perhaps where they expect it to go). It’s unclear how odd or objectionable the implications of counting hedonic states or felt desires as global preferences in other animals would be.[12]

Or, other animals’ global preferences could be their particular dispositions to generate hedonic states or felt desires in response to specific circumstances. Humans’ global preferences are probably also just our dispositions to think certain things, and are otherwise rarely conscious. How often do people think about their life satisfaction or what kinds of lives they want to live, say? If someone recently and unexpectedly lost their child, we should be able to say they are worse off according to their global preferences, as long as they would judge their own life badly, even if they just haven’t yet because they’ve been too preoccupied with the loss. Plant (2020) also agrees with using hypothetical rather than just actual judgements.

Plant (2020) also considers but ultimately rejects the proposal of imagining how animals would report their life satisfaction if they could, basically because it’s too large a departure in capacities and the kind of being they are. I’m inclined to agree with him on this, but I’m not sure.[13]

Finally, it’s not clear that it should be required that they decide by what standards to judge, or that that even picks out something special. The decision doesn’t seem to make the preference a different kind of thing; it’s still just another belief-like preference, but caused by a decision or an intention. It seems enough to be able to imagine, or even just recognize presentations or descriptions of, their life so far or possible futures for themselves (attributing those to themselves) and just judge them, and how pleasant, unpleasant or attractive those presentations feel to them could count as such judgements.[14] Now, other animals might or might not recall their pasts and imagine their futures and judge them even in this way — that’s mental time travel — or even properly attribute the events to themselves if presented with them, but they can judge the present, and as I just argued, that might be enough, giving them extremely short-sighted global preferences.

So, it seems to me there is at least some case for many other animals having global preferences, enough to not entirely dismiss the possibility, but also not enough for great confidence that they do have them.

There’s also a separate question about whether global belief-like preferences should even be granted more moral weight than other belief-like preferences. I’d guess they’re the same kinds of belief-like preferences, realized in essentially the same ways, just about different things. When someone is deciding what food they prefer or what kind of life would be best for themself, essentially the same neurological and mental processes are probably (or can be) involved.

Choice-based preferences in other animals

Recall my definition and illustrations from a previous piece:

Choice-based preferences: our actual choices, as in revealed preferences, or what we would choose.[15] If I choose an apple over an orange when both are available in a given situation, then, on this account and in that situation, I prefer the apple to the orange.[16] In order to exclude reflexive and unconscious responses (even if learned), I’d consider only choices under voluntary/​top-down/​cognitive control, driven (at least in part) by reasons of which the individual is conscious, (conscious) intentions[17] and/​or choices following (conscious) plans (any nonempty subset of these requirements). Conscious reasons could include hedonic states, felt desires or belief-like preferences. For example, the normal conscious experience of pain drives behaviour in part through executive functions (cognitive control), not just through (apparently) unconscious pain responses like the withdrawal reflex.

Heathwood (2019) calls these behavioral desires and argues against their moral importance, because they don’t require any genuine attraction to the chosen options (or aversion to the alternatives).

As I argued in that piece, based on Heathwood (2019), it seems unlikely that choice-based preferences matter much intrinsically at all, but in case they do, which other animals have them, if any?

It seems to me that animals meet some plausible standards for choice-based preferences. Many can act through voluntary/​top-down control, guided by their conscious felt desires or hedonic states, assuming these are conscious at all. If that’s all that’s required, then effectively all animals with conscious hedonic states or felt desires and top-down control could have choice-based preferences.

However, we might require intentions or plans, or even that the animal be conscious of them. Which other animals have intentions or plans? Which of them are conscious of their intentions or plans?

There is evidence for anticipatory planning in some other animals, but so far no evidence for strategic planning, according to Selter (2020). There’s evidence for mental time travel, prospective reasoning and future planning in corvids (crows, ravens, jays, magpies) and nonhuman apes (Carruthers, 2019, p. 26-29, Corballis, 2019, Zentall, 2013), but these animals are not exploited by humans in very large numbers, so they’d probably be difficult to help cost-effectively. Chickens may demonstrate some self-control to delay gratification (Abeyesinghe et al., 2005, cited by Marino (2017)), but I doubt this requires planning. Navigation may involve planning or mental simulation in general, like hippocampal preplay in rats (Dragoi & Tonegawa, 2011, Lai et al., 2023), but this could be unconscious like blindsight, if it does not involve conscious mental imagery, whether visual,[18] motor, proprioceptive, tactile or something else.

On the other hand, even honey bees communicate with each other about the direction, distance and quality of opportunities via the waggle dance. If they’re conscious at all, then the honey bees who observe such dances are effectively conscious of a rough plan, i.e. where to go, which they may or may not follow. Perhaps consciousness of plans is most useful in nonhuman animals for the communication of plans.

Stepping back, it seems that many other animals have plans and intentions of some kind. However, it’s not clear which animals are conscious of them at all or how often, and I’m not aware of much evidence either way.

Multiple lines

Even if other animals turn out to have choice-based preferences, belief-like preferences and even global preferences, humans seem far ahead in our capacities for them, or at least in the number and depth of the considerations we can bring to bear. We might have more morally valuable versions of these types of preferences, or it could be vague whether animals have them at all, or the differences could otherwise matter a lot.

We recognize far more concepts and far more sophisticated concepts. We’re more self-aware: we have self-narratives, introspect more and more flexibly, and have greater social self-awareness. We have fine and flexible control over highly compositional and expressive representations through our uses of language, mental time travel and scenario building. Whatever versions of these capacities other animals have, when they even have any at all, seem rarely used or very impoverished (e.g. Carruthers, 2013). We use ours frequently, and they facilitate several other potentially important capacities in us that are likely absent or rare in other animals. We can decide by what standards to judge our lives and more clearly have global preferences at all (Plant, 2020). We can deliberate in normative terms, using principles and standards, about what we should do (Korsgaard, 2010). We can plan strategically (Selter, 2020). We have self-authorship: we can decide what kinds of people we want to be and change ourselves and our values (Latyshev, 2023).

This gives us multiple standards, with different bars to meet. How should we decide between them? Perhaps we shouldn’t, and instead we should weigh them all, so that meeting a higher bar is more morally important than only meeting a lower bar. I will sketch how to do this and some implications in another piece.

  1. ^

    There has been some controversy about beliefs in other animals, e.g. Davidson (e.g. 1982, summarized in Schwitzgebel, 2023, section 4) denied beliefs to other animals on the basis of lacking rich enough networks of beliefs to specify their contents, a concept of belief, and language. However, none of these seem required for belief to me (there’s no particularly good line for “rich enough” networks), and it seems to me that the case for beliefs in other animals is strong.

    Malcolm, 1973 makes a commonsense case with multiple illustrations that for some proposition p, believing (or thinking) that p doesn’t require having conscious symbolic or linguistic thought that p:

    You and I notice, for example, that Robinson is walking in a gingerly way, and you ask why? I reply, “Because he realizes that the path is slippery.” I do not imply that the proposition, “This path is slippery,” crossed his mind. Another example: I wave at a man across the quad. Later on I may say to someone, “I saw Kaspar today.” It may be true that I recognized Kaspar, or recognized that the man across the quad was Kaspar, but not true that I thought to myself, “That is Kaspar.” Turning from propositional verbs to emotions and sensations, it is plainly false that whenever a man is angry he thinks of the proposition, “I am angry”; or that whenever he feels a pain in his leg the thought, “I have a pain in my leg,” occurs to him.

    and it is natural to attribute beliefs to other animals in similar ways:

    Suppose our dog is chasing the neighbor’s cat. The latter runs full tilt toward an oak tree, but suddenly swerves at the last moment and disappears up a nearby maple. The dog doesn’t see this maneuver and on arriving at the oak tree he rears up on his hind legs, paws the trunk as if trying to scale it, and barks excitedly into the branches above. We who observe the whole episode from a window say, “He thinks that the cat went up that oak tree.” We say “thinks” because he is barking up the wrong tree. If the cat had gone up the oak tree and if the dog’s performance had been the same, we could have said, “He knows that the cat went up the oak.”

    See also Schwitzgebel, 2023, section 1 for an overview of accounts of belief, as well as Routley, 1981, Blasimme & Bortolotti, 2010, pp.85-88, Schwitzgebel, 2023, section 4, and Andrews & Monsó, 2021, section 3.4 for responses to skeptical cases like Davidson’s. For example, Blasimme and Bortolotti (2010, p.84) list and include some brief responses to some of these skeptical accounts:

    Conditions for intentional agency that we find in the traditional philosophical literature on mindedness seem to rely on the possession or exercise of capacities that are at least as conceptually sophisticated as intentional agency itself. Here are some examples [Davidson (1984, 2004); Dennett (1979, 1995); Carruthers (1989)]:

    (a) One can have beliefs only if one has the concept of belief.

    (b) One can have beliefs only if one is rational.

    (c) One can have beliefs only if one is self-conscious.

    (d) One can have beliefs only if one has a language.

    Although each of these statements is motivated by some theory of what intentional states are, it is worth noticing how strong these conditions are [MacIntyre (1999); Bortolotti (2008)]. We don’t normally think that in order to have something we need to possess the concept of that something (condition a), as we don’t normally think that it is necessary to do something well in order to do it at all (condition b).

    It’s also notable that Carruthers previously defended higher-order thought theories of consciousness (Carruthers, 2005a, Carruthers, 2018, p.191) and denied beliefs, morally important consciousness and moral significance to other animals in their own right (Carruthers, 1989, Carruthers, 1992), but now seems to attribute beliefs widely across animals, takes the issue of the existence of phenomenal consciousness — as a human concept derived from our particular consciousness — in other animals to be indeterminate and unimportant, and supports “teasing out the moral relevance of the various sorts of cognitive organization that we discover in animals” and, like Dawkins, “bracket[ing] questions of consciousness in our treatment of animals, focusing, rather, on questions of animal welfare, health, and flourishing” (Carruthers, 2018; see also Carruthers, 2020). Carruthers (2005b) “[a]rgues that belief/​desire psychology – and with it a form of first-order access-consciousness – are very widely distributed in the animal kingdom, being shared even by navigating insects”, and Carruthers (2013) repeats this and, largely on the basis of the evidence supporting the global workspace theory/​model of consciousness, further argues against distinctly human minds.

  2. ^

    Or when a hedonic state and an aversive felt desire occur together, e.g. pain is often simultaneously unpleasant and aversive.

  3. ^

    Shea et al. (2014) summarized some of the older literature on animal metacognition, highlighting species for which there was evidence of metacognition:

    There is compelling evidence that non-human animals are more likely to seek additional information [76, 77], to opt out of making decisions [4, 5, 78, 79, 80], and to make lower post-decision wagers [67, 81] under conditions in which a human observer would describe them as uncertain; for example, when the animal is required to make a difficult rather than an easy visual discrimination, or to remember an event over a long rather than a short interval. Some recent studies of monkeys [67], rats [5, 78], and pigeons [79] have also indicated, using transfer tests and single neuron recording, that this type of metacognitive behaviour can be regulated by internal rather than external cues; for example, that it covaries more precisely with neural signals from the orbitofrontal cortex or the supplementary eye fields than with external stimulus values.

    Since then, there have been more studies with primates, pigeons and rats providing stronger evidence for metacognition in them, and a consensus seems to be forming that many behaviours indicative of metacognition in some other animals are not explainable entirely by associative learning (Beran, 2019). There’s also some evidence for uncertainty monitoring in particular in honeybees, ants (Waldhorn et al., 2019), a species of shrimp, spider crabs and crayfish (Fischer, 2022 (table)), as well as curiosity in zebrafish and octopuses (Fischer, 2022 (table)).

    However, it’s unclear whether or not such behaviour can still be explained in first-order terms (Beran, 2019). It may not matter for our purposes, as Proust (2019) writes:

    A number of researchers define “metacognition” as “knowing what one knows.” Others define it more broadly as “a set of abilities allowing an individual to control and monitor his/her own cognitive activity” – where “cognitive activity” is taken to mean “activity with an informational goal.” Developmental, neuroscientific and comparative studies, however, show that cognitive agents can pursue informational goals and reliably monitor them without representing their own mental states as mental states: they enjoy “procedural” metacognition.

  4. ^

    Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3. Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it’s the anxiety itself that they’re discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants. Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, “jet lag”, defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.

    However, Mason and Lavery (2022) caution:

    But could such results merely reflect a “blindsight-like” guessing: a mere discrimination response that need not reflect underlying awareness? After all, as we have seen for S.P.U.D. subjects, decerebrated pigeons can use colored lights as DSs (128), and humans can use subliminal visual stimuli as DSs [e.g., (121)]. We think several refinements could reduce this risk.

  5. ^

    Affective/emotional mirror neurons have been found in rat brains (Carrillo et al., 2019, Wu et al., 2023), and their helping behaviour is reduced by anxiolytics (Bartal et al., 2016, Bartal & Mason, 2018), suggesting that it is indeed driven in part by negative affect or aversion. Various other explanations have been ruled out as necessary for helping behaviour in rats (Resnick, 2016, Bartal & Mason, 2018, Cox & Reichel, 2020). For a recent review supporting empathy in rats, see Mason, 2021. For a critical opinion article claiming citation bias in the literature, see Blystad, 2021, but my understanding is that the allegedly uncited literature on alternative explanations has in fact been addressed by studies with better controls, e.g. preventing social reinforcement/contact with the helped animal, like Bartal & Mason, 2018 and Cox & Reichel, 2020.

  6. ^

    There’s some evidence for regret in rodents (Steiner & Redish, 2014 (explained in Bissonette et al., 2014) and Sweis et al., 2018), and given that rats seem concerned with the distress of other rats, it wouldn’t surprise me to find that they also feel regret in similar experimental designs where the reward is instead reduced distress in other rats.

    It wouldn’t surprise me to find gratitude in animals prone to social bonding, affection and helping driven by emotional contagion, or just to motivate selective reciprocation and cooperation.

  7. ^

    Silverstein (2017)’s version of normative self-governance is characterized in terms of recognizing and responding to normative reasons in the right way. However, I suspect Silverstein (2017) applies this account too narrowly, perhaps assuming an account of belief that requires language. Silverstein (2017) writes:

    In my panicked flight, I do not even register the fact that the sounding of the fire alarm counts in favour of my leaving the premises: my phobic reaction involves no awareness of the normative force of the fact that prompts that reaction.

    Instead, the conscious feeling of fear itself is exactly one way you can register the fact that the sounding of the fire alarm counts in favour of your leaving. The way it affects your attention, your readiness for action, or even your actions could be other ways of registering the normative force.

  8. ^

    I’d guess more animals could be trained to associate unpleasantness, pleasure, desire and aversion all together and generalize between them, and we might call that a concept of a reason, but if belief-like preferences require actually having such a concept, then this wouldn’t help those animals who aren’t trained in the right way. Furthermore, this concept would not play the right functional role, e.g. it wouldn’t be used in everyday decision-making.

  9. ^

    Haidt and Joseph (2004) write:

    Virtues, on this understanding, are closely connected to the intuitive system. A virtuous person is one who has the proper automatic reactions to ethically relevant events and states of affairs, for example, another person’s suffering, an unfair distribution of a good, a dangerous but necessary mission. Part of the appeal of virtue theory has always been that it sees morality as embodied in the very structure of the self, not merely as one of the activities of the self. Even Aristotle supposed that in developing the virtues we acquire a second nature, a refinement of our basic nature, an alteration of our automatic responses.

    One of the crucial tenets of virtue theory is that the virtues are acquired inductively, that is, through the acquisition, mostly in childhood but also throughout the life course, of many examples of a virtue in practice. Often these examples come from the child’s everyday experience of construing, responding, and getting feedback, but they also come from the stories that permeate the culture. Each of these examples contains information about a number of aspects of the situation, including the protagonists’ motivations, the protagonists’ state of being (suffering, disabled, hostile, rich, etc.), the categorization of the situation, and the evaluation of the outcome offered by more experienced others. Only over time will the moral learner recognize what information is important to retain and what can be safely disregarded.

  10. ^

    Athanassoulis (2010) writes:

    Modern virtue ethics takes its inspiration from the Aristotelian understanding of character and virtue. Aristotelian character is, importantly, about a state of being. It’s about having the appropriate inner states. For example, the virtue of kindness involves the right sort of emotions and inner states with respect to our feelings towards others. Character is also about doing. Aristotelian theory is a theory of action, since having the virtuous inner dispositions will also involve being moved to act in accordance with them. Realizing that kindness is the appropriate response to a situation and feeling appropriately kindly disposed will also lead to a corresponding attempt to act kindly.

  11. ^

    Frankish (2021) said on the topic of artificial consciousness:

    It’s trying to get people to care about what matters. We kind of all have a pretty good grasp on the things that matter and don’t matter. But some people care less than others. If you could point to certain things in the world and say, look, it’s in a state that is intrinsically bad labeled by the universe, as Finn said, well, still, someone might say, well, I don’t care that it’s been labeled by the universe. And I don’t know what you could say to them about that then I don’t care what the universe has labeled it. You’ve got to care. And caring isn’t a matter of, I don’t think, having a certain set of beliefs. It’s a matter of living in a certain being, a certain person. I’m a virtue ethicist. I’m not fess up.

    I suppose I think a virtuous person probably would have a certain amount of concern for things like the robot dogs.

    and

    How do we do it? I guess trial and error. We have to live it through. I don’t think there’s like a cheat sheet that can tell us how to do it. I don’t think there are any sets of principles that can tell us how to do it. We’ve got to find a way of living in the world that we feel happy with, that we’re comfortable with. What if that starts to fractionate? What if different groups of us feel comfortable with different things? I don’t know. This is messy. This is horrible. This is part of being a self aware creature that we reflect on how we should live in the world and how we should react. And evolution has given us a good deal of freedom.

    It’s hardwired a lot of stuff into us, but it’s also made us reflective and made us able to second guess ourselves and to worry about whether we’re doing things right. It’s going to be really messy, I’ll tell you. Okay. Something here that does worry me. I suspect that once we do get pretty complex AIs, and actually I think it’s going to be a long time before we get anything that’s really a really serious candidate for the sort of ethical concern that we give even to other mammals.

    But once we do get those sort of creatures, I think there are going to be people who will say, I don’t care how complex and sensitive it is and how rich its psychology is, it’s just a piece of machinery and we can do what we want to it because they have this conception of the inner light conception of consciousness, and they can’t believe that the inner lights are on in this thing that’s not in this non biological thing. And they’re going to say, we can just treat them as we want. I think that’s a good I don’t think that’s a good line. I think it’s I’d much rather be guided by my gut feeling about these things.

    I mean, gut feeling based on a good deal of interaction with them and fairly complex understanding of what they’re capable of than on some abstract theoretical principle like that.

  12. ^

    Humans may have multiple global preferences simultaneously, like reportable standardly conceived life satisfaction and our current hedonic states. However, I already find it plausible that we have different kinds of welfare that matter intrinsically, even if there turns out to be no good way to weigh them against one another.

    Or, our hedonic states don’t count as global for us, because our life satisfaction supersedes them as more comprehensive, based on further considerations. Other animals’ hedonic states and felt desires may be as comprehensive as global preferences can get for them, and what individuals could count in their global preferences would grow with their relevant cognitive capacities.

  13. ^

    Plant (2020, p. 13) writes, on behalf of a dog Fido:

    The problem with this move is that Fido cannot judge his life as he is. For him to be able to judge his life, it would require changing him into a very different sort of being, at which point this cognitively enhanced pooch would not be Fido assessing Fido’s life, but some other entity evaluating its own life. This is analogous to the scenario where humans ask if, from our own perspective, we would like to live as some animal. This is a different question from asking whether the animal, from its perspective, enjoys its own life.

    I find this objection pretty decisive. Even if we only consider the things the real Fido cares about or would normally care about, this still doesn’t tell us much about how to weigh them on his behalf. But maybe it’s fine if it’s just vague. It’s probably vague for humans, too, because a great number of possible circumstances could affect our judgements even if we don’t explicitly consider them, like our current mood, what we just ate, what we’ve thought about recently, anything we’ve just been primed with ahead of time. However, I don’t expect a human’s hypothetical life satisfaction to be objectionably sensitive to these details. I’d guess Fido’s hypothetical life satisfaction would be, but I’m not sure.

    Plant (2020, pp. 13-14) also writes:

    A further problem, if we accept the suggestion that things which can’t, in fact, evaluate their lives can nevertheless be welfare subjects is that it is too permissive. It would lead to an opposite problem of having too many subjects. For instance, if Mount Everest could judge its existence, in the relevant sense of ‘count’, then presumably it would have some views on what was good or bad for it, and so would count as a welfare subject too.

    I don’t think this is a problem if we only allow the hypothetical augmented Mount Everest to consider what the real Mount Everest already cares about, which is presumably nothing.

  14. ^

    However, if going through a depiction or description of their life or a possible future would result in any emotional responses (hedonic states and/​or felt desires) in response to the content, it would probably result in a sequence of emotions, responding sequentially to the parts of the depiction or description. It’s not clear there’s any one moment we should pick to reflect their global preference here. Maybe the last, but that could be like judging a movie only by your feelings at the end of it.

  15. ^

    We could consider behaviours and behavioural dispositions in general, but this seems too trivial to me to warrant any significant concern. For example, electrons tend to avoid one another, so we might interpret them as having preferences to be away from each other. Choice-based preferences (and perhaps other kinds of welfare here) could be understood as far more sophisticated versions of such basic behavioural preferences, but choice seems like a morally significant line to draw. For more on how simple behaviours in fundamental physics might indicate welfare, see Tomasik, 2014-2020.

  16. ^

    Or, I prefer choosing the apple to choosing the orange for more indirect reasons related to my choice.

  17. ^

    Haggard (2005) summarized the neuroscience of intentions:

    Instead, recent findings suggest that the conscious experience of intending to act arises from preparation for action in frontal and parietal brain areas. Intentional actions also involve a strong sense of agency, a sense of controlling events in the external world. Both intention and agency result from the brain processes for predictive motor control, not merely from retrospective inference.

  18. ^

    The primary visual cortex seems necessary for (conscious) visual sensation in humans and other mammals (Blindsight—Wikipedia, Petruno, 2013), but it’s unclear if it’s necessary for (conscious) visual imagery (e.g. Kosslyn & Thompson, 2003 and Bridge et al., 2012).

Crossposted to LessWrong