Detecting Morally Significant Pain in Nonhumans: Some Philosophical Difficulties

This essay is a project of Rethink Priorities. It was written by Jason Schukraft, with contributions from Peter Hurford, Max Carpendale, and Marcus A. Davis.

Other humans merit moral concern. We think many nonhumans merit moral concern too. But how do we know? And which nonhumans? Chimpanzees? Chickens? Bumblebees? Protozoa? Roombas? Rocks? Where and how do we draw a line?

What would it take to justifiably believe that some nonhuman experiences pain (or pleasure) in a morally significant way?[1] This is a tough question, but it is incredibly important to get right. Humans constitute a very, very small fraction of the animal kingdom. If other vertebrate animals experience morally significant pain, then much of our engagement with these animals is deeply immoral. If invertebrate animals experience morally significant pain, then, given the sheer number of invertebrates,[2] an almost incomprehensible amount of morally significant suffering occurs beyond the ken of normal human attention. And if the capacity to experience morally significant pain is not restricted to organic entities, then human civilizations of the future may be capable of producing exponentially more sentient entities than presently exist.

On the other hand, if many, most, or all nonhumans do not experience morally significant pain, then it could be a waste of resources to try to change their condition. Given that there are millions of humans currently experiencing morally significant pain (for whom these resources would be a great aid), the opportunity cost of wasting time, talent, and money on nonhumans appears tremendous.

Figuring out where and whether to allocate resources to help nonhumans is of significant interest to Rethink Priorities. This post is our first in a series on morally significant pain in invertebrates. We focus on invertebrates for two reasons: (1) We are already reasonably confident that mammals, birds, reptiles, amphibians, and most fish feel morally significant pain,[3] and hence must be included in our moral calculations, but we are unsure if more distantly related animals warrant similar concern, and (2) The subject of invertebrate sentience, though recently gaining traction both in the scientific literature and the effective altruism community, appears neglected relative to the subject’s potential import. In future posts we look at which features might be relevant for determining whether an entity is capable of experiencing pain. We also present a detailed table outlining the distribution of these features throughout the animal kingdom.[4]

Of course, we recognize that delineating the phylogenetic distribution of morally significant pain is an extraordinarily complex and difficult task, one that we are extremely unlikely to solve outright. To put it mildly, much more research, at virtually every level of the problem, is needed. Nevertheless, the urgency of the issue compels us to address it now, before all the potentially relevant evidence is in. As grantmakers and charity entrepreneurs, we do not have the luxury to wait. We must decide how to allocate resources now, in our current epistemically incomplete state. Our goal in this series of posts is to determine, to the best of our abilities and within reasonable funding and time constraints, what we should think about morally significant pain in invertebrates, given the current state of the evidence.

To that end, we begin with a review of the philosophical difficulties inherent in the detection of morally significant pain in nonhumans. We discuss eight conceptually sequential steps, alongside their attendant difficulties, needed to identify morally significant pain in nonhumans.[5] The first three steps concern detecting pain; the other five steps concern determining whether (and to what extent) the pain is morally significant.[6]

The Problem of Other Minds

Start with yourself. You experience pleasure and pain. You can be as confident of this fact as you can be of any fact. Why? You have direct introspective access to at least some of your phenomenal states. But there is an asymmetry between you and everything else. You cannot know by direct introspection that someone else is phenomenally conscious.[7] If you are justified in believing that other entities experience pains and pleasures, it must be by some different epistemic strategy. Solipsism is the view that one’s mind is the only mind that exists.[8] If we are to justifiably believe that some nonhuman experiences pain, we must first overcome the challenge of solipsism.

Although philosophers disagree about the appropriate resolution, robust solipsism has few, if any, contemporary defenders. The idea that other humans experience pleasure and pain is very central to our web of beliefs. Any theory that would wage war against such a central belief had better come loaded with powerful ammunition. It is generally held that traditional arguments in favor of solipsism are incapable of providing such ammunition.[9]

Analogical Argument and Inference to the Best Explanation

The most common response to solipsism takes the form of an inference to the best explanation.[10] One begins with an examination of one’s own behavior. For example: when I cut my hand, I cry out, I move my hand away from the sharp object, and I later treat the wound with a clean bandage. Then one considers the behavior of other humans: they also cry out when cut and attend in similar ways to similar wounds.[11] There are a variety of hypotheses which, if true, could explain this behavior. Perhaps they are sophisticated robots programmed to behave as I do. But the simplest and best explanation of the behavior of other humans is that they feel pain like I do.[12]

Of course, this explanation might be mistaken, and we might come to know it is mistaken. If I examined the heads of many fellow humans and in each case found not a brain but a crude artificial device receiving signals from a robotics factory, that would constitute a defeater for my prior explanation. I would then no longer be able to rationally endorse the view that other humans have mental states like I do. Inference to the best explanation tells us that, in the absence of defeaters, we are licensed to prefer the simplest explanation of a phenomenon.[13]

Inference to the best explanation is related to, but distinct from, argument by analogy. The basic structure of an analogical argument is as follows (where E1 is the source domain and E2 is the target domain):

(1) Entity E1 has some properties P1 … Pn
(2) Entity E2 has the same properties P1 … Pn
(3) Entity E1 has some further property Pn+1

(4) Therefore, entity E2 likely has the same property Pn+1

Analogical arguments are by their nature inductive. The wider the inferential base upon which an induction rests, the better the inductive argument. But pain is a private mental state, so when it comes to pain, we each have an inductive base of one (namely, ourselves). Inductive inferences from an inductive base of one generally aren’t sound. So we probably don’t know that others experience pain by analogical reasoning alone.[14]

Inference to the best explanation, by contrast, is abductive. Abductive arguments are non-deductive like traditional inductive arguments, but, unlike traditional inductive arguments, which are, allegedly, justified empirically, abductive arguments are justified a priori. We are justified in using induction because, as a matter of contingent fact, induction has worked well in the past.[15] Instances of abductive reasoning, in contrast, are generally held to instantiate principles of rationality, which, if they are known at all, are known a priori.

Inference to the best explanation can also be applied to nonhumans.[16] If a class of nonhumans exhibits pain behavior[17] sufficiently similar to humans, then, in the absence of defeaters, we are licensed to prefer the explanation that they feel pain to alternate explanations. But what counts as sufficiently similar? And what counts as a defeater?

Consider similarity first. One worry is that the behavior of phylogenetically distant animals (to say nothing of inorganic entities) is so alien that it cannot even be accurately described without resorting to problematic anthropomorphizing. Even when we can accurately describe the behavior of, say, invertebrates without inappropriately anthropomorphizing them, it’s unclear how much similarity we should antecedently expect. Different species of animal, after all, are different. To take a trivial example: most of the time, when humans are in pain, they grimace. But the hard exoskeleton of an insect does not allow for grimacing. Does this difference provide a small bit of evidence that insects don’t feel pain? Presumably not. But that doesn’t mean that grimacing is irrelevant. Consider another example: many times when a human is in pain, she cries out. Again, owing to anatomical differences, we shouldn’t expect this feature to be widespread in invertebrates, even if they do feel pain. But farm animal vocalization has recently been taken to be a good metric of animal welfare in pigs, cows, and chickens.

The general lesson here is that there is no set of features which is universally relevant for the detection of pain in nonhumans. Even if pain experiences are widespread throughout the animal kingdom, the extreme diversity of living organisms suggests that pain experiences might be expressed in behaviorally and neurobiologically distinct ways.

The same problem applies to potential defeaters. It was once widely thought that a neocortex is required for conscious experience.[18] Thus, it was thought, any creature which lacked a neocortex thereby lacked conscious experience.[19] No matter how similar the behavior, the absence of a neocortex in a creature served as a defeater for the view that that creature experienced pain.

Today the picture is more complicated. For starters, evidence is emerging that, even in humans, a neocortex is not required for conscious experience.[20] More importantly, the absence of a neocortex doesn’t imply that there aren’t homologous cells performing the same role in other creatures.[21] The point to appreciate here is that the bar for justifiably believing that some neurological feature is a necessary condition on conscious experience is quite high. Neurological differences surely are relevant, but, in the absence of a general theory of consciousness, the degree to which they can be decisive is limited.

There is a further, more fundamental limitation to investigating consciousness empirically. Although pain states are associated (at least in humans) with various physiological responses, such as elevated heartbeat and increased respiration, pain cannot be defined in terms of these responses. It’s natural to suppose that my experience of pain explains why my heart starts beating faster and my respiration quickens. If pain just is elevated heartbeat and increased respiration (or whatever other physiological responses one favors), then we lose this natural explanation. More importantly, if we define pain in purely physiological terms, we miss the moral significance of pain. Pain is intrinsically morally bad (ceteris paribus) not because it causes or is identical to certain physiological responses. Pain is bad because it feels bad. Measured physical responses can provide evidence that an entity experiences the felt badness of pain, and that evidence can be decisive, but we should not confuse evidence of a phenomenon with the phenomenon itself. To investigate the phenomenon of consciousness directly, we probably ought to turn to philosophy.

Applying a General Theory of Consciousness

Determining whether an entity is phenomenally conscious is probably not a strictly scientific endeavor. At some point, some difficult philosophical theorizing might be needed to help us make appropriate attributions of consciousness. So suppose all the empirical data is in, and it’s still unclear whether a certain nonhuman is conscious. To settle the question, it would be nice to appeal to a well-justified general theory of consciousness. Below, I briefly examine three broad families of views about the relationship between mind and matter: dualism, physicalism, and a hybrid theory. But first, I outline some peculiar initial difficulties we face before embarking on a quest for a theory of mind. Collectively, these subsections show that uncertainty in philosophy of mind will at some point probably infect our credences about which nonhumans experience morally significant pain.

The Common Ground Problem

Theories of consciousness, like all philosophical theories, begin with certain pre-theoretic intuitions about the subject matter. These pre-theoretic intuitions include background framework assumptions about roughly how widespread consciousness is likely to be across phyla. If competing theories begin with radically different starting assumptions, comparing the theories won’t be very helpful. There has to be sufficient common ground in order for robust theory-comparison to be possible. But there’s some evidence that this common ground is lacking in theories of consciousness.[22] Existing theories of consciousness, from the world’s top researchers, span the spectrum from so-called “higher-order theories,” which, due to their metarepresentational requirements on consciousness, seem to deny consciousness to babies and dogs, to panpsychism, which attributes some degree of consciousness not just to plants and unicellular organisms but also to protons and electrons. One might have thought that these consequences could serve as reductios on their respective theories, but apparently this is not the case.[23] So the field must allow an unusually diverse range of initial assumptions.[24] This makes adjudicating between competing theories in philosophy of mind particularly hard.

The Causal Status of Consciousness: Dualism

Every theory of consciousness must grapple with the causal status of consciousness. Epiphenomenalism is the view that mental events are causally inert. According to the epiphenomenalist, pains and pleasures exist, but they are nonphysical states. Because the physical world is causally closed, these nonphysical states have no causal power.[25] All conscious experience could be subtracted from the world, and it would not make any physical difference.

According to epiphenomenalism, conscious experience doesn’t have a causal profile. If conscious experience doesn’t have a causal profile, then empirically investigating features which are allegedly indicative of conscious experience is probably a waste of time. If I cut my finger and cry out immediately thereafter, my cry is not caused by an experience of pain. So my cry is not evidence of pain, at least not in the straightforward way we normally take it to be.[26] The same goes for more complicated physical features, such as brain size, opiate sensitivity, or long-term behavior modification to avoid noxious stimuli.

One motivation for epiphenomenalism is intuitions about so-called “phenomenal zombies.”[27] A phenomenal zombie is a creature whose behavior and physical structure, down to the atomic level, is identical to that of a normal human being but who lacks any conscious experience. David Chalmers claims that phenomenal zombies are physically possible.[28] If phenomenal zombies are physically possible, then, even if they are nonactual, mental states must be causally inert.

Epiphenomenalism is a natural consequence of many dualistic theories of mind.[29] It seems true that anything that can cause a physical event must itself be a physical event. There are also arguments to the effect that mental states are nonphysical. If those two claims are true, epiphenomenalism seems nigh on inevitable.

If epiphenomenalism is true, it will be very difficult, if not impossible, to determine whether an entity (aside from oneself) is phenomenally conscious. Certainly no amount of empirical information will settle the question. A complete account of the psychophysical laws could perhaps do the trick, but it’s unclear how we could come to justifiably believe that we have such an account. Relatedly, epiphenomenalism seems to undercut the force of the inference to the best explanation strategy for responding to solipsism. If epiphenomenalism is true, then mental states do not explain the behavior of other humans. At best I can infer that other humans have brain states similar to mine, but I am no longer justified in supposing that they are conscious.

Emergentism: A Hybrid Theory

Emergent properties are constituted by more fundamental entities yet are novel or irreducible with respect to them. (In simpler terms, the whole is greater than the sum of its parts.) Emergentism is a position in philosophy of mind that seeks to preserve the intuition that mental events and physical events are distinct without entailing epiphenomenalism. On this view, consciousness is an emergent property of the brain. Sometimes this point is put in epistemic terms: consciousness supervenes on constituent parts of the brain, but complete knowledge of all the brain’s constituent parts would not enable us to justifiably infer the existence of consciousness. (If we could so infer, then consciousness would be reducible to brain states, not emergent from them.)[30]

Emergentism leaves us in a better epistemic position than epiphenomenalism. Because mental states and functional brain states are necessarily connected (phenomenal zombies are physically impossible on this view), we can potentially employ inference to the best explanation to determine whether some nonhuman is conscious. Still, it’s not clear how well emergentism fundamentally avoids the problem of epiphenomenalism. According to the emergentist, although mental states and brain states are necessarily connected, they are metaphysically distinct: no amount of neuroscientific knowledge could explain how the brain gives rise to consciousness. The connection between brain states and mental states has, in the words of Hempel and Oppenheim (1948), “a mysterious quality of absolute unexplainability.”[31] Of course, just because a phenomenon cannot be explained in terms of neuroscience doesn’t mean that the phenomenon can’t be explained at all. It may be possible to explain how the brain gives rise to consciousness in terms of substantive principles of metaphysical grounding. Unfortunately, these principles seem as difficult to ascertain as the psychophysical laws that the epiphenomenalist purports to exist. Thus, this view seems to leave us in a similarly problematic epistemic position.

Semantic Indeterminacy: Physicalism

In contrast to the nonreductive emergentism outlined above, reductive physical accounts of the mind hold that mental states straightforwardly reduce to physical states. Although there are many arguments against reductive physicalism, rehearsing them here is less helpful than considering what is implied by the truth of the view.

Consciousness, even if it is a purely physical feature of the world, is not a simple phenomenon. It’s unlikely that we will learn that consciousness reduces to a single feature of the brain. It’s much more plausible to suppose that consciousness is some complex bundle of physical features. Given the complexity of consciousness, it’s also implausible to suppose that we will be able to describe consciousness in terms of necessary and sufficient conditions. If reductive physicalism is true, consciousness is much more likely to be a cluster concept. Finally, it seems implausible that these features would be coextensive across the animal kingdom. Some entities, such as humans, might possess all the features. Some entities, such as plants, might possess none of the features. And some entities, such as sea hares, might possess some but not all of the features.[32] Thus, if reductive physicalism is true, then at some point on the phylogenetic tree, it will probably be semantically indeterminate whether a given species is conscious. We might know all the physical facts about the species and know the correct theory of mind, and yet still not be able to say definitively whether a creature is conscious. This raises a difficult question: what is the moral status of a creature for which it is semantically indeterminate whether it is conscious?

The Unpleasantness of Pain

I turn now from the question of whether some nonhumans experience pain to the question of the moral significance of that pain, supposing it exists. As we’ll see, we need not think that all pains are equally morally significant. Indeed, we might reasonably conclude that some pains ought to be ignored completely in our moral calculations.

Suppose we assign some moderately high credence to the claim that certain nonhumans, octopuses say, experience pain. What might these pain-experiences be like? In particular, we would want to know whether octopuses experience the unpleasantness of pain. It might seem like an analytic truth that pain is unpleasant, but there is actually good empirical evidence to suggest this is not necessarily so. Humans with pain asymbolia report experiencing pain without the pain being unpleasant. This dissociation can also be induced pharmacologically, notably with morphine.[33]

It’s possible that pain asymbolia patients are conceptually confused and that pain is necessarily unpleasant. But it’s also possible that pain is a multi-dimensional experience, the unpleasantness of which is only one dimension. Because the unpleasantness of pain almost always accompanies the other dimensions, we may be misled into thinking the various dimensions of pain are necessarily coextensive. To analogize: one might have thought that pains had to be localized in some part of one’s body, at least vaguely so. But phantom limb pain shows that this is not the case.

The unpleasantness of pain is what makes pain experiences non-instrumentally bad.[34] Thus, pain experiences may not be morally significant simpliciter. They may be morally significant only when they are accompanied by the usual (in humans, at least) negatively valenced phenomenology.

Accounting for the unpleasantness of pain has been a recent topic of interest in both philosophy and neuroscience. Take philosophy first. Although there has lately been a proliferation of subtly different theories, two broad strategies stand out.[35] There are desire-theoretic accounts of pain’s unpleasantness, and there are evaluative accounts of pain’s unpleasantness. According to most desire-theoretic accounts, a pain’s unpleasantness consists in the pain-bearer having an intrinsic desire that the pain not occur. According to many evaluative accounts, a pain’s unpleasantness consists in the pain representing the bodily damage it signals as bad for you. There’s a lot to unpack in those definitions, but for our purposes the only important aspect to note is that both broad strategies invoke second-order thoughts: in the one instance, a second-order desire; in the other, a second-order representation. It seems unlikely that cognitively unsophisticated reptiles, amphibians, and fish—to say nothing of most invertebrates—are capable of entertaining second-order thoughts.[36]

Perhaps, however, investigating the unpleasantness of pain is better conceived as an empirical matter. In that case, we should turn to the neuroscience. Here, again, we find difficulties. Scientists are beginning to suspect there are two functionally distinct pain pathways, the lateral and the medial.[37] The lateral pathway is responsible for representing the intensity of the pain, the location of the pain, and the modality of the pain.[38] The medial pathway represents the degree of unpleasantness of the pain. Importantly, the medial pathway is mediated by the anterior cingulate cortex, a part of the neocortex, which, as we’ve already seen, is unique to mammals. So here again we have some evidence that non-mammalian animals do not experience morally significant pain.

Again, however, the picture is complicated. Pain is a very effective teaching tool. (Indeed, this appears to be the evolutionary role of pain.) Studies show that rats and monkeys with damaged anterior cingulate cortices display almost none of the typical pain-learning behaviors of their undamaged conspecifics. It seems that it is unpleasant pain that is the effective teaching tool. If non-mammalian animals exhibit many of the same pain-learning behaviors as mammals—and there is good reason to think that they do—then that is some evidence that they are capable of experiencing the unpleasantness of pain. Once again, we can’t rule out the possibility that there are homologous brain structures at work representing the felt badness of pain.

The Phenomenal Intensity of Pain

Some pains hurt more than others. Call this dimension the phenomenal intensity of pain. Ceteris paribus, the greater the phenomenal intensity of a pain, the greater its moral significance. If some nonhumans do experience pain, how intense might their pain be?

The first thing to note is that reported phenomenal intensities of pain, as studied in humans, correlate very poorly with external factors.[39] Even under optimal conditions, a small increase in voltage or temperature can double the reported phenomenal intensity of an electric-shock or heat-induced stimulus. Indeed, phenomenal intensity can be systematically manipulated completely independently of external stimuli, via hypnotic suggestion. On the other hand, the phenomenal intensity of human pain correlates almost perfectly with the firing rates of neurons in the parts of the brain involved in the specific type of pain. If we could get a handle on homologous firing rates in nonhuman animals, we might have a better idea of the intensity of their pain.[40]

Another way to potentially get a handle on the phenomenal intensity of nonhuman pain is to consider again the evolutionary role that pain plays. Pain teaches us which stimuli are noxious, how to avoid those stimuli, and what we ought to do to recover from injury. Because intense pain can be distracting, animals in intense pain are at a selective disadvantage compared to conspecifics not in intense pain. Thus, we might expect evolution to select for creatures with pains just phenomenally intense enough (on average) to play the primary instructive role of pain. Humans are the most cognitively sophisticated animals on the planet, the animals most likely to pick up on patterns in signals only weakly conveyed. Less cognitively sophisticated animals generally require stronger signals for pattern-learning. If pain is the signal, then we might reasonably expect the phenomenal intensity of pain to correlate inversely with cognitive sophistication. If that’s the case, humans experience (on average) the least intense pain in all the animal kingdom.[41]

A final consideration involves not the phenomenal intensity of pain but its phenomenal extension (that is, its felt duration). Due to neurological differences, phenomenal extension might not be directly comparable across species. Consider brain-processing speed and rates of subjective experience, both loosely defined. Animals with faster metabolisms and smaller body sizes tend, according to some metrics, to process information faster. Thus, there is some reason to think that smaller animals have, in general, faster subjective experiences. So a hummingbird might experience one minute of objective time[42] as longer, in some robust, non-subjective sense of the term, than a human would. If that’s true, then a given hummingbird and a given human experiencing a pain of the same phenomenal intensity would not, ceteris paribus, suffer equally during the same objective span of time. The hummingbird would suffer more. Hence, we should not naively equate the phenomenal extension of pain with its duration expressed in objective time. The takeaway here is that the moral significance of pain might be related in important ways to an entity’s processing speed. Such concerns would increase exponentially if we ever created artificial minds capable of conscious experience. As with other areas, more research is needed.
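To make the arithmetic behind this comparison explicit, here is a minimal sketch. The `clock_ratio` parameter and the hummingbird figure are purely illustrative assumptions invented for the example, not empirical estimates:

```python
def subjective_duration(objective_seconds, clock_ratio):
    """Scale an objective duration by a creature's hypothetical subjective
    clock rate, expressed relative to a human baseline of 1.0."""
    return objective_seconds * clock_ratio

# Illustrative numbers only: if a hummingbird's subjective clock ran four
# times faster than a human's, one objective minute of equally intense
# pain would be felt as four times as long.
human_minute = subjective_duration(60, 1.0)        # 60.0 subjective seconds
hummingbird_minute = subjective_duration(60, 4.0)  # 240.0 subjective seconds
print(human_minute, hummingbird_minute)
```

On this toy model, equal phenomenal intensity over equal objective time would still yield unequal suffering whenever the clock ratios differ, which is the sense in which phenomenal extension comes apart from objective duration.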

Degrees of Consciousness

The moral significance of pain might also depend on the extent to which an entity is aware of (the unpleasantness of) the pain it is in. This is a subtle claim which requires some unpacking. First, distinguish aware of from aware that. I’m not here asserting that the moral significance of pain requires that a pain-bearer be aware that it is in pain.[43] To be aware that one is in pain, one must possess the concept pain. It seems plausible that a creature might experience pain without possessing the concept pain. The extent to which one can be aware of a pain is the extent to which one can attend to a pain. It is the extent to which one is conscious of a pain. And if consciousness comes in degrees, as many neuroscientists believe,[44] then the extent to which one can be aware of pain also comes in degrees, potentially in a morally significant way.

There are several mundane ways in which consciousness can be said to come in degrees. An entity that is conscious might be conscious all the time or only part of the time. (Humans, for example, are unconscious during dreamless sleep and when they undergo general anaesthesia.) For an entity that is currently conscious, consciousness might span many or few modalities. (Some creatures are sensitive to differences in light, sound, temperature, pressure, smell, bodily orientation, and magnetic field. Other creatures are sensitive to fewer sensory modalities.) For an entity that is currently conscious of a given sensory modality, that modality might be coarse-grained or fine-grained. (Within the light modality, some creatures are only sensitive to differences in brightness, while other creatures are sensitive to a wide swath of the electromagnetic spectrum.)

There is a more fun­da­men­tal sense in which it might be true that con­scious­ness comes in de­grees. One of the most strik­ing fea­tures of con­scious­ness is its unity. When I step out­side my door, I ex­pe­rience the hum of dis­tant ma­chin­ery, the gray haze of fog, and the smell of fresh cut grass as el­e­ments of a unified and densely in­te­grated rep­re­sen­ta­tion of re­al­ity. Sounds, sights, and smells are all ex­pe­rienced as part of the same global workspace. This sort of in­te­grated rep­re­sen­ta­tion may provide for more open-ended be­hav­ioral re­sponses than a com­pa­rable amount of in­for­ma­tion pre­sented in iso­lated streams. If that’s true, then one of the evolu­tion­ary func­tions of con­scious­ness may be to in­te­grate in­for­ma­tion.

According to the Integrated Information Theory of consciousness, consciousness just is suitably integrated information. When the effective informational content of a system, mathematically defined in light of the system's causal profile, is greater than the sum of the informational content of its parts, the system is said to carry integrated information. Integrated information of the relevant sort is conscious, whether that integration occurs in a brain or in a two-dimensional graph. Because integration comes in degrees, so too does consciousness.
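The "whole exceeds the sum of its parts" idea can be illustrated with a much simpler statistic than IIT's Φ (which is defined over a system's causal profile, not raw observations): the total correlation between two parts of a system, computed from empirical samples. The toy data below is invented for illustration.

```python
# Toy illustration of information integration: how much less uncertainty
# the whole system carries than its parts taken separately.
# This is NOT IIT's phi, just the simpler "total correlation" statistic.
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def integration(xs, ys):
    """Sum of part entropies minus whole-system entropy.

    Zero when the parts are statistically independent; positive when
    the parts share information, i.e. the system is integrated.
    """
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

independent = integration([0, 0, 1, 1], [0, 1, 0, 1])  # parts unrelated: 0 bits
coupled     = integration([0, 0, 1, 1], [0, 0, 1, 1])  # parts in lockstep: 1 bit
```

Because this quantity varies continuously with how tightly the parts are coupled, it gives a concrete (if crude) sense of how "degrees of integration" could be a well-defined notion.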

In­tu­itively, we might think that crea­tures like cock­roaches and oc­to­puses in­te­grate in­for­ma­tion to a lesser de­gree than hu­mans.[45] Headless cock­roaches, for ex­am­ple, can be trained to avoid elec­tric shocks. Oc­to­puses trained to dis­crim­i­nate be­tween hori­zon­tal and ver­ti­cal rec­t­an­gles us­ing only one eye were un­able to dis­crim­i­nate be­tween the shapes us­ing the other eye.[46] One nat­u­ral in­ter­pre­ta­tion of these re­sults is that al­though cock­roaches and oc­to­puses are adept at de­tect­ing and re­spond­ing to var­i­ous stim­uli, the de­gree to which that in­for­ma­tion is cen­trally pro­cessed is limited, at least com­pared to hu­mans.

If a the­ory of this sort is cor­rect—and In­te­grated In­for­ma­tion The­ory is of­ten con­sid­ered the lead­ing sci­en­tific the­ory of con­scious­ness—then differ­ent en­tities will pos­sess differ­ent amounts of con­scious­ness. Although it is un­clear what a claim of this sort even means, it is plau­si­ble that the moral sig­nifi­cance of pain will de­pend in part on the amount of con­scious­ness that the en­tity un­der­go­ing the pain pos­sesses.

Mo­ral Dig­nity and Pain

Hedonism is the view (roughly) that the only things that matter morally are pains and pleasures.[47] If hedonism is true, then the (unpleasant) pains and (pleasant) pleasures of nonhumans matter according to their phenomenal intensities and the extent to which the creatures are aware of them. But if hedonism is false, then there may be reasons to regard nonhuman pains as less morally significant than human pains.[48] Even if some nonhuman experiences the unpleasantness of pain at the same phenomenal intensity and with the same awareness as a neurotypical adult human, there still might be some difference between the nonhuman and the human which mitigates the moral significance of the nonhuman's pain.

Let us take just one ex­am­ple.[49] Per­sonal au­ton­omy is the abil­ity to, in some sense, gov­ern one­self. Au­tonomous agents live their lives ac­cord­ing to rea­sons that are their own, and they act ac­cord­ing to mo­ti­va­tions largely free from dis­tort­ing ex­ter­nal forces. Au­tonomous agents pos­sess the ca­pac­ity to re­flec­tively en­dorse their com­mit­ments and change those com­mit­ments when they are found to be defi­cient. The value of per­sonal au­ton­omy fea­tures promi­nently in much of mod­ern Western ethics, and it fa­mously was given cen­tral place in Im­manuel Kant’s moral philos­o­phy. If per­sonal au­ton­omy is non-in­stru­men­tally valuable, we might rate the pain of au­tonomous agents as worse, ce­teris paribus, than the pain of non-au­tonomous en­tities, es­pe­cially if the pain in­terferes some­how with the agent’s au­ton­omy. Be­cause per­sonal au­ton­omy re­quires self-re­flec­tion, many non­hu­man an­i­mals are not plau­si­ble can­di­dates for in­stan­ti­at­ing this value.[50] Thus, ce­teris paribus, their pain may mat­ter less.

Be­cause eth­i­cal the­o­riz­ing is so hard, we should par­ti­tion our cre­dences over a fairly wide range of plau­si­ble nor­ma­tive the­o­ries.[51] This par­ti­tion need not be equal, but it should as­sign some non-neg­ligible cre­dence even to views strongly at odds with one’s preferred the­ory. No one ought to be cer­tain, even in the mere col­lo­quial sense of ‘cer­tain,’ that con­se­quen­tial­ism or de­on­tol­ogy is false.

Reflec­tive Equilibrium

All eth­i­cal the­o­riz­ing in­volves some de­gree of re­flec­tive equil­ibrium. We have in­tu­itions about par­tic­u­lar cases and also in­tu­itions about gen­eral prin­ci­ples. When we for­mu­late a gen­eral prin­ci­ple, we try to cap­ture as many case in­tu­itions as we can. Some­times, if we are con­fi­dent in a gen­eral prin­ci­ple, we are will­ing to ad­just our judg­ments in in­di­vi­d­ual cases. Other times, how­ever, our in­di­vi­d­ual judg­ments are strong enough that they con­sti­tute coun­terex­am­ples to the gen­eral prin­ci­ple.[52]

When our in­tu­itions about case judg­ments con­flict with our in­tu­itions about gen­eral prin­ci­ples, we must de­cide which to priv­ilege and to what de­gree. Ac­cord­ing to the ter­minol­ogy of Rod­er­ick Chisholm (1973), the philo­soph­i­cal par­tic­u­larist priv­ileges case judg­ments over gen­eral prin­ci­ples when en­gag­ing in re­flec­tive equil­ibrium. The philo­soph­i­cal methodist priv­ileges gen­eral prin­ci­ples over case judg­ments when en­gag­ing in re­flec­tive equil­ibrium.[53]

Let’s ex­plore a po­ten­tial con­flict. Sup­pose you be­lieve that the con­scious ex­pe­rience of (un­pleas­ant) pain is always morally sig­nifi­cant, at least to a small de­gree. This is a gen­eral prin­ci­ple. Sup­pose you also be­lieve that given the choice be­tween the life of a hu­man child and the lives of a trillion ants, the morally cor­rect ac­tion, ce­teris paribus, is to save the hu­man child. This is a case judg­ment. Next, sup­pose you come to as­sign, on the ba­sis of solid em­piri­cal and philo­soph­i­cal ev­i­dence, a small but non-neg­ligible chance to the propo­si­tion that ants ex­pe­rience morally sig­nifi­cant pain. Be­cause of the sheer num­ber of ants, the amount of ex­pected ant suffer­ing in the world will be quite high. Ame­lio­rat­ing ant suffer­ing sud­denly looks like one of the most im­por­tant is­sues in the world. This, to say the least, is a sur­pris­ing re­sult.

How, if at all, should you re­vise your judg­ment about whether to save the trillion ants or the sin­gle hu­man child? If you do re­vise your judg­ment, can you provide an er­ror the­ory for why the ini­tial judg­ment was mis­taken? If you don’t re­vise your judg­ment, does that un­der­cut the gen­eral prin­ci­ple? Should you aban­don your prin­ci­ple? Or maybe re­fine it? (Per­haps the ag­gre­ga­tion of pain does not con­sist of mere ad­di­tion. Or per­haps rel­a­tively small in­stances of pain never sum to rel­a­tively and suffi­ciently big ones.)

Some peo­ple may re­gard it as ob­vi­ous that one should re­vise one’s ini­tial case judg­ment in light of the new in­for­ma­tion about ant con­scious­ness. Per­haps, but one ought also to be care­ful not to be pushed down fric­tion­less slopes with­out proper back­stops in place. Here we be­gin to ap­proach “Pas­cal’s Mug­ging” ter­ri­tory. For in­stance: should one as­sign a non-zero cre­dence to the propo­si­tion that plants feel pain? Prob­a­bly. After all, panpsy­chism might be true. But there are far, far more plants than ants. Even with an ex­tremely low cre­dence that plants ex­pe­rience pain (and I’ll re­mind you that some very smart peo­ple en­dorse panpsy­chism), ex­pected plant suffer­ing will prob­a­bly dom­i­nate ex­pected ant suffer­ing by sev­eral or­ders of mag­ni­tude. Now it looks like ame­lio­rat­ing plant suffer­ing is the most im­por­tant is­sue in the world.[54]
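The expected-value arithmetic driving this slope can be made explicit. The population counts and credences below are placeholders chosen only to exhibit the structure of the argument, not estimates anyone should endorse.

```python
# The expected-suffering comparison behind the ant/plant argument.
# Population counts and credences are placeholders, not estimates.

def expected_suffering(population, credence_of_pain, intensity=1.0):
    """Expected units of suffering = N * P(feels pain) * intensity."""
    return population * credence_of_pain * intensity

ants   = expected_suffering(10**16, credence_of_pain=1e-3)  # about 1e13
plants = expected_suffering(10**18, credence_of_pain=1e-4)  # about 1e14

# Even with a credence ten times lower, the larger plant population
# makes expected plant suffering dominate expected ant suffering.
assert plants > ants
```

The structural point: once expected value is the decision rule, a large enough population can swamp an arbitrarily small credence, which is exactly what generates the "frictionless slope" worry.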

It’s true that we could con­tinue to ad­just our cre­dences down­ward un­til we avoid this re­sult, but that some­how feels like cheat­ing. After all, cre­dences are just some­thing we have; they are not the sort of thing we get to set di­rectly. One might re­ply: “I don’t have in­fal­lible epistemic ac­cess to all my cre­dences. I know that po­ten­tial an­i­mal suffer­ing is more im­por­tant than po­ten­tial plant suffer­ing. I use this in­for­ma­tion to in­fer that my cre­dence must be how­ever low it must be in or­der to avoid the re­sult that ex­pected plant suffer­ing is greater than ex­pected an­i­mal suffer­ing.”

This response succeeds up to a point, but ultimately it is unsatisfying. Suppose we discover that we undercounted the number of plants by some 100 quadrillion. (After all, what counts as a "plant" is a somewhat slippery notion.) Then one would have to adjust one's credence again. At some point these adjustments begin to look ad hoc. A better description of what's going on looks like this: there are some propositions the entailment of which serves as a reductio ad absurdum on the theory that entails them. That plant suffering matters more than animal suffering is one such proposition. But if we can use the plant-suffering proposition as a reductio on the theory which entails it, why can't we use the ant-suffering proposition as a reductio on the theory which entails it? After all, didn't we start with a strong intuition that a trillion ant lives are no more important than a single human life?

The gen­eral point here is not that any par­tic­u­lar propo­si­tion about suffer­ing is ab­surd or that we should be­gin our eth­i­cal the­o­riz­ing with any par­tic­u­larly strong views on the worth of ant-lives ver­sus hu­man-lives. The only point I’m try­ing to make is that bring­ing one’s the­ory into re­flec­tive equil­ibrium can be hard. Some­times there is sim­ply no non-ques­tion-beg­ging method to per­suade an in­ter­locu­tor that the equil­ibrium she has set­tled on is worse than the equil­ibrium you have set­tled on.

Direc­tions for Fu­ture Work

To re­cap: I’ve dis­cussed eight con­cep­tu­ally se­quen­tial steps needed to iden­tify morally sig­nifi­cant pain in non­hu­mans. The eight steps are:

  1. Deter­mine that other minds ex­ist.

  2. Check to see if the non­hu­man en­tity in ques­tion en­gages in pain be­hav­ior. If so, check to see if there are any defeaters for the ex­pla­na­tion that the en­tity in ques­tion feels pain.

  3. Ap­ply one’s best the­ory of con­scious­ness to see what it says about the like­li­hood that the en­tity in ques­tion feels pain.

  4. As­sum­ing that the en­tity feels pain, check to see if it ex­pe­riences the felt bad­ness of pain.

  5. Deter­mine the phe­nom­e­nal in­ten­sity and phe­nom­e­nal ex­ten­sion of the pain.

  6. Deter­mine the de­gree to which the en­tity is aware of the pain.

  7. Deter­mine the en­tity’s moral stand­ing rel­a­tive to other en­tities which ex­pe­rience pain.

  8. Check to see if your fi­nal re­sult con­sti­tutes a re­duc­tio on the whole pro­cess.
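The eight steps can be compressed into a screening sketch. Every parameter below stands in for a hard empirical or philosophical question addressed earlier in the essay; the function, its inputs, and its threshold are illustrative inventions, not a proposed methodology.

```python
# The eight steps above as a screening sketch. Each argument is a
# placeholder for a difficult question, not a measurable quantity.

def morally_significant_pain(
    other_minds_exist,   # step 1: other minds exist at all
    pain_behavior,       # step 2: entity engages in pain behavior...
    defeaters,           # step 2: ...with no defeating explanation
    p_pain,              # step 3: likelihood of pain under one's best theory
    feels_badness,       # step 4: experiences the felt badness of pain
    intensity,           # step 5: phenomenal intensity
    felt_duration,       # step 5: phenomenal extension
    awareness,           # step 6: degree of awareness of the pain
    moral_standing,      # step 7: standing relative to other pain-bearers
    reductio_threshold,  # step 8: sanity bound on the final result
):
    """Return expected morally significant pain, or raise at step 8."""
    if not (other_minds_exist and pain_behavior and not defeaters
            and feels_badness):
        return 0.0
    result = p_pain * intensity * felt_duration * awareness * moral_standing
    if result > reductio_threshold:  # step 8: revisit the earlier steps
        raise ValueError("Result looks like a reductio on the process")
    return result
```

Laying the steps out this way makes the essay's structural point visible: the output is a product of several independently uncertain factors, so doubt at any one step propagates through the whole verdict.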

There is a tremen­dous amount of un­cer­tainty, both em­piri­cal and moral, sur­round­ing the is­sue of non­hu­man pain. Be­cause the sub­ject is so com­plex, we should as­cribe some cre­dence to views which hold that phe­nom­e­nal con­scious­ness is rare out­side hu­mans and also as­cribe some cre­dence to views which hold that phe­nom­e­nal con­scious­ness, though com­mon, is not ter­ribly morally sig­nifi­cant.

Nonethe­less, my per­sonal view is that even af­ter fold­ing all this un­cer­tainty into our calcu­la­tions, we are still left with the re­sult that we should take non­hu­man pain much more se­ri­ously than the av­er­age poli­cy­maker does. There are good rea­sons to think that many non­hu­mans feel pain and that this pain is morally sig­nifi­cant. Th­ese non­hu­mans do not have a voice in policy de­bate and they do not have a vote. They are pow­er­less to stop the harms we in­flict on them, and they are pow­er­less to ask us for help. They are not just sys­tem­at­i­cally mis­treated; their suffer­ing is al­most wholly ig­nored.

One of the best ways to help these crea­tures is to re­duce the un­cer­tain­ties sur­round­ing the is­sue of non­hu­man pain. To that end, Re­think Pri­ori­ties has been work­ing on an am­bi­tious pro­ject to an­a­lyze and cat­a­logue 60+ fea­tures po­ten­tially rele­vant to phe­nom­e­nal con­scious­ness and morally sig­nifi­cant pain. (A pro­ject of this sort was sug­gested by Luke Muehlhauser in his 2017 Re­port on Con­scious­ness and Mo­ral Pa­tient­hood.) We aim to care­fully define each fea­ture and ex­plain why and to what de­gree it might be rele­vant to con­scious­ness. We have se­lected 17 rep­re­sen­ta­tive species from across the an­i­mal king­dom and are cur­rently scour­ing the sci­en­tific liter­a­ture to see whether and to what ex­tent each species ex­hibits each of the fea­tures. Some of the species are in­tu­itively con­scious (e.g., cows), while oth­ers are in­tu­itively not (e.g., ne­ma­todes). In be­tween are a host of in­ter­est­ing edge cases, like honey bees and oc­to­puses. All this in­for­ma­tion will even­tu­ally be com­piled into an eas­ily search­able database. Of course, the pro­ject won’t defini­tively set­tle whether honey bees or oc­to­puses ex­pe­rience morally sig­nifi­cant pain. Nonethe­less, our hope is that the database will be­come an in­valuable re­source for fu­ture con­scious­ness re­search. In our next es­say, we ex­plain this ap­proach in more de­tail.


J.P. An­drew, Eli­jah Arm­strong, Kim Cud­ding­ton, Mar­cus A. Davis, Neil Dul­laghan, Sam Fox Krauss, Peter Hur­ford, David Moss, Katie Plem­mons, and Daniela R. Wald­horn pro­vided helpful com­ments on this es­say.


[^1]: As we’ll see, it’s not enough to demon­strate that non­hu­mans ex­pe­rience pain. There are a num­ber of ways in which non­hu­man pain might be less morally sig­nifi­cant than hu­man pain, even to the point that non­hu­man pain fails to be morally sig­nifi­cant at all. Non­hu­man pain might just feel differ­ent (along di­men­sions elab­o­rated be­low) in a way which ren­ders the pain less morally press­ing.

[^2]: To take just one group of arthro­pods, there are some­thing like a quin­til­lion in­sects al­ive at any given mo­ment, a num­ber which bog­gles the mind. See C.B. Willi­ams. 1964. Pat­terns in the Balance of Na­ture and Re­lated Prob­lems in Quan­ti­ta­tive Biol­ogy. Aca­demic Press, Lon­don: 324.

[^3]: Elas­mo­branch fish (i.e., car­tilag­i­nous fish, such as sharks) may be an ex­cep­tion. See, in­ter alia, Ewan Smith and Gary Lewin. 2009. “No­ci­cep­tors: A Phy­lo­ge­netic View.” Jour­nal of Com­par­a­tive Phys­iol­ogy A Vol. 195, Is­sue 12: 1096.

[^4]: For com­par­i­son, we also in­clude some non-an­i­mals, such as plants and pro­tists.

[^5]: This list is not ex­haus­tive. Most no­tably, we’ll set aside difficult ques­tions in metaethics. For ex­am­ple, if moral nihilism is true, then there are no moral facts, and thus no crea­tures ex­pe­rience morally sig­nifi­cant pain, in­clud­ing hu­mans.

[^6]: Note that not ev­ery step is equally prob­le­matic.

[^7]: I here set aside cer­tain con­cep­tu­ally pos­si­ble but non-ac­tual fan­ciful de­vices, such as brain-to-brain hookups.

[^8]: This is meta­phys­i­cal solip­sism, and the de­scrip­tion is not tech­ni­cally cor­rect (it’s a nec­es­sary but not suffi­cient part of the full view). One could be­lieve that one’s mind is the only one which ex­ists with­out thereby be­ing a solip­sist, if, say, one were the sole sur­vivor of some apoc­a­lyp­tic catas­tro­phe.

[^9]: The am­mu­ni­tion metaphor is adapted from Anil Gupta. 2006. Em­piri­cism and Ex­pe­rience. Oxford Univer­sity Press: 178.

[^10]: See, in­ter alia, An­drew Melnyk. 1994. “In­fer­ence to the Best Ex­pla­na­tion and Other Minds.” Aus­tralasian Jour­nal of Philos­o­phy, 72: 482–91 for a dis­cus­sion of the is­sue.

[^11]: Ob­vi­ously, this is a sim­plifi­ca­tion. The be­hav­ioral similar­i­ties run much deeper.

[^12]: Other ex­pla­na­tions are more com­pli­cated be­cause they raise more ques­tions than they re­solve. Why, for in­stance, would some­one cre­ate so­phis­ti­cated robots pro­grammed to be­have as I do?

[^13]: It’s im­por­tant to note that one can pre­fer an ex­pla­na­tion with­out fully be­liev­ing the ex­pla­na­tion. If there are nu­mer­ous plau­si­ble ex­pla­na­tions, the best ex­pla­na­tion might only war­rant a cre­dence of .2. For ex­am­ple, it’s con­sis­tent to have a fairly low cre­dence in the claim that in­ver­te­brates feel pain and yet think that that ex­pla­na­tion of their be­hav­ior is more likely than any other ex­pla­na­tion of their be­hav­ior. See Michael Tye. 2017. Tense Bees and Shell-Shocked Crabs. Oxford Univer­sity Press: 68.

[^14]: To put the point an­other way, an­cient hunter-gath­er­ers were jus­tified in be­liev­ing their fel­low hu­mans felt pain, but they didn’t know any­thing about phys­iolog­i­cal or neu­rolog­i­cal similar­ity. See Tye 2017: 53-56 for more on this point.

[^15]: This way of for­mu­lat­ing the jus­tifi­ca­tory base leads to the well-known prob­lem of in­duc­tion, which I here gen­tly set aside.

[^16]: See, e.g., Tye 2017, es­pe­cially chap­ter 5.

[^17]: Here, “pain be­hav­ior” doesn’t mean “be­hav­ior caused by pain.” Rather, it is con­ve­nient short­hand for “be­hav­ioral pat­terns, that, in hu­mans, are caused by pain.”

[^18]: This is not an ad hoc view. Func­tional imag­ing stud­ies show that, in hu­mans, there is a cor­re­la­tion be­tween the phe­nom­e­nal in­ten­sity of pain and ac­tivity in the an­te­rior cin­gu­late cor­tex and the so­matosen­sory cor­tex. See Dev­in­sky, O., Mor­rell, M. J., & Vogt, B. A. 1995. “Con­tri­bu­tions of An­te­rior Cin­gu­late Cor­tex to Be­havi­our.” Brain: A Jour­nal of Neu­rol­ogy, 118(1), 279-306.

[^19]: The neo­cor­tex is only found in mam­malian brains.

[^20]: See Merker, B. 2007. “Con­scious­ness with­out a Cere­bral Cor­tex: A challenge for Neu­ro­science and Medicine.” Be­hav­ioral and Brain Sciences, 30(1), 63-81. It should be noted that this claim only ap­plies to chil­dren born with­out a neo­cor­tex. Adults with dam­aged neo­cor­tices re­main com­pletely veg­e­ta­tive.

[^21]: Jarvis ED, Gün­türkün O, Bruce L, Csillag A, Karten H, Kuen­zel W, et al. 2005. “Avian brains and a new un­der­stand­ing of ver­te­brate brain evolu­tion.” Na­ture Re­views. Neu­ro­science. 6 (2): 151–9. See Tye 2017: 78-84 for a philo­soph­i­cal dis­cus­sion.

[^22]: This is ac­tu­ally fairly rare in philos­o­phy. Episte­mol­o­gists and ethi­cists of­ten de­velop rad­i­cally differ­ent the­o­ries on the ba­sis of roughly the same com­mon ground. The only other com­pa­rable ex­am­ple that comes to mind in philos­o­phy is mere­ol­ogy.

[^23]: See Sch­witzgebel (forth­com­ing) “Is There Some­thing It Is Like to Be a Gar­den Snail” for more on the com­mon ground prob­lem.

[^24]: An al­ter­nate ex­pla­na­tion holds that the panpsy­chist and higher-or­der the­o­rist be­gin with the same start­ing as­sump­tions, but the the­o­ret­i­cal virtues of their re­spec­tive the­o­ries lead them to re­mark­ably differ­ent con­clu­sions. I am du­bi­ous of this ex­pla­na­tion.

[^25]: Thomas Huxley is per­haps the most fa­mous philo­soph­i­cal pro­po­nent of epiphe­nom­e­nal­ism. See his (1874) “On the Hy­poth­e­sis that An­i­mals Are Au­tomata.” Vic­to­rian Re­view Vol. 35, No. 1, pp. 50-52.

[^26]: This is too quick. In weird cases an ac­tion can be ev­i­dence for some state of af­fairs with­out bear­ing any causal re­la­tion­ship to that state of af­fairs. But the gen­eral in-text point stands.

[^27]: See David Chalmers. 1996. The Con­scious Mind. Oxford Univer­sity Press: pp. 94-99 for the canon­i­cal dis­cus­sion. Note also that Chalmers hedges on whether his view en­tails true epiphe­nom­e­nal­ism. He ad­mits his view en­tails “some­thing like epiphe­nom­e­nal­ism” (150, em­pha­sis in the origi­nal).

[^28]: Ibid. Although phe­nom­e­nal zom­bies get a lot of press, they are, ac­cord­ing to Chalmers at least, inessen­tial to his broader ar­gu­ments.

[^29]: The ex­cep­tion is so-called “in­ter­ac­tion­ist du­al­ism.” Descartes is prob­a­bly the most fa­mous in­ter­ac­tion­ist du­al­ist. He be­lieved that non­phys­i­cal men­tal states af­fect the brain via the pineal gland.

[^30]: See C.D. Broad. 1925. The Mind and Its Place in Na­ture. Lon­don: Ke­gan Paul: 125.

[^31]: Carl G. Hem­pel and Paul Op­pen­heim. 1948. “Stud­ies in the Logic of Ex­pla­na­tion.” Philos­o­phy of Science 15, no. 2 (Apr., 1948): 119.

[^32]: Even if the fea­tures were co­ex­ten­sive in the an­i­mal king­dom, it seems phys­i­cally pos­si­ble to de­liber­ately de­sign an en­tity which pos­sessed some but not all of the fea­tures.

[^33]: It is im­por­tant to em­pha­size that, as best we can tell, pain asym­bo­lia pa­tients ex­pe­rience some­thing over and above mere no­ci­cep­tion. No­ci­cep­tors are spe­cial re­cep­tors used by the body to de­tect po­ten­tially harm­ful stim­uli. Many crea­tures, in­clud­ing, for ex­am­ple, the round­worm C. el­e­gans, pos­sess no­ci­cep­tors. Mere no­ci­cep­tion does not have an at­ten­dant phe­nomenol­ogy, whereas pain asym­bo­lia pa­tients do re­port a con­scious ex­pe­rience—just not an un­pleas­ant one.

[^34]: There are many ways in which pain ex­pe­riences are in­stru­men­tally good: they alert us to po­ten­tial dam­age, they aid in our re­cov­ery from such dam­age, and they en­able us to learn to avoid such dam­age in the fu­ture. We shouldn’t wish to be com­pletely with­out pain, for hu­mans born in such a con­di­tion (known as gen­eral con­gen­i­tal anal­ge­sia) al­most always die young.

[^35]: See David Bain. 2017. "Why Take Painkillers?" Noûs for a recent representative entry in the debate.

[^36]: Of course, we might be over­es­ti­mat­ing the cog­ni­tive so­phis­ti­ca­tion re­quired for sec­ond-or­der men­tal states, es­pe­cially if we drop the as­sump­tion that higher-or­der cog­ni­tion must be propo­si­tional.

[^37]: See Adam Shriver. 2006. “Mind­ing Mam­mals.” Philo­soph­i­cal Psy­chol­ogy Vol. 19: 433-442 (es­pe­cially sec. 2) for an ac­cessible overview.

[^38]: “The modal­ity of the pain” refers to the type of pain (e.g., a “cut­ting” pain, a “burn­ing” pain, a “throb­bing” pain).

[^39]: See Adam Pautz. 2014. “The Real Trou­ble with Phenom­e­nal Ex­ter­nal­ism: New Em­piri­cal Ev­i­dence for a Brain-Based The­ory of Con­scious­ness.” in: Brown R. (eds) Con­scious­ness In­side and Out: Phenomenol­ogy, Neu­ro­science, and the Na­ture of Ex­pe­rience. Stud­ies in Brain and Mind, vol 6. Springer, Dor­drecht, es­pe­cially sec 2.3 for an overview.

[^40]: There has been some ini­tial re­search along these lines in mon­keys, but, per­haps for ob­vi­ous rea­sons, the sub­ject is not widely stud­ied.

[^41]: It should be noted that, more so than other sec­tions, this para­graph is en­tirely spec­u­la­tive.

[^42]: For the stick­lers out there, I am well aware that Ein­stein taught us there is no good sense to the term “ob­jec­tive time.” I of course as­sume here that the hum­ming­bird and the hu­man are in the same refer­ence frame.

[^43]: Of course, cer­tain sec­ond-or­der thoughts about pain might them­selves be morally sig­nifi­cant. A hu­man who is aware that she is in pain might also come to be­lieve that the pain is just or un­just.

[^44]: For the record, most philoso­phers dis­agree.

[^45]: It should be stressed that ac­cord­ing to In­te­grated In­for­ma­tion The­ory, in­for­ma­tion in­te­gra­tion is defined math­e­mat­i­cally, so these in­tu­itive ex­am­ples may not stand up to greater scrutiny.

[^46]: Failures of so-called “in­ter-oc­u­lar trans­fer” have also been found in birds, fish, rep­tiles and am­phibi­ans. See G Val­lor­ti­gara, L.J Rogers, A Bisazza. “Pos­si­ble evolu­tion­ary ori­gins of cog­ni­tive brain lat­er­al­iza­tion.” Brain Re­search Re­views Vol. 30: 164-175 for a sci­en­tific dis­cus­sion. See Peter God­frey-Smith. 2016. Other Minds: The Oc­to­pus, The Sea, and the Deep Ori­gins of Con­scious­ness. New York: Far­rar, Straus and Giroux: 84-87 for a philo­soph­i­cal dis­cus­sion.

[^47]: This is ethical hedonism. Psychological hedonism is the (descriptive) view that only pleasures and pains motivate us.

[^48]: It should be em­pha­sized that it does not fol­low from the falsity of he­do­nism that pains and plea­sures are morally in­signifi­cant. If he­do­nism is false, it is al­most cer­tainly be­cause there are other things which are valuable as well.

[^49]: Ad­di­tional ex­am­ples could be drawn from cer­tain the­is­tic tra­di­tions in which God (allegedly) gives hu­mans do­minion over (other) an­i­mals.

[^50]: On this subject Kant writes: "The fact that the human being can have the representation 'I' raises him infinitely above all the other beings on earth. By this he is a person… that is, a being altogether different in rank and dignity from things, such as irrational animals, with which one may deal and dispose at one's discretion." "Anthropology from a Pragmatic Point of View (1798)" in 2007. Anthropology, History, and Education. (Cambridge Edition of the Works of Immanuel Kant). Robert Louden and Gunter Zoller (eds. and trans.). Cambridge University Press: 239. Some claim that Kant drew the wrong inference from his own theory. See Christine Korsgaard. 2018. Fellow Creatures: Our Obligations to Other Animals. Oxford University Press, especially Part Two: "Immanuel Kant and the Animals."

[^51]: See, in­ter alia, Will MacAskill. 2016. “Nor­ma­tive Uncer­tainty as a Vot­ing Prob­lem.” Mind, Vol. 125: 967-1004 for more on nor­ma­tive un­cer­tainty.

[^52]: In the words of Nel­son Good­man: “[R]ules and par­tic­u­lar in­fer­ences al­ike are jus­tified by be­ing brought into agree­ment with each other. A rule is amended if it yields an in­fer­ence we are un­will­ing to ac­cept; an in­fer­ence is re­jected if it vi­o­lates a rule we are un­will­ing to amend.” Fact, Fic­tion, and Fore­cast. Har­vard Univer­sity Press (1955): 61-2.

[^53]: Roderick Chisholm. 1973. The Problem of the Criterion. Marquette University Press: 15.

[^54]: Some mem­bers of the effec­tive al­tru­ism com­mu­nity go fur­ther, posit­ing that “atomic move­ments, elec­tron or­bits, pho­ton col­li­sions, etc. could col­lec­tively de­serve sig­nifi­cant moral weight.”