Comparisons of Capacity for Welfare and Moral Status Across Species

Executive Summary

Effective altruism aims to allocate resources so as to promote the most good in the world. To achieve the most efficient allocation of resources, we need to be able to compare interventions that target different species, including humans, cows, chickens, fish, lobsters, and many others.

Comparing cause areas and interventions that target different species requires a comparison of the moral value of different animals (including humans). Animals differ in their cognitive, emotional, social, behavioral, and neurological features, and these differences are potentially morally significant. According to many plausible philosophical theories, such differences affect (1) an animal’s capacity for welfare, which is the range of how good or bad an animal’s life can be, and/or (2) an animal’s moral status, which is the degree to which an animal’s welfare matters morally.

Theories of welfare are traditionally divided into three categories: (1) hedonistic theories, according to which welfare is the balance of experienced pleasure and pain, (2) desire-fulfillment theories, according to which welfare is the degree to which one’s desires are satisfied, and (3) objective list theories, according to which welfare is the extent to which one attains non-instrumental goods like happiness, virtue, wisdom, friendship, knowledge, and love. Most plausible theories of welfare suggest differences in capacity for welfare among animals, though the exact differences and their magnitudes depend on the details of the theories and on various empirical facts.

A central question in the literature on moral status is whether moral status admits of degrees. The unitarian view, endorsed by the likes of Peter Singer, says ‘no.’ The hierarchical view, endorsed by the likes of Shelly Kagan, says ‘yes.’ If moral status admits of degrees, then the higher the status of a given animal, the more value there is in a given unit of welfare obtaining for that animal. Status-adjusted welfare, which is welfare weighted by the moral status of the animal for whom the welfare obtains, is a useful common currency both unitarians and hierarchists can use to frame debates.

Different theories entail different determinants of capacity for welfare and moral status, though there is some overlap among positions. According to most plausible views, differences in capacity for welfare and moral status are determined by some subset of differences in things like: intensity of valenced experiences, self-awareness, general intelligence, autonomy, long-term planning, communicative ability, affective complexity, self-governance, abstract thought, creativity, sociability, and normative evaluation.

Understanding differences in capacity for welfare and moral status could significantly affect the way we wish to allocate resources among interventions and cause areas. For instance, some groups of animals that exhibit tremendous diversity, such as fish or insects, are often treated as if all members of the group have the same moral status and capacity for welfare. Further investigation could compel us to prioritize some of the species in these groups over others. More generally, if further investigation suggested we have been overestimating the moral value of mammals or vertebrates compared to the rest of the animal kingdom, we might be compelled to redirect many resources to invertebrates or non-mammal vertebrates. To understand the importance of these considerations, we must first develop a broad conceptual framework for thinking about this issue.

Moral Weight Series

  1. Comparisons of Capacity for Welfare and Moral Status Across Species

  2. How to Measure Capacity for Welfare and Moral Status

  3. The Subjective Experience of Time: Welfare Implications

  4. Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time?

Introduction and Context

This post is the first in Rethink Priorities’ series about comparing capacity for welfare and moral status across different groups of animals. The primary goal of this series is to improve the way resources are allocated within the effective animal advocacy movement in the medium-to-long term. A secondary goal is to improve the allocation of resources between human-focused cause areas and nonhuman-animal-focused cause areas. In this first post I lay out the conceptual framework for the rest of the series, outlining different theories of welfare and moral status and the relationship between the two. In the second entry in the series, I compare two methodologies for measuring capacity for welfare and moral status. In the third entry, I explain what the subjective experience of time is, why it matters, and why it’s plausible that there are morally significant differences in the subjective experience of time across species. In the fourth entry, I explore critical flicker-fusion frequency as a potential proxy for the subjective experience of time. In the fifth, sixth, and seventh entries, I investigate variation in the characteristic range of intensity of valenced experience across species.

The Comparison Problem

The effective altruism (EA) movement aims to allocate resources efficiently among interventions. Comparing interventions across cause areas requires comparing the relative value of human lives (or interests or experiences) against the lives (or interests or experiences) of nonhuman animals. Within the animal welfare cause area, efficiently allocating resources requires comparing the relative value of the lives (or interests or experiences) of many different types of animals. Humans directly exploit a huge variety of animals: pigs, cows, goats, sheep, rabbits, hares, mice, rats, chickens, turkeys, quail, ducks, geese, frogs, turtles, herring, anchovies, carp, tilapia, milkfish, catfish, eels, octopuses, squid, crabs, shrimp, bees, silkworms, lac bugs, cochineal, black soldier flies, mealworms, crickets, snails, earthworms, nematodes, and many others.[1] Counting somewhat conservatively, there are at least 33 orders of animals, across 13 classes and 6 phyla, that humans directly exploit in large numbers.[2] The effective animal advocacy (EAA) movement has limited resources, and it must choose how to allocate these scarce resources among these different animals, most of whom are treated miserably by humans.[3] Since we can’t (yet) help all these animals, we must decide which animals to prioritize. Sometimes these prioritization questions will be guided by practical concerns, like the degree to which an intervention is tractable or the degree to which a certain strategy will affect the long-run prospects of the movement. Ultimately, though, practical concerns ought to be guided by the answer to a much more fundamental question: What is the ideal[4] allocation of resources among different groups of animals?

Even if practical concerns continue to dominate our strategic decisions in the near term, understanding the ideal allocation of resources could change our estimates of the expected value of different meta-interventions. Suppose, for example, that we come to believe both that farmed insects deserve about 1/3 of EAA resources and that practical limitations mean that we can currently only dedicate about 1/300th of EAA resources to farmed insects. If that were the case, then the expected value of overcoming these limitations—either by working on moral circle expansion or funding new charities or researching new interventions or whatever—would be quite high. If, however, we come to believe that farmed insects deserve 1/299th of EAA resources but practical limitations mean that we can currently only dedicate 1/300th of EAA resources to farmed insects, then the expected value of overcoming these limitations would be much lower. Even if we are far from an ideal world, it’s still important to know what an ideal world looks like so we can plot the best path to get there.
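The arithmetic behind this example can be made explicit in a small sketch. The shares are the hypothetical ones from the paragraph above, and the "gap" function is my own crude stand-in for the value of overcoming practical limitations, not an established metric:

```python
# Hypothetical illustration only: the shares come from the example above,
# and the gap is a crude proxy for the expected value of overcoming the
# practical limitations on how EAA resources can currently be allocated.

def allocation_gap(ideal_share: float, feasible_share: float) -> float:
    """How far the currently feasible allocation falls short of the ideal."""
    return ideal_share - feasible_share

# Case 1: farmed insects ideally deserve 1/3, but only 1/300 is feasible.
large_gap = allocation_gap(1 / 3, 1 / 300)    # about a third of all resources

# Case 2: they ideally deserve 1/299, and 1/300 is already feasible.
small_gap = allocation_gap(1 / 299, 1 / 300)  # a vanishingly small shortfall

print(large_gap > 1000 * small_gap)
```

On these toy numbers, the first gap is tens of thousands of times larger than the second, which is why the expected value of removing the limitations differs so sharply between the two scenarios.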

Comparative Moral Value

To answer the fundamental question, we need to be able to compare the moral value of different types of animals. There are two non-exclusive ways animals could characteristically differ in intrinsic moral value: (1) certain animals could have a greater capacity for welfare than others, and (2) certain animals could have a higher moral status than others. Below, I sketch a conceptual framework for thinking about capacity for welfare and moral status. In the second entry in the series, I analyze how best to actually measure capacity for welfare and moral status, given the current state of our scientific knowledge and scientific toolset.

Although capacity for welfare and moral status are related, it’s important to keep the two concepts distinct—otherwise we will be apt to over- or underestimate the moral value of a given experience, interest, or life. In my experience, many conversations that purport to be about moral status are actually about capacity for welfare. For that reason, I initially discuss the two concepts separately. However, on some theories of moral status, capacity for welfare is a contributor to moral status. So ultimately it might make more sense to think about comparative moral value in terms of status-adjusted welfare, which is welfare weighted by the moral status of the creature for whom the welfare obtains. I discuss status-adjusted welfare after the capacity for welfare and moral status sections.
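As a bookkeeping sketch, status-adjusted welfare is just welfare multiplied by a status weight. All the numbers below are invented for illustration; a unitarian sets every status weight to 1, while a hierarchist may not:

```python
# Illustrative only: the welfare levels and status weights are invented.

def status_adjusted_welfare(welfare: float, moral_status: float) -> float:
    """Welfare weighted by the moral status of the creature for whom it obtains."""
    return welfare * moral_status

populations = [
    {"kind": "human",   "welfare": 5.0,  "status": 1.0},
    {"kind": "chicken", "welfare": -3.0, "status": 0.5},  # a hierarchist weight
]

# Hierarchist total: each creature's welfare is discounted by its status.
total = sum(status_adjusted_welfare(p["welfare"], p["status"]) for p in populations)
print(total)  # 5.0*1.0 + (-3.0)*0.5 = 3.5

# A unitarian recovers plain welfare by setting every status weight to 1.
unitarian_total = sum(status_adjusted_welfare(p["welfare"], 1.0) for p in populations)
print(unitarian_total)  # 5.0 + (-3.0) = 2.0
```

The point of the common currency is visible here: both camps sum the same quantity and disagree only about the weights.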

In what follows, I intend to adopt as theory-neutral an approach as possible. I explore the implications of a number of different plausible viewpoints in order to highlight the collection of features that might be relevant to comparing capacity for welfare and moral status across animals. There are very few knockdown arguments in this area of philosophy, and thus we should all be keenly aware of our uncertainty. When making cross-species comparisons of welfare and moral status, the best we can do is take note of where the recommendations of different theories overlap and where they diverge. Incorporating this knowledge will hopefully allow us to build interventions that are sufficiently robust in the face of our uncertainty.

Capacity for Welfare

Capacity for welfare is how good or bad a subject’s life can go. One is a welfare subject if and only if things can be non-instrumentally good or bad for it. Positive welfare is that which is non-instrumentally good for some subject; negative welfare is that which is non-instrumentally bad for some subject.[5] A subject’s capacity for welfare is the total range between a subject’s maximum positive welfare and minimum negative welfare.[6] Capacity for welfare should be distinguished from realized welfare. If capacity for welfare is how good or bad a creature’s life can go, then realized welfare is how good or bad a creature’s life actually goes. Creatures with a greater capacity for welfare have the potential to make a greater per capita difference to the world’s overall realized welfare stock.

Synchronic welfare is welfare at a particular time. Diachronic welfare is welfare over time. The fact that one creature has a greater capacity for synchronic welfare than some other creature does not entail that the creature also has a greater capacity for diachronic welfare. If one were analyzing differences in total welfare over the course of a lifetime (diachronic welfare), differential lifespans would need to be taken into account. Creatures with longer lifespans have longer to amass welfare. So even if a given creature’s capacity for welfare at any one time is lower than some other creature’s, if the former creature lives longer than the latter, it may be able to accrue more welfare. (So holding lifespans fixed, a greater capacity for synchronic welfare does entail a greater capacity for diachronic welfare.[7]) The analysis below concerns synchronic welfare. Synchronic welfare is the more fundamental concept, and it is easier to investigate, so nothing is lost by this simplification. In practice, though, when we want to compare lives saved across species, we will have to account for differential lifespans in order to estimate total welfare over the course of a lifetime, and so we will appeal to diachronic welfare.
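These definitions can be put schematically. In the sketch below, the welfare values and lifespans are arbitrary stand-ins (not estimates for any real species), and lifetime welfare is crudely treated as per-moment capacity held fixed over a life:

```python
# Schematic only: all numbers are arbitrary stand-ins, not species estimates.

def synchronic_capacity(max_positive: float, min_negative: float) -> float:
    """Capacity at a time: the total range between a subject's maximum
    positive welfare and minimum negative welfare."""
    return max_positive - min_negative

def diachronic_capacity(per_moment_range: float, lifespan: float) -> float:
    """Crude lifetime capacity: the per-moment range accrued over a lifespan."""
    return per_moment_range * lifespan

# A creature with a narrower per-moment range but a longer life...
long_lived = diachronic_capacity(synchronic_capacity(2.0, -2.0), lifespan=50)
# ...can out-accrue one with a wider per-moment range but a shorter life.
short_lived = diachronic_capacity(synchronic_capacity(10.0, -10.0), lifespan=5)

print(long_lived, short_lived)  # 200.0 100.0
```

This is why greater synchronic capacity only entails greater diachronic capacity when lifespans are held fixed.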

Capacity for welfare is how good or bad a subject’s life can go. But it’s important to note that there is no single concept capacity for welfare. One can generate multiple concepts depending on how one interprets the modal force of the ‘can’ in ‘how good or bad a subject’s life can go.’ Take some actual pig confined to a gestation crate on a factory farm. We can perhaps imagine a metaphysically possible (but physically impossible) world in which a god grants this pig her freedom and gives her the ability to reason like a superintelligent machine. If reasoning abilities generally raise capacity for welfare,[8] then, in a very broad sense of ‘can,’ this pig’s life can go very well indeed. On the other hand, if we simply ask how good or bad the actual pig’s life can go, given that she will spend her whole life in a gestation crate, then, in a narrow sense of ‘can,’ her life can only go very poorly. The first sense of ‘can’ is obviously too broad: the mere metaphysical possibility of vast pig welfare doesn’t tell us anything about how to treat actual pigs. The second sense of ‘can’ is obviously too narrow: we think it a tragedy that the pig is confined precisely because her life can go much better.

To remain a useful concept in practice, capacity for welfare must be relativized so that it encompasses all and only the normal variation of species-typical animals. In other words, the concept must be restricted so as to exclude possibilities in which a subject’s capacity for welfare is unnaturally raised or lowered. To see why, consider that with the right sort of advanced genetic engineering, it may be possible to breed a pig that is, in essence, a superpleasure machine. That is, with the right artificial brain alterations, perhaps we can create a pig that experiences pleasures that are orders of magnitude greater than the pleasures that any creature (pig or otherwise) has experienced before.[9] But even if such a scenario were physically possible, it would not tell us anything about the moral value of normal pigs in the circumstances in which we actually find them.[10] Peter Vallentyne makes much the same point by distinguishing capacity from potential. He writes, “Instead of focusing on the potential for well-being, we should, I believe, focus on the capacity for well-being. A capacity is something that can be realized now, whereas a potential is something that can be realized only at some later time after the capacity is developed. Thus, for example, most normal adults now have the potential to play a simple piece on the piano (i.e. after much practice to develop their capacities), but only a few adults now have the capacity to do so” (Vallentyne 2007: 228). In this parlance, even if a pig has the potential for extreme, god-like pleasure, that potential does not affect the pig’s capacity for pleasure (and thus does not affect the pig’s capacity for welfare).[11]

In somewhat formal terms, the capacity for welfare for some subject, S, is determined by the range of welfare values S[12] experiences in some proper subset of physically possible worlds. How wide or narrow we should circumscribe the set of relevant possible worlds will be contentious, but in general we should be guided by considerations of practicality. If we circumscribe the relevant possible worlds as tightly as possible, then only the actual world will remain in the set, and capacity for welfare will collapse to actual welfare. Obviously, that is too narrow. But if we draw the line too far in modal space, we will include some modally distant possible worlds in which S experiences abnormally large or small welfare values because S has been unnaturally altered or stimulated. These remote possibilities are generally irrelevant to resource allocation—at least in the medium term—so those worlds should not affect a subject’s capacity for welfare. We want to circumscribe the set of possible worlds so that it includes all and only normal variation in the welfare values of species-typical animals.[13]

There are two non-exclusive ways capacity for welfare might be a determinant of an animal’s characteristic moral value. The first is direct. Capacity for welfare might be one of the factors that determines an animal’s moral status. I’ll save discussion of this potential role for the section on moral status. Another way capacity for welfare might shape characteristic moral value is indirect. On this view, there’s nothing intrinsically valuable about capacity for welfare. All that matters is welfare itself. But because animals with a greater capacity for welfare are in a position to make a greater contribution to the world’s welfare—either positive or negative—they deserve more of our attention.[14] This position is usually supplemented by the claim that animals with a greater capacity for welfare tend, in fact, to attain more valuable goods and more disvaluable bads: their highs are higher, their lows, lower. Importantly, the claim that animals with a higher capacity for welfare have the potential to experience more valuable goods and more disvaluable bads is a conceptual truth. But the claim that animals with a higher capacity for welfare tend to experience more valuable goods and disvaluable bads is a contingent empirical assertion. It could be the case that some types of animals have a large capacity for welfare but in fact only oscillate within a narrow range.[15] When evaluating interventions, it is imperative that potential welfare gains and losses are compared, not merely the capacity for welfare of the animals targeted. Capacity for welfare tells us how high or low such gains or losses could be. And if capacity for welfare is correlated with disposition to welfare, it tells us even more. Thus, it is plausibly the case that the greater an animal’s capacity for welfare, the more good we can typically do by improving its life.

Variabilism vs. Invariabilism

Before tracing the implications of different conceptions of welfare, we must first ask if the same conception of welfare is applicable to all animals. Welfare variabilism is the view that the basic constituents of welfare may differ across different subjects of welfare. (For example, for one type of animal, welfare may consist in the balance of pleasure over pain; for another type of animal, welfare may consist in the satisfaction of desires.) Welfare invariabilism is the view that the same basic theory of welfare is true for all subjects of welfare.[16]

On initial inspection, welfare variabilism appears to be the more intuitive view. Richard Kraut captures the common sense behind the variabilist position fairly well. He notes that “when we think about the good of animals, our thoughts vary according to the kind of animal we have in mind. We must ask what is good for a member of this species or that, and the answer to that question will not necessarily be uniform across all species. Unimpeded flying is good—that is, good for birds. Although pleasure is good for every animal capable of feeling it, the kinds of pleasure that are good for an animal will depend on the kind of animal it is. And the stimulation of the pleasure centers of an animal’s brain may, on balance, be very bad for it if it prevents the animal from getting what it needs and engaging in the kinds of behavior that constitute a healthy life for a member of its kind” (Kraut 2007: 89).

However, a little reflection reveals that variabilism is far from the intuitive view it purports to be. For a start, it’s unclear what could ground the applicability of a theory of welfare to some animals but not others. Suppose that the capacity for unimpeded flight is a constituent of a bird’s welfare but not a fish’s welfare.[17] How could we explain this alleged fact? A natural thought is that flying is good for a bird but not for a fish. But that answer doesn’t work in this context. Recall that the constituents of an animal’s welfare are those things that are non-instrumentally good for it. So we can’t explain the claim that flying is non-instrumentally good for a bird but not a fish by appealing to the very claim that flying is non-instrumentally good for a bird but not a fish.

Rather than appealing directly to the claim that flying is good for a bird but not for a fish, we might instead appeal to certain facts about the nature of birds and fish. Birds must reach high places to mate, they must survey the ground from high distances to find food, they must take to the air to avoid predators, and so on.[18] None of these claims is true of fish. Here, however, we must remember the definition of welfare: positive welfare is that which is non-instrumentally good for some subject. If unimpeded flight is only good for birds in virtue of what it allows birds to accomplish, then it is not non-instrumentally good. Indeed, even though fish and birds are very different types of creatures, it seems they both benefit from a similar good, namely unimpeded movement, and it is this fact that explains why birds benefit from unimpeded flight.[19] Of course, unimpeded movement is not itself a very plausible candidate for a non-instrumental good. Animals move in order to do other things, such as eat, mate, or play—generalizing a bit, we might say that they move in order to satisfy desires, seek pleasures, and avoid pains—and it is the ability to partake of these sorts of activities which more plausibly contributes to an animal’s welfare.[20]

Welfare invariabilism is not committed to the claim that the constituents of welfare are accessible to all welfare subjects. As I show below, some theories of welfare posit welfare constituents that certain nonhuman animals plausibly cannot obtain. Theoretical contemplation, for instance, may be a constituent of welfare, but it is not an activity in which fish are likely to engage.[21] If some elements of welfare are inaccessible to some animals but not others, then welfare invariabilism can recover some of the intuitive pull of welfare variabilism. When we think about the welfare of animals, it is important that we specify the type of animal under discussion. The reason isn’t that certain theories of welfare apply to some animals and not others; the reason is that some welfare constituents are available to some animals but not others. If we want to improve the welfare of some animal, we need to know which welfare goods an animal is capable of appreciating.

If welfare is a unified concept and if welfare is a morally significant category across species, it seems as if invariabilism is the better option. Invariabilism is the simpler view, and it avoids the explanatory pitfalls of variabilism at little intuitive cost. While we should certainly leave open the possibility that variabilism is the correct view, in what follows I will assume invariabilism.[22]

Theories of Welfare and Their Capacity Implications

Determining the ideal allocation of resources among different types of animals will require making comparisons of welfare across disparate groups of animals. Making comparisons of welfare across disparate groups of animals will require, among other things, understanding the constituents of welfare for different animals. In this section I discuss in broad strokes the manner in which different theories of welfare postulate differences in capacity for welfare. (I here set aside the practical difficulty of actually developing empirically reliable metrics for measuring capacity for welfare. I take up this difficulty in the second entry in the series.)

Traditionally, theories of welfare are divided into three categories: hedonistic theories, desire-fulfillment theories, and objective list theories.[23] According to hedonistic theories of welfare, welfare is the balance of experienced pleasure and pain.[24] According to desire-fulfillment theories of welfare, welfare is the degree to which one’s desires are satisfied.[25] According to objective list theories of welfare, welfare consists of the achievement, creation, instantiation, or possession of certain objective goods, such as love, knowledge, freedom, virtue, beauty, friendship, justice, wisdom, or happiness.[26]

Evaluating the implications of these three families of theories for nonhuman animals is not easy, in no small part due to the large internal variation within the families of theories, the details of which would take us too far afield from the present topic.[27] Nonetheless, some general remarks can illuminate the manner in which a theory of welfare can bear on differences in capacity for welfare across species. There are two non-exclusive ways animals might differ in their capacity for welfare: they might differ with respect to the number of welfare constituents they can attain, or they might differ with respect to the degree to which they can attain those welfare constituents. An animal that can attain more kinds of welfare goods and more of those goods will have a higher capacity for welfare than an animal that lacks access to as many and as much.

On some theories of welfare, certain welfare constituents will be inaccessible to many nonhuman animal welfare subjects.[28] This fact is most obvious for objective list theories. The basic idea is that “the range of forms and levels of well-being that are in principle accessible to an individual is determined by that individual’s cognitive and emotional capacities and potentials. The more limited an individual’s capacities are, the more restricted his or her range of well-being will be. There are forms and peaks of well-being accessible to individuals with highly developed cognitive and emotional capacities that cannot be attained by individuals with lower capacities” (McMahan 1996: 7). Suppose that one believes that the constituents of welfare are varied and include love, friendship, knowledge, freedom, virtue, wisdom, and pleasure. A species-typical adult human being can experience any of these goods. For many nonhuman animals, however, differences in capacities will render some of these goods unattainable. Octopuses are solitary creatures and thus plausibly will never experience true friendship or love. If theoretical contemplation is a requirement for wisdom, then frogs plausibly will never experience true wisdom. If moral agency is a requirement for virtue, fish plausibly cannot be virtuous. Hence, if some form of objective list theory is correct, and the constituents of welfare are as philosophers have generally described them,[29] then many nonhuman animals will have a lower capacity for welfare than species-typical adult human beings.[30]

Hedonists of a certain stripe might also hold that some welfare constituents are inaccessible to nonhuman animals. According to traditional accounts of hedonism, the value of a given pleasurable experience is the product of the experience’s intensity and its duration. However, the hedonist John Stuart Mill added a third component to this calculation: the quality of the pleasure. Mill distinguished so-called higher pleasures from so-called lower pleasures. According to Mill, both humans and nonhuman animals can experience lower pleasures, but only humans have access to higher pleasures. Higher pleasures make a greater contribution to welfare than lower pleasures, and for this reason Mill famously contended that “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question. The other party to the comparison knows both sides” (Mill 1861: chapter 2).[31]
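The contrast between the traditional and Millian calculations can be sketched as follows. The specific numbers and the form of the quality multiplier are my own illustration, not anything Mill specifies:

```python
# Illustrative only: intensities, durations, and quality weights are invented.

def hedonic_value(intensity: float, duration: float, quality: float = 1.0) -> float:
    """Traditional hedonism: value = intensity * duration (quality fixed at 1).
    A Millian hedonist additionally weights 'higher' pleasures via quality."""
    return intensity * duration * quality

lower = hedonic_value(intensity=8.0, duration=1.0)                 # a 'lower' pleasure
higher = hedonic_value(intensity=2.0, duration=1.0, quality=10.0)  # a 'higher' pleasure

# On the traditional view the intense lower pleasure wins (8 > 2); on the
# Millian view the quality-weighted higher pleasure outweighs it.
print(lower, higher)  # 8.0 20.0
```

On a Millian scheme like this, creatures without access to higher pleasures are capped at the unweighted portion of the scale, which is one way a hedonist can derive differences in capacity for welfare.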

Even if a theory of welfare holds that its welfare constituents are accessible to all welfare subjects, human and nonhuman alike, it might be the case that animals characteristically differ with respect to the degree to which they can attain those welfare constituents. Take hedonism, for example. Suppose one rejects Mill’s distinction between higher and lower pleasures, so that the value of a pleasurable experience is just the product of its intensity and duration. It could be the case that differences in social, emotional, or psychological capabilities affect the characteristic intensity of pleasurable (and painful) experiences.[32] (Differences in neuroanatomy might even affect the characteristic duration of animal experiences.[33]) Many philosophers believe that differences in capacities affect the characteristic phenomenal range of experience. For example, Peter Singer writes, “There are many areas in which the superior mental powers of normal adult humans make a difference: anticipation, more detailed memory, greater knowledge of what is happening and so on. These differences explain why a human dying from cancer is likely to suffer more than a mouse” (Singer 2011: 52). Peter Vallentyne writes, “The typical human capacity for well-being is much greater than the typical mouse capacity for well-being. Part of well-being (what makes a life go well) is the presence of pleasure and the absence of pain. The typical human capacity for pain and pleasure is no less than that of mice, and presumably much greater, since we have, it seems plausible, more of the relevant sorts of neurons, neurotransmitters, receptors, etc. In addition, our greater cognitive capacities amplify the magnitude of pain and pleasure” (Vallentyne 2007: 213).[34]

There are, however, countervailing considerations. While it’s true that sophisticated cognitive abilities sometimes amplify the magnitude of pain and pleasure, those same abilities can also act to suppress the intensity of pain and pleasure.[35] When I go to the doctor for a painful procedure, I know why I’m there. I know that the procedure is worth the pain, and perhaps most importantly, I know that the pain is temporary. When my dog goes to the vet for a painful procedure, she doesn’t know why she’s there or whether the procedure is worth the pain, and she has no idea how long the pain will last.[36] It seems intuitively clear that in this case superior cognitive ability reduces rather than amplifies the painful experience.[37]

Another way to potentially get a handle on the phenomenal intensity of nonhuman experience is to consider the evolutionary role that pain plays. Pain teaches us which stimuli are noxious, how to avoid those stimuli, and what we ought to do to recover from injury. Because intense pain can be distracting, animals in intense pain seem to be at a selective disadvantage compared to conspecifics not in intense pain. Thus, we might expect evolution to select for creatures with pains just phenomenally intense enough (on average) to play the primary instructive role of pain. Humans are among the most cognitively sophisticated animals on the planet, plausibly the animals most likely to pick up on patterns in signals only weakly conveyed. In general, less cognitively sophisticated animals probably require stronger signals for pattern-learning. If pain is the signal, then we might reasonably expect the phenomenal intensity of pain to correlate inversely with cognitive sophistication.[38] If that’s the case, humans might experience (on average) the least intense pain in all the animal kingdom.

These considerations are important and often overlooked, but ultimately they are orthogonal to the current discussion. The question is not whether differences in characteristics contribute to the realization of more or less welfare but whether these differences contribute to the capacity for more or less welfare. I think the answer to the latter question is clearer than the answer to the former. Advanced social, emotional, and intellectual complexity opens up new dimensions of pleasure and suffering that widen the range of experience. Martha Nussbaum puts the point this way: “More complex forms of life have more and more complex capabilities to be blighted, so they can suffer more and different types of harm. Level of life is relevant not because it gives different species differential worth per se, but because the type and degree of harm a creature can suffer varies with its form of life” (Nussbaum 2004: 309). For example, the combination of physical and emotional torture plausibly generates the possibility of greater overall pain than physical torture alone. Conversely, the combination of physical and emotional intimacy plausibly generates the possibility (whether typically realized or not) of greater overall pleasure than physical intimacy alone.[39] Analogous considerations apply to objective list theories. Such theories postulate that differences in social, emotional, and cognitive capacities affect the degree to which many intrinsic goods can be obtained.

Desire-fulfillment theories also appear to predict differences in capacity for welfare. Some authors have argued that because “[h]uman desires are more numerous and more complex than those of nonhumans” (Crisp 2003: 760), species-typical adult humans have a greater capacity for welfare than nonhuman animals. This argument can be challenged on several fronts. First, it’s not obvious why cognitive, affective, or social sophistication should affect the number of desires an animal has. For every flower in the meadow, a honey bee might have a strong desire to visit that particular flower. These desires would all be of the same type, but they would be numerous. Second, it’s not clear what the relationship is between welfare and the number of satisfied desires. Derek Parfit (1984: 497) offers an objection to the simple view according to which welfare increases summatively with the number of satisfied desires. An addict might experience a strong desire to take her drug of choice every few minutes and satisfy that desire. But even if the addict’s life contains many more satisfied desires than the non-addict’s, it seems the non-addict leads a better life. Third, even granting that humans have many complex desires and that the more desires one has, the higher one’s capacity for welfare, desire strength still needs to be accounted for. A praying mantis’s desire to mate might be stronger than any desire humans ever experience. Together, these considerations cast some doubt on the claim that desire-fulfillment theories of welfare are committed to the position that humans generally have a greater capacity for welfare than nonhuman animals. These considerations don’t, however, suggest that capacity for welfare is uniform across all animals. It’s uncertain which characteristics affect desire strength, number, and complexity, but whatever those characteristics are, it’s plausible that they vary across species.

The bottom line is that most (though not all) plausible theories of welfare suggest differences in capacity for welfare among animals.[40] The exact differences and their magnitudes depend on the details of the theories and on various empirical facts. For our purposes, what’s important is that many (though not all) of the features that plausibly influence capacity for welfare also recur in the literature on moral status, discussed below. The overlap between features that are relevant to capacity for welfare and features that are relevant to moral status sometimes begets conceptual confusion that hinders clear thinking on this complicated topic. But the overlap also makes the empirical investigation of properties relevant to the ideal allocation of resources among animals somewhat simpler.

Moral Status

We turn now to moral status and begin with some basic definitions. An entity has moral standing[41] if and only if it has some intrinsic moral worth (no matter how small).[42] The interests of an entity with moral standing must be considered in (ideal) moral deliberation; the interests of an entity with moral standing cannot (morally) be ignored, though its interests can be overridden by the interests of other entities with moral standing. Put another way, an entity with moral standing can be wronged. You can damage a coffee mug, but you can’t wrong a coffee mug (though by damaging the coffee mug you might wrong its owner).

Philosophers have generally proposed two features which might, either independently or in conjunction, confer moral standing: sentience and agency. Sentience in this context is the capacity for valenced experience or, more simply, the ability to feel pleasures and pains.[43] Agency in this context is the capacity to possess desires, plans, and preferences.[44] Almost certainly, all sentient agents have moral standing.[45] It’s likely that sentience is sufficient on its own for moral standing, though that view is just slightly more controversial. The view that agency on its own is also sufficient for moral standing is more controversial still and hangs on substantive disagreements about the nature of agency.[46]

Defining moral status is trickier.[47] David DeGrazia writes, “Moral status is the degree (relative to other beings) of moral resistance to having one’s interests—especially one’s most important interests—thwarted,” adding “A and B have equal moral status, in the relevant sense, if and only if they deserve equal treatment” (DeGrazia 1991: 74). Thomas Douglas writes, “To say that a being has a certain moral status is, on this view, roughly to say that it has whatever intrinsic non-moral properties give rise to certain basic moral protections,” adding “[o]ther things being equal, a being with higher moral status will enjoy stronger and/or broader basic rights or claims than a being of lesser moral status” (Douglas 2013: 476). And Shelly Kagan writes, “The crucial idea remains this: other things being equal, the greater the status of a given individual, the more value there is in any given unit of welfare obtaining for that individual” (Kagan 2019: 109). For our purposes, I’ll let moral status be the degree to which the interests of an entity with moral standing must be weighed in (ideal) moral deliberation or the degree to which the experiences of an entity with moral standing matter morally.

Strictly speaking, moral status is a property of individuals. However, in both the philosophical literature on the subject and in informal discussions, it’s common for authors to ascribe moral status to species. One might speak of the moral status of cows or chickens. Moral status is ascribed to higher taxonomic ranks too. One might speak of the moral status of octopuses (an order) or the moral status of insects (a whole class). Moral status is even ascribed to groups that lack a taxonomic correlate, like fish. (‘Fish’ is a gerrymandered grouping of three evolutionarily distinct classes.[48])

In all these cases, ascription of moral status to a taxonomic group is non-literal. Taxonomic groups are abstract entities. They are neither sentient nor autonomous. They don’t have moral standing, let alone moral status.[49] An ascription of some level of moral status to ants, say, is shorthand for one of three things. It might mean that all (or perhaps the vast majority of) ants have the exact same moral status. (This is more plausible if there are relatively few levels of moral status.) It might refer to the average (either mean or median) moral status of ants. Or it might signify the moral status of a ‘species-typical’ ant, which may come apart from the average moral status of actual ants. In any of these cases, the ascription may be restricted to species-typical adult members of the group or it may apply to all individuals within the taxon.

Degrees of Moral Status

A central question in the literature on moral status is whether moral status admits of degrees. There are two main positions with regard to this question: (1) the unitarian view, according to which there are no degrees of moral status, and (2) the hierarchical view, according to which the equal interests/experiences of two creatures will count differently (morally) if the creatures have differing moral statuses.

Peter Singer is a representative proponent of the unitarian view.[50] Singer writes, “Pain and suffering are bad and should be prevented or minimized, irrespective of the race, sex or species of the being that suffers. How bad a pain is depends on how intense it is and how long it lasts, but pains of the same intensity and duration are equally bad, whether felt by humans or animals” (Singer 2011: 53). This view follows from what Singer calls the principle of equal consideration of interests, which entails that “the fact that other animals are less intelligent than we are does not mean that their interests may be discounted or disregarded” (Singer 2011: 49). However, as Singer and other unitarians are quick to stress, even though intelligence doesn’t confer any additional intrinsic value on a creature, it’s not as if cognitive sophistication is morally irrelevant. Recall the Singer quote discussed above: “There are many areas in which the superior mental powers of normal adult humans make a difference: anticipation, more detailed memory, greater knowledge of what is happening and so on. These differences explain why a human dying from cancer is likely to suffer more than a mouse” (Singer 2011: 52). So for Singer and other unitarians, even though mice and humans have the same moral status, it doesn’t follow that humans and mice have the same capacity for welfare. Hence, alleviating human and mouse suffering may not have equal moral importance. Humans are cognitively, socially, and emotionally more complex than mice, so in many cases it will make sense to prioritize human welfare over mouse welfare.

Shelly Kagan is a representative proponent of the hierarchical view.[51] He writes, “A hierarchical approach to normative ethics emerges rather naturally from two plausible thoughts. First, the various features that underlie moral standing come in degrees, so that some individuals have these features to a greater extent than others do (or in more developed or more sophisticated forms). Second, absent some special explanation for why things should be otherwise, we would expect that those who do have those features to a greater extent would, accordingly, count more from the moral point of view. When we put these two thoughts together they constitute what is to my mind a rather compelling (if abstract) argument for hierarchy” (Kagan 2019: 279). The basic idea is that moral standing is grounded in the capacity for welfare and the capacity for rational choice. Plausibly, some animals have a greater capacity for welfare and rational choice than others. If possessing the capacity for welfare and rational choice confers moral status, then the possession of those capacities to a greater degree should confer more moral status.

The question of whether moral status admits of degrees also intersects with the question of the distribution of realized welfare among animals. Tatjana Višak (2017: 15.5.1 and 15.5.2) argues that any welfare theory that predicts large differences in realized welfare between humans and nonhuman animals must be false because, given a commitment to prioritarianism[52] or egalitarianism,[53] such a theory of welfare would imply that we ought to direct resources to animals that are almost as well-off as they possibly could be. For example, suppose for the sake of argument that a mouse’s capacity for welfare maxes out at 10 on some arbitrary scale and a human’s capacity for welfare maxes out at 100 on the same scale. If there is a human being who currently scores 10 out of 100 and a mouse that currently scores 9 out of 10, prioritarianism and egalitarianism imply, all else equal, that we ought to increase the welfare of the mouse before increasing the welfare of the human. Even for those of us who care about mouse welfare, this seems intuitively like the wrong result. After all, the mouse is doing almost as well as it possibly could be, whereas the human is falling well short of her natural potential.

Kagan agrees that this result is intuitively unacceptable. He writes, “I find it impossible to take seriously the suggestion that this inequality is, in and of itself, morally objectionable—that the mere fact mice are worse off than us is morally problematic, and so we are under a pressing moral obligation to correct this inequality. Yet that does seem to be the conclusion that is forced upon us if we embrace both egalitarianism and unitarianism” (Kagan 2019: 65). Rather than fault theories of welfare that predict unequal distributions of welfare, Kagan invokes degrees of moral status to resolve the conflict of intuitions.[54] By adjusting level of welfare to account for moral status, Kagan’s position delivers the verdict that prioritarianism and egalitarianism need not prioritize a mouse’s welfare over a human’s welfare, even if the mouse’s welfare is lower in absolute terms than the human’s welfare.[55]
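One simple way to model this move (a hedged sketch, not Kagan’s own formalism) is to have the prioritarian rank options by the marginal moral value of one extra unit of welfare, computed with a concave priority function and then weighted by moral status. All numbers below, including the mouse’s status weight, are arbitrary assumptions for illustration:

```python
import math

# Sketch of the mouse/human example: a prioritarian directs the next unit of
# resources to whoever's extra welfare unit produces the most moral value,
# using a concave (priority-to-the-worse-off) function. Numbers are invented.

def marginal_moral_value(welfare, status=1.0):
    """Moral value of raising an individual's welfare by one unit, with
    concave prioritarian weighting, optionally adjusted by moral status."""
    return status * (math.sqrt(welfare + 1) - math.sqrt(welfare))

human, mouse = 10, 9  # current welfare levels from the example above

# Unitarian prioritarianism (equal status): the worse-off mouse comes first.
assert marginal_moral_value(mouse) > marginal_moral_value(human)

# Hierarchical version with a hypothetical status weight for the mouse:
# the human now comes first, even though the mouse is worse off absolutely.
assert marginal_moral_value(mouse, status=0.05) < marginal_moral_value(human, status=1.0)
```

The square-root function here merely stands in for whatever concave weighting a prioritarian prefers; the point is that the status weight and the priority weight pull in opposite directions, and a low enough status weight reverses the unadjusted verdict.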

Ultimately, from a practical standpoint, the difference between the unitarian approach and the hierarchical approach may not be very deep. It might be thought that although the hierarchical approach can countenance prioritizing animals according to their moral value, the unitarian approach cannot. As we’ve already seen, however, that’s not the case. A unitarian like Singer believes that similar pains count similarly, no matter whether it’s a mouse or a human that experiences them. But it doesn’t follow from this claim that mouse lives have the same moral value as human lives. Indeed, there is broad consensus among unitarians that mouse lives don’t have the same value as human lives. Proponents of both camps agree that some animals are more valuable than others.

For instance, Martha Nussbaum, a unitarian, writes, “Almost all ethical views of animal entitlements hold that there are morally relevant distinctions among forms of life. Killing a mosquito is not the same sort of thing as killing a chimpanzee” (Nussbaum 2004: 308). Elizabeth Harman, another unitarian, makes a similar point: “Consider a healthy adult person’s sudden painless death in the prime of life and a cat’s sudden painless death in the prime of life. Both of these deaths deprive their subjects of future happiness. But the person’s death harms the person in many ways that the cat’s death does not harm the cat. The person’s future plans and desires about the future are thwarted. The shape of the person’s life is very different from the way he would want it to be. The person is deprived of the opportunity to come to terms with his own death and to say goodbye to his loved ones. None of these harms are suffered by the cat. Therefore, the person is more harmed by his death than the cat is harmed by its death” (Harman 2003: 180). Even Singer admits, “When we come to consider the value of life, we cannot say quite so confidently that a life is a life and equally valuable, whether it is a human life or an animal life. It would not be speciesist to hold that the life of a self-aware being, capable of abstract thought, of planning for the future, of complex acts of communication and so on, is more valuable than the life of a being without these capacities” (Singer 2011: 53).[56]

In this respect, the unitarian view is hardly distinguishable from the hierarchical view. Jean Kazez, a proponent of the hierarchical approach, writes, “If a life goes well or badly based (at least partly) on the way capacities are exercised, then what is built-in value, more precisely? It’s natural to think of it in terms of capacities themselves. The more valuable of two lives is the one that could amount to more, over a lifetime, if both individuals had a chance to ‘be all that you can be.’ If capacities are what give value to a life, then to compare animal and human lives, we must compare animal and human capacities” (Kazez 2010: 86). In broad outline, the traits, features, and psychological capabilities that, for the proponent of the hierarchical view, determine moral status are the same sorts of traits, features, and psychological capabilities that do the heavy lifting for the unitarian in ensuring there is an ordering of capacity for welfare. Indeed, this connection is, for the hierarchy proponent, no accident. Kagan writes, “So lives that are more valuable by virtue of involving a greater array of goods, or more valuable forms of those goods, will require a greater array of psychological capacities, or at least more advanced versions of those capacities. [...] More advanced capacities make possible more valuable forms of life, and the more advanced the capacities, the higher the moral status grounded in the possession of those very capacities” (Kagan 2019: 121). So if asked how to allocate resources across dissimilar animal taxa, both views would appeal to the same general sorts of features, even if the underlying theoretical role those features play in the respective views is different.

What Determines Moral Status

Suppose for the moment that moral status does admit of degrees. To understand where animals rank in terms of moral status, we must first understand why moral status differs across the animal kingdom. Kagan tells us that “if people have a higher moral status than animals do, then presumably this is by virtue of having certain features that animals lack or have in a lower degree. Similarly, if some animals have a higher status than others, then the former too must have some features that the latter lack, or that the latter have to an even lower degree” (Kagan 2019: 112). What are these features? Philosophers have proposed a long list of capacities that plausibly contribute to moral status. Kagan mentions abstract thought, creativity, long-term planning, self-awareness, normative evaluation, and self-governance (Kagan 2019: 125-126). Kazez invokes intelligence, autonomy, creativity, nurturing, skill, and resilience (Kazez 2010: 93). DeGrazia cites cognitive, affective, and social complexity, moral agency, autonomy, capacity for intentional action, rationality, self-awareness, sociability, and linguistic ability (DeGrazia 2008: 193). None of these authors claim that their lists are exhaustive.

Another idea is that capacity for welfare itself plays a large role in determining moral status. Both Peter Vallentyne (2007: 228-230) and Kagan (2019: 279-284) have argued that moral standing is grounded in the capacity for welfare and the capacity for rational choice. Because those capacities admit of degrees, they argue, moral status too must come in degrees.[57] There are two possible readings of these positions. One reading is that capacity for welfare directly determines (at least in part) moral status. The other reading is that moral status is grounded in various capacities that also happen to be relevant for determining capacity for welfare. The first interpretation runs the risk of double-counting. Even before considering moral status, we can say that lives that contain more non-instrumental goods are more valuable than lives that contain fewer of those goods. It’s not clear why those lives should gain additional moral value—in virtue of a higher moral status—merely because they were more valuable in the first place. For this reason, I think it makes more sense to hold that capacity for welfare does not play a direct role in determining moral status, though many of the features relevant for welfare capacity are also relevant for moral status.

Most, if not all, of the capacities discussed above come in degrees. An animal can be more or less sociable, more or less intelligent, and more or less creative. So if two animals have all of these capacities, but the first animal has the capacities to a much greater extent, the first animal will have a higher moral status. In Kagan’s words: “Psychological capacities play a role in grounding one’s status. And statuses differ, precisely because these capacities seem to come in varieties that differ in terms of their complexity and sophistication. That is to say, some types of animals have a greater capacity for complex thought than others, or can experience deeper and more sophisticated emotional responses” (Kagan 2019: 113). Of course, even if we were confident that philosophers had identified the full list of features relevant to the determination of moral status—and philosophers themselves are not confident that they have—many problems would remain.[58]

One problem is whether and how to weight the features. Octopuses are incredibly intelligent, creative creatures—but they are also deeply asocial. Ants are plausibly much less intelligent and creative, but they tend to live in densely populated mounds, with so-called supercolonies containing millions of individual ants. Kazez frames the problem this way: “There are many capacities to which we assign positive value, but we don’t always have a definite idea of their relative values. If we’re trying to rank bower birds, crows, and wolves, it depends what’s more valuable, artistic ability (which favors the bower bird) or sheer intelligence (which favors the crow) or sociability (which favors the wolf). We’re not going to be able to put these three species on separate rungs of a ladder, in any particular order, and neither is the situation quite as crisp as a straightforward tie. We just don’t know how to assign them a place on the ladder, relative to each other” (Kazez 2010: 87-88).[59]

A further complication is what Harman calls combination effects: “A property might raise the moral status of one being but not another, because it might raise moral status only when combined with certain other properties” (Harman 2003: 177-178). For example, it might be the case that a certain degree of autonomy is required before some prosocial capacities contribute to moral status. Maybe nurturing behavior that is entirely pre-programmed and instinctive counts for less than love freely given. Honey bees and cows both care for their young, but if we think cows have a greater capacity for rational choice than honey bees, then the same level of juvenile guardianship might raise the moral status of cows more than that of honey bees.[60]

There is also the question of whether moral status is continuous or discrete. If moral status is continuous, then on some arbitrary scale (say 0 to 1), an individual’s moral status can in theory take on any value. If moral status is discrete, then there are tiers of moral status. Arguments can be marshalled for either position. On the one hand, it seems as if many of the features that ground moral status—such as general intelligence, creativity, and sociability—vary more or less continuously. Hence, even if for practical purposes we ascribe moral status in tiers, we should acknowledge moral status’s underlying continuity. On the other hand, continuity of moral status raises a number of intuitive conflicts. Many people have the intuition that human babies have the same moral status as human adults despite the fact that adults are much more cognitively and emotionally sophisticated than babies.[61] Many people also have the intuition that severely cognitively impaired humans, whose intellectual potential has been permanently curtailed, have the same moral status as species-typical humans.[62] And many people have the intuition that normal variation in human intellectual capacities makes no difference to moral status, such that astrophysicists don’t have a higher moral status than social media influencers.[63] These intuitions are easier to accommodate if moral status is discrete.[64]

A further question is, if moral status is discrete, how many tiers of moral status are there? Kagan conjectures there are only about six levels of moral status (Kagan 2019: 293). He writes, “The idea here would be to have not only a relatively small number of groupings, but also a relatively easy way to assign a given animal to its relevant group. After all, it would hardly be feasible to expect us to undertake a detailed investigation of a given animal’s specific psychological capacities each time we were going to interact with one. This makes it almost inevitable that in normal circumstances we will assign a given animal on the basis of its species (or, more likely still, on the basis of even larger, more general biological categories)” (Kagan 2019: 294).[65] If there are only a handful of tiers, getting the exact number right is going to be important. A model on which there are five tiers of moral status could have drastically different implications for how we should allocate resources than a model with seven tiers of moral status.

Finally, if moral status is discrete, we need to know how much more valuable each tier is than the preceding tier. Is it a linear scale or logarithmic? Something else entirely? Is the top tier only marginally better than the next-highest tier? Is it twice as valuable? Ten times as valuable? Again, different answers to these questions could have drastically different implications for how we should allocate resources across animals.
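To see how much the spacing between tiers can matter, consider a small numerical sketch (all tier weights here are hypothetical): with six tiers, a linear scale makes the top tier six times as valuable as the bottom, while a scale that doubles at each tier makes it thirty-two times as valuable, which can flip allocation decisions.

```python
# Hypothetical illustration of how tier spacing changes comparative value.
# Six tiers of moral status, indexed 1 (lowest) to 6 (highest).

linear = {tier: tier for tier in range(1, 7)}               # weights 1, 2, 3, 4, 5, 6
doubling = {tier: 2 ** (tier - 1) for tier in range(1, 7)}  # weights 1, 2, 4, 8, 16, 32

# Ratio of the top tier's status weight to the bottom tier's:
print(linear[6] / linear[1])      # 6.0
print(doubling[6] / doubling[1])  # 32.0

# Compare one unit of welfare for a tier-6 individual against ten units
# for a tier-1 individual. The verdict depends entirely on the scale:
print(10 * linear[1] > 1 * linear[6])      # True  (help the tier-1 individuals)
print(10 * doubling[1] > 1 * doubling[6])  # False (help the tier-6 individual)
```

The same ten-to-one trade goes one way on the linear scale and the other way on the doubling scale, which is the sense in which "drastically different implications" follow from the choice of scale.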

Status-Adjusted Welfare

As I’ve em­pha­sized, ca­pac­ity for welfare and moral sta­tus are dis­tinct con­cepts. Nonethe­less, they are closely re­lated, both in the­o­ret­i­cal and prac­ti­cal terms. In the­o­ret­i­cal terms, ca­pac­ity for welfare is po­ten­tially rele­vant for de­ter­min­ing moral sta­tus. In prac­ti­cal terms, any­one in­ter­ested in com­par­ing the moral value of differ­ent an­i­mals will have to grap­ple with both po­ten­tial differ­ences in ca­pac­ity for welfare and po­ten­tial differ­ences in moral sta­tus. It would be con­ve­nient, then, if there were a sin­gle term that could cap­ture both welfare and moral sta­tus. For­tu­nately, there is.

Status-adjusted welfare is welfare weighted by the moral status of the creature for whom the welfare obtains.[66] It’s calculated by multiplying quantity of welfare by some number between 0 and 1, with 1 being the highest moral status and 0 being no moral standing. Status-adjusted welfare is neutral on the question of degrees of moral status. Unitarians assign all creatures with moral standing the same moral status, so for the unitarian, status-adjusted welfare just collapses to welfare. Status-adjusted welfare is a useful common currency both unitarians and hierarchists can use to frame debates. Of two interventions, all other things being equal, both camps will prefer the intervention that produces the higher quantity of status-adjusted welfare.
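The definition can be sketched directly. In the toy comparison below, the species, welfare gains, and status weights are all invented for illustration; note how setting every weight to 1, as a unitarian would, collapses status-adjusted welfare back to plain welfare:

```python
# Minimal sketch of status-adjusted welfare as defined above.
# Status weights and welfare gains are hypothetical placeholders.

def status_adjusted_welfare(welfare, status):
    """Welfare weighted by moral status, where status lies in [0, 1]:
    1 = highest moral status, 0 = no moral standing."""
    assert 0 <= status <= 1
    return welfare * status

def intervention_value(effects, status_weights):
    """Total status-adjusted welfare an intervention produces across species."""
    return sum(status_adjusted_welfare(gain, status_weights[species])
               for species, gain in effects.items())

hierarchist = {"chicken": 0.5, "fish": 0.25}  # hypothetical status weights
unitarian = {"chicken": 1.0, "fish": 1.0}     # equal status for all

effects_a = {"chicken": 100}  # intervention A: 100 welfare units for chickens
effects_b = {"fish": 500}     # intervention B: 500 welfare units for fish

print(intervention_value(effects_a, hierarchist))  # 50.0
print(intervention_value(effects_b, hierarchist))  # 125.0
print(intervention_value(effects_a, unitarian))    # 100.0 (collapses to welfare)
print(intervention_value(effects_b, unitarian))    # 500.0
```

On these made-up numbers both camps happen to prefer intervention B, but the hierarchist's margin is much narrower; with a sufficiently low fish weight the two camps would disagree.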

I began this post by posing a fundamental question: what is the ideal allocation of resources among different groups of animals? One good answer[67] is: whatever allocation maximizes status-adjusted welfare. Reflection on status-adjusted welfare might change the way we hope to allocate resources. It seems to me that much of the animal welfare movement’s allocative decision-making implicitly assumes an ordering of animals by moral status or capacity for welfare. Fish are exploited in greater numbers than mammals and birds, but fish are generally perceived to be less cognitively and emotionally complex than mammals and birds, and thus their interests and experiences are given less weight. Arthropods are exploited in even greater numbers than fish, but they are generally perceived to be even less cognitively and emotionally complex and thus are afforded even less weight. These judgments appear to be largely intuition-driven, informed by neither deep philosophical rumination nor robust empirical investigation. As such, most of these judgments are unjustified (though not exactly irrational). Maybe these judgments are true. Maybe they are not. More likely, they aren’t really precise enough to evaluate. It’s one thing to say that mammals have a greater capacity for welfare or higher moral status than fish. It’s another thing to say how much higher. Two times higher? Five times higher? A thousand times higher? If the goal is to maximize status-adjusted welfare, then the answer matters.

Objections

The main contention of this post is that considerations of moral status and capacity for welfare could change the way we wish to allocate resources among animals and between human and non-human cause areas. In this section I consider five objections to that contention.

Won’t Intensity of Suffering Swamp Concerns about Moral Status and Capacity for Welfare?

The conditions in which various animals are raised differ markedly. The life of a pasture-raised beef cow is very different from, and probably much better than, the life of a battery-caged layer hen. These differences need to be accounted for when evaluating the cost-effectiveness of an intervention. All other things equal, an intervention that reduces the stock of factory-farmed chickens is probably more impactful than a similar intervention that reduces the stock of pasture-raised cows.[68] Of course, measuring the comparative suffering of different types of animals is not always easy. Nonetheless, it does appear that we can get at least a rough handle on which practices generally inflict the most pain, and several experts have produced explicit welfare ratings for various groups of farmed animals that seem to at least loosely converge.[69] Our understanding of moral status and capacity for welfare is comparatively much weaker, and very few informative, authoritative estimates of comparative moral status have been produced. The estimates that do exist vary widely, and the ranges are large.[70] Thus, according to this objection, data on intensity of suffering will generally swamp our tentative, uncertain concerns about moral status and capacity for welfare.

The first point to note about this objection is that it is merely a practical objection. If we did possess reliable data on moral status and capacity for welfare, nothing in this objection suggests that we should ignore it or that such information would inevitably be less important than intensity-of-suffering considerations. It's certainly true that determining comparative moral value is a daunting task. But daunting is not the same as impossible. Determining which animals are sentient is also a daunting task, but it appears possible to at least make some progress on that question. Given a similar effort, it's plausible that we could make progress on questions of moral status and capacity for welfare. Hence, even if it's currently the case that intensity-of-suffering considerations swamp moral status and capacity for welfare considerations in our decision-making, there's no reason this need always be the case.

Secondly, it's not so clear that we do possess an adequate understanding of relative suffering among different groups of animals. There are a number of experts and animal welfare groups who have rated the welfare conditions of farmed mammals and birds. Even if these ratings were generally in agreement and generally accurate, they would only cover a small fraction of animals directly exploited by humans. Aquaculture has exploded over the last three decades,[71] and the animal welfare movement has only recently begun to grapple with the welfare implications of aquaculture's rise. Still less attention is devoted to other species. More than 290 million farmed frogs are slaughtered every year for food. More than 2.9 billion farmed snails are slaughtered per year for food (plus more for their slime). And more than 22 billion cochineal bugs are slaughtered annually just to produce carmine dye.[72] Even if the numbers-plus-suffering approach is the right one, we still have a lot of work to do to understand the conditions in which different groups of animals are raised.

Finally, understanding differences in capacity for welfare is directly relevant for determining relative suffering across different groups of animals. Consider two worrisome trends on the horizon. Entomophagy is steadily gaining wider acceptance, and as a result, new insect farms are opening every year and old ones are ramping up production. Meanwhile, the demand for octopus meat continues to outpace wild-caught supply, and as a result, groups in Spain and Japan are developing systems to intensively farm octopuses. It's difficult to know in advance which trend will produce more suffering. However, if we had a better understanding of the differences in capacity for welfare between insects and cephalopods, we might be able to make better predictions.

Aren’t Capacity for Welfare and Moral Status Multidimensional, Action-Relative, or Context-Sensitive?

One worry is that capacity for welfare and moral status might be significantly more complicated than I have thus far presented them. In the discussion above, I have assumed a unidimensional analysis of both capacity for welfare and moral status. That is, I have assumed that we can assign a single number for an animal's capacity for welfare or moral status and then compare that number to the numbers of other animals. But if either capacity for welfare or moral status is multidimensional, measuring and comparing those items becomes much more difficult.

If the objective list theory of welfare is correct, then capacity for welfare is almost certainly multidimensional. Suppose one animal has a greater capacity for pleasure and friendship, and a different kind of animal has a greater capacity for wisdom and aesthetic appreciation. Which animal has a greater capacity for welfare? If certain goods are incommensurable, there may not be an all-things-considered answer. Moral status also appears plausibly multidimensional. The characteristics that philosophers have proposed contribute to moral status can plausibly come apart. If both intelligence and empathy contribute to moral status, how are we to compare creatures that score high on one but not the other?

It's certainly true that the multidimensionality of either capacity for welfare or moral status would complicate measurement and comparison of status-adjusted welfare. But I don't think the appropriate response to this potential difficulty is to give up on investigating capacity for welfare and moral status. If we were able to weight the various dimensions of welfare or status, we could combine them into a single metric via a weighted average. Of course, if the various dimensions are incommensurable, the situation is much trickier. However, there is a rich philosophical literature on incommensurable values, and several strategies for dealing with this problem are at least in principle open to us. So the multidimensionality of capacity for welfare or moral status does not by itself doom the usefulness of status-adjusted welfare.
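If the dimensions are commensurable, the weighting strategy is straightforward to sketch. In the toy example below, the dimension names, scores, and weights are purely hypothetical assumptions for illustration; incommensurable dimensions would resist exactly this move:

```python
def combine_dimensions(scores: dict, weights: dict) -> float:
    """Collapse multidimensional status/welfare scores into one metric
    via a weighted average over the scored dimensions."""
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# A hypothetical creature scoring high on intelligence but low on empathy
# (invented numbers), with the two dimensions weighted equally:
scores = {"intelligence": 0.9, "empathy": 0.2}
weights = {"intelligence": 0.5, "empathy": 0.5}
overall = combine_dimensions(scores, weights)  # approximately 0.55
```

The hard philosophical work, of course, is in justifying the weights; the arithmetic itself is trivial once they are fixed.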

A related worry is that moral status might be context-sensitive or action-relative. James Rachels puts it this way: “There is no characteristic, or reasonably small set of characteristics, that sets some creatures apart from others as meriting respectful treatment. That is the wrong way to think about the relation between an individual’s characteristics and how he or she may be treated. Instead we have an array of characteristics and an array of treatments, with each characteristic relevant to justifying some types of treatment but not others. If an individual possesses a particular characteristic (such as the ability to feel pain), then we may have a direct duty to treat it in a certain way (not to torture it), even if that same individual does not possess other characteristics (such as autonomy) that would mandate other sorts of treatment (refraining from coercion)” (Rachels 2004: 169). He concludes, “There is no such thing as moral standing simpliciter. Rather, moral standing is always moral standing with respect to some particular mode of treatment. A sentient being has moral standing with respect to not being tortured. A self-conscious being has moral standing with respect to not being humiliated. An autonomous being has moral standing with respect to not being coerced. And so on” (Rachels 2004: 170).[73]

I’m not sure Rachels is right, but his position is reasonable and deserves consideration. Yet even if his basic idea is correct, I don’t believe the objection dooms the project. The idea that context helps shape which actions are morally permissible is hardly novel or controversial. For instance, adult humans and human infants both have moral standing. But because adults and infants possess different characteristics, the same demand for autonomy renders different actions morally appropriate. In most cases, it would be wrong to restrict an adult’s movement; in most cases, it would be wrong not to restrict an infant’s movement. So I think it’s possible to retain the notion that moral standing is binary while acknowledging that different characteristics call for different treatments.

Because our understanding of moral status is so incomplete, Shelly Kagan urges us to adopt a pragmatic approach to the topic. He acknowledges that it might be the case that “certain capacities are relevant for a given set of moral claims, while other capacities are the basis of different claims. If so, then a creature with advanced capacities of the one kind, but less advanced capacities of the other, would have a relatively high moral status with regard to the first set of claims, but a low moral status with regard to the second set” (Kagan 2019: 114). However, he believes that “while we may someday conclude that it is an oversimplification to think of status as falling along a single dimension, for the time being, at least, I think we are justified in making use of the simpler model” (Kagan 2019: 115). Since comparative moral value is so neglected within the animal welfare movement, there may be significant returns on relatively shallow investigations of the subject long before we are stymied by complications like multidimensionality.

Might Welfare Constituents or Moral Interests Be Non-Additive?

I have suggested that we should frame the value of interventions in terms of status-adjusted welfare. If we were to compare the value of an intervention that targeted pigs with an intervention that targeted silkworms, we should consider not only the amount of welfare to be gained but also the moral status of the creatures who would gain the welfare. One way this strategy could be mistaken—or at least significantly more complicated—is if welfare or moral interests are not straightforwardly additive.

Suppose that hedonism is true and suppose that a silkworm's capacity for pleasure and pain is roughly one one-thousandth that of a pig's. Does that mean that, all else equal, one thousand silkworms at maximum happiness are worth one pig at maximum happiness? Not necessarily. It might be the case that the tiny pleasures of the silkworms never add up to the big pleasure of the pig. The same might be the case for moral interests. If silkworms have a moral status one one-thousandth that of a pig's, then, if moral interests are non-additive, it doesn't follow that the interests of a thousand silkworms—not to be confined, say—are equal in value to the interest not to be confined of a single pig.
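To make the additive picture the objection targets fully explicit, here is a sketch using the one-thousandth ratio from the thought experiment above (the absolute numbers are arbitrary placeholders):

```python
PIG_CAPACITY = 1.0
SILKWORM_CAPACITY = PIG_CAPACITY / 1000  # assumed ratio from the thought experiment

def total_welfare(per_individual: float, n: int) -> float:
    """Straightforwardly additive aggregation: welfare simply sums across individuals."""
    return per_individual * n

one_pig = total_welfare(PIG_CAPACITY, 1)
thousand_silkworms = total_welfare(SILKWORM_CAPACITY, 1000)
# Under additivity the two totals come out equal; the non-additive view
# denies precisely that this equality settles the moral comparison.
```

The non-additivist's claim is that no multiplication of this kind, however large `n` becomes, bridges the gap between small and large interests.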

Jean Kazez puts the point this way: “The difficulty of the idea of an exchange rate arises on any view about the value of lives, but most obviously on the ‘capacity’ view. The valuable capacities you get in a chimpanzee life you never get in a squirrel life, however many squirrels you add together. And what you get in a human life you never get in an aurochs life, no matter how many. That’s at least some reason to look askance at the notion of equitable trading of lives for lives. Say that it’s just happiness that makes a life valuable. Pretend chimpanzees are extremely happy, and squirrels only slightly happy. It does not seem true that one chimpanzee life is worth some number of squirrel lives, if you just put enough together. If you had to save one chimpanzee or a boatload of squirrels, it might make sense to save the chimpanzee; you might coherently think that that will give one individual a chance at a good life, which is better than there being lots of fairly low-quality lives” (Kazez 2010: 112).[74] Hence, if welfare constituents or moral interests are non-additive, we may not be able to use status-adjusted welfare to compare interventions.[75]

Although I grant that this position has some initial intuitive appeal, I find it difficult to endorse—or, frankly, really understand—upon reflection. For this position to succeed, there would have to exist some sort of unbridgeable value gap between small interests and big interests. And while the mere existence of such a gap is perhaps not so strange, the placement of the gap at any particular point on a welfare or status scale seems unjustifiably arbitrary. It's not clear what could explain the fact that the slight happiness of a sufficient number of squirrels never outweighs the large happiness of a single chimpanzee. If happiness is all that non-instrumentally matters, as Kazez assumes for the sake of argument, we can't appeal to any qualitative differences in chimpanzee versus squirrel happiness.[76] (It's not as if, for example, chimpanzee happiness is deserved while squirrel happiness is obtained unfairly.) And how much happier must chimpanzees be before their happiness can definitively outweigh the lesser happiness of other creatures? What about meerkats, who we might assume for the sake of argument are generally happier than squirrels but not so happy as chimpanzees? There seems to be little principled ground to stand on. Hence, while we should acknowledge the possibility of non-additivity here, we should probably assign it a fairly low credence.

Isn’t Probability of Sentience Already a Good Enough Proxy for Moral Status and Capacity for Welfare?

According to another objection, when we evaluate the impact of various interventions, we should discount the welfare that would be gained by different kinds of animals by the probability that those kinds of animals are sentient.[77] Cows are plausibly more likely to be sentient than fish; fish are plausibly more likely to be sentient than insects; and so on. Having adjusted for these differences, no discounts for moral status or capacity for welfare are necessary. An animal's probability of sentience is already a good enough proxy for capacity for welfare and moral status.

Two points are worth mentioning in response. The first is that our uncertainty about moral status and capacity for welfare is much greater than our uncertainty about which creatures are sentient. In his 2017 Report on Consciousness and Moral Patienthood, Luke Muehlhauser puts the issue this way: “In a cost-benefit framework, one’s estimates concerning the moral weight of various taxa are likely more important than one’s estimated probabilities of the moral patienthood of those taxa. This is because, for the range of possible moral patients of most interest to us, it seems very hard to justify probabilities of moral patienthood much lower than 1% or much higher than 99%. In contrast, it seems quite plausible that the moral weights of different sorts of beings could differ by several orders of magnitude. Unfortunately, estimates of moral weight are trickier to make than, and in many senses depend upon, one’s estimates concerning moral patienthood” (Muehlhauser 2017: Appendix Z7).[78] Ignoring capacity for welfare and moral status means ignoring considerations that could drastically alter the way different interventions are valued.

Secondly, it's not clear that the ranking of animals by probability of sentience will map neatly onto the ranking of animals by moral status or capacity for welfare. We might be uncertain that insects are sentient but come to think that if they were sentient, they would have extremely fast consciousness clock speeds, multiplying their subjective experiences per objective minute compared to large mammals. Consequently, in a ranking of expected sentience, insects might rank just below crustaceans; but in a ranking of expected moral value, insects might rank far above crustaceans. So not only would using sentience probabilities as a proxy for moral status underestimate our uncertainty, such usage might also misalign the way we would ideally like to prioritize species.

In short, I agree that when calculating the value of a particular intervention, we should discount the welfare gain at stake by the probability that the animals to be affected are sentient. But sentience is no substitute for capacity for welfare or moral status. Hence, we should discount for both probability of sentience and moral status.
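On this combined approach, a welfare gain is discounted twice: once for the probability that the affected animals are sentient, and once for their moral status. The sketch below illustrates the idea; all the numbers are invented placeholders, not estimates defended anywhere in this post:

```python
def expected_moral_value(welfare_gain: float, p_sentience: float, status: float) -> float:
    """Discount a welfare gain by probability of sentience and by moral status."""
    return welfare_gain * p_sentience * status

# Invented illustration: a large raw welfare gain for insects can still be
# outweighed by a smaller, better-grounded gain for fish once both
# discounts are applied.
insect_intervention = expected_moral_value(welfare_gain=100.0, p_sentience=0.2, status=0.05)
fish_intervention = expected_moral_value(welfare_gain=10.0, p_sentience=0.8, status=0.5)
# fish_intervention (4.0) exceeds insect_intervention (1.0) despite the
# tenfold larger raw welfare gain for insects.
```

The point of the sketch is only that the two discounts are independent factors: collapsing them into a single sentience probability loses information that can reverse a prioritization.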

Doesn’t Status-Adjusted Welfare Require a Commitment to a Problematic Form of Moral Realism?

Finally, one might be concerned that moral status is just not a real thing. It's very hard (though not quite impossible) to be an anti-realist with respect to sentience. Even if we can never reliably access the fact, it seems like there is a fact of the matter about whether or not a particular animal feels pleasures or pains. But it's much easier to question the nature of moral status and imagine that moral status is just a human construct—that there's no there there.

Nevertheless, I think most of us are committed to taking status-adjusted welfare seriously. If one is uncomfortable with degrees of moral status, unitarianism is a live option. Denying that any creatures have moral status, however, implies that there is no moral difference between harming a person and harming a coffee mug.[79] But most of us feel there is a moral difference, and this difference is explained by the fact that the person has moral standing and the coffee mug does not. One might also be wary of differences in capacity for welfare. If so, there are theories of welfare that can accommodate this intuition, on which all welfare subjects have the same capacity. But if one thinks intensity of valenced experience or cognitive sophistication or affective complexity contributes to welfare, then one ought to be open to the idea that different sorts of psychological and neurological capabilities give rise to differences in capacity for welfare.

Of course, even if there is a fact of the matter about moral status and capacity for welfare, learning these facts is going to require lots of empirical data about the relative capacities of different types of animals. Gathering the relevant data will probably require cooperating with a large swath of scientists. This cooperation might be hindered by the perception that moral status and capacity for welfare aren't scientific properties. Convincing scientists to undertake experiments that will shed light on a property they might not think even exists could be tough. It's hard enough to get the relevant scientists interested in investigating sentience. Won't this talk of moral status and capacity for welfare, the objection asks, scare away the very allies we need to resolve our uncertainty about status-adjusted welfare?

Maybe. But biologists, neuroscientists, and comparative psychologists already investigate many of the features we care about. If necessary, we could fund further work in this vein without reference to comparative moral value. Even if the investigation of some features would require convincing scientists to take status-adjusted welfare seriously, that's a practical difficulty, and little reason by itself to stop thinking about moral status and capacity for welfare.

Conclusion

Animals differ in all sorts of ways: their neural architecture, their affective complexity, their cognitive sophistication, their sociability. This variation may give rise to differences in phenomenal experience, desire satisfaction, rational agency, and other potentially morally important traits and features. When we allocate resources between human and non-human causes and among different non-human animals, we are implicitly making value judgments about the comparative moral value of different species. These value judgments ought to be made explicit, and they ought to be grounded both in the details of our most plausible philosophical theories and in the relevant empirical facts. Although we should not be confident in any particular philosophical theory, if a plurality of plausible theories suggests that psychological capacities affect characteristic moral value, we should be sensitive to those differences when we allocate resources across interventions and cause areas that target different animals. In this post I have attempted to develop a broad conceptual framework for analyzing the impact and importance of capacity for welfare and moral status. Much work remains to be done to make reasonably precise the magnitude of difference that such considerations could make to our allocative decision-making. Measuring and comparing capacity for welfare and moral status is not going to be easy. But making progress on this issue could greatly advance our ability to improve the world.

Credits

This essay is a project of Rethink Priorities. It was written by Jason Schukraft. Thanks to Marcus A. Davis, Neil Dullaghan, Derek Foster, David Moss, Luke Muehlhauser, Jeff Sebo, and Saulius Šimčikas for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can see all our work to date here.

Works Cited

Akhtar, S. (2011). Animal pain and welfare: Can pain sometimes be worse for them than for us? In Beauchamp & Frey (eds.) The Oxford Handbook of Animal Ethics, 495-518.

Bar-On, Y. M., Phillips, R., & Milo, R. (2018). The biomass distribution on Earth. Proceedings of the National Academy of Sciences, 115(25), 6506-6511.

Broom, D. M. (2007). Cognitive ability and sentience: which aquatic animals should be protected? Diseases of Aquatic Organisms, 75(2), 99-108.

Carlson, E. (2000). Aggregating harms — should we kill to avoid headaches? Theoria, 66(3), 246-255.

Carruthers, P. (2007). Invertebrate minds: a challenge for ethical theory. The Journal of Ethics, 11(3), 275-297.

Crisp, R. (2003). Equality, priority, and compassion. Ethics, 113(4), 745-763.

DeGrazia, D. (1991). The distinction between equality in moral status and deserving equal consideration. Between the Species, 7(2), 73-77.

DeGrazia, D. (2008). Moral status as a matter of degree? The Southern Journal of Philosophy, 46(2), 181-198.

DeGrazia, D. (2016). Modal personhood and moral status: A reply to Kagan's proposal. Journal of Applied Philosophy, 33(1), 22-25.

Douglas, T. (2013). Human enhancement and supra-personal moral status. Philosophical Studies, 162(3), 473-497.

Finnis, J. (2011). Natural Law and Natural Rights. Oxford University Press.

Fletcher, G. (2013). A fresh start for the objective-list theory of well-being. Utilitas, 25(2), 206-220.

Fletcher, G. (2016a). The Philosophy of Well-Being: An Introduction. Routledge.

Fletcher, G. (2016b). Objective list theory. In G. Fletcher (ed) The Routledge Handbook of Philosophy of Well-Being. New York: Routledge, pp. 148-160.

Harman, E. (2003). The potentiality problem. Philosophical Studies, 114(1), 173-198.

Hausman, D. M., & Waldren, M. S. (2011). Egalitarianism reconsidered. Journal of Moral Philosophy, 8(4), 567-586.

Hooker, B. (2015). The elements of well-being. Journal of Practical Ethics, 3(1).

Hursthouse, R. (1999). On Virtue Ethics. Oxford University Press.

Kagan, S. (2019). How to Count Animals, More or Less. Oxford, UK: Oxford University Press.

Kazez, J. (2010). Animalkind: What We Owe to Animals. Wiley-Blackwell.

Kraut, R. (2007). What Is Good and Why: The Ethics of Well-Being. Harvard University Press.

Lin, E. (2014). Pluralism about well-being. Philosophical Perspectives, 28, 127-154.

Lin, E. (2017). Against welfare subjectivism. Noûs, 51(2), 354-377.

Lin, E. (2018). Welfare invariabilism. Ethics, 128(2), 320-345.

Mayerfeld, J. (1999). Suffering and Moral Responsibility. Oxford University Press.

McMahan, J. (1996). Cognitive disability, misfortune, and justice. Philosophy & Public Affairs, 25(1), 3-35.

Mill, J. S. (1861/2016). Utilitarianism. In S. M. Cahn (ed) Seven Masterpieces of Philosophy. Routledge, pp. 337-383.

Muehlhauser, L. (2017). Report on Consciousness and Moral Patienthood. Open Philanthropy Project.

Norwood, F. B., & Lusk, J. L. (2011). Compassion, by the Pound: The Economics of Farm Animal Welfare. Oxford University Press.

Nussbaum, M. C. (2004). Beyond “Compassion and Humanity”: Justice for Nonhuman Animals. In C. R. Sunstein and M. Nussbaum (eds) Animal Rights: Current Debates and New Directions. Oxford: Oxford University Press, pp. 299-320.

Parfit, D. (1984). Reasons and Persons. Oxford University Press.

Parfit, D. (1997). Equality and priority. Ratio, 10(3), 202-221.

Rachels, J. (2004). Drawing Lines. In C. R. Sunstein and M. Nussbaum (eds) Animal Rights: Current Debates and New Directions. Oxford University Press, pp. 162-174.

Sachs, B. (2011). The status of moral status. Pacific Philosophical Quarterly, 92(1), 87-104.

Sebo, J. (2018). The moral problem of other minds. The Harvard Review of Philosophy, 25, 51-70.

Singer, P. (2011). Practical Ethics, 3rd Edition. Cambridge University Press.

Tiberius, V. (2015). Prudential value. In I. Hirose and J. Olson (eds) The Oxford Handbook of Value Theory, pp. 158-174.

Vallentyne, P. (2007). Of mice and men: Equality and animals. In N. Holtug and K. Lippert-Rasmussen (eds.) Egalitarianism: New Essays on the Nature and Value of Equality, Oxford University Press, pp. 211-238.

Van Den Hoogen, J., Geisen, S., Routh, D., Ferris, H., Traunspurger, W., Wardle, D. A., … & Bardgett, R. D. (2019). Soil nematode abundance and functional group composition at a global scale. Nature, 572(7768), 194-198.

Višak, T. (2017). Cross-Species Comparisons of Welfare. In Woodhall, A., & da Trindade, G. G. (eds.). Ethical and Political Approaches to Nonhuman Animal Issues. Palgrave Macmillan, pp. 347-363.

Woodard, C. (2013). Classifying theories of welfare. Philosophical Studies, 165(3), 787-803.

Notes


  1. My col­league Saulius Šimčikas has com­piled a long list of es­ti­mates of global cap­tive ver­te­brates. ↩︎

  2. See this spread­sheet for de­tails. By my count, ev­ery or­der in the spread­sheet is ex­ploited in num­bers greater than ~50 mil­lion in­di­vi­d­u­als per year. ↩︎

  3. Of course, some of these an­i­mals are treated much worse than oth­ers. See the ‘Ob­jec­tions’ sec­tion for more dis­cus­sion of this point. ↩︎

  4. Ideal in the sense that we are ig­nor­ing strate­gic con­sid­er­a­tions like how the al­lo­ca­tion might af­fect pub­lic opinion. So maybe in an ideal world we would be com­mit­ting more re­sources to arthro­pod welfare, but we can’t in the ac­tual world be­cause do­ing so would risk too great a rep­u­ta­tional harm. ↩︎

  5. Some au­thors pre­fer the term ‘well-be­ing’ to ‘welfare.’ In many in­stances, two terms are meant to be syn­ony­mous. How­ever, some au­thors draw a dis­tinc­tion be­tween well-be­ing and welfare, re­serv­ing ‘welfare’ for non-in­stru­men­tal goods con­sti­tuted by ex­pe­rience. I use the term ‘welfare’ in the more ex­pan­sive sense in which a sub­ject’s welfare is con­sti­tuted by what­ever is non-in­stru­men­tally good for the sub­ject, whether ex­pe­ri­en­tial or non-ex­pe­ri­en­tial. ↩︎

  6. Note that this range need not be sym­met­ric be­tween pos­i­tive and nega­tive welfare. An an­i­mal might have only a small ca­pac­ity for pos­i­tive welfare but a large ca­pac­ity for nega­tive welfare or vice versa. ↩︎

  7. I’m here as­sum­ing the ad­di­tivity of welfare. More on that as­sump­tion in the ‘Ob­jec­tions’ sec­tion. ↩︎

  8. It’s not ob­vi­ous that they do, but we can sub­sti­tute a differ­ent fea­ture that does raise ca­pac­ity for welfare with­out af­fect­ing the sub­stance of the thought ex­per­i­ment. ↩︎

  9. It’s un­cer­tain that such a pig would re­main a pig. But be­cause it is un­cer­tain, it is an epistemic pos­si­bil­ity that it would. ↩︎

  10. Of course, if there were some an­i­mals that were ca­pa­ble of trans­for­ma­tion into su­per­plea­sure ma­chines and some that were not, that in­for­ma­tion could be valuable to our tech­nolog­i­cally ad­vanced de­scen­dants. Similarly, if there were a way to re­duce the over­all in­ten­sity of valenced ex­pe­rience, that tech­nol­ogy could plau­si­bly lead to re­duc­tions in an­i­mal suffer­ing if the tech­nique were ap­plied to an­i­mals lead­ing net-nega­tive lives. ↩︎

  11. Another pos­si­bil­ity is that pigs already have the la­tent po­ten­tial for ex­treme plea­sure, if, say we were able to si­mul­ta­neously stim­u­late all their neu­rons at once. As­sum­ing that pigs can­not ar­tifi­cally achieve this stim­u­la­tion on their own and that no nat­u­ral cir­cum­stance ac­ti­vates such a stim­u­la­tion, such a pos­si­bil­ity only im­plies a large po­ten­tial for plea­sure, not a large ca­pac­ity for plea­sure. ↩︎

  12. Or, in Lewisian terms, the coun­ter­parts of S ↩︎

  13. Ad­mit­tedly, filling in the de­tails of this rel­a­tiviza­tion will be com­plex. It’s not at all clear how to define ‘nor­mal vari­a­tion’ or ‘species-typ­i­cal an­i­mal.’ I set aside that difficulty for now. ↩︎

  14. When I say that they are in a po­si­tion to make a greater con­tri­bu­tion, I of course mean on a per cap­ita ba­sis. At the group level, ex­tremely nu­mer­ous an­i­mals might de­serve more at­ten­tion even if their in­di­vi­d­ual ca­pac­ity for welfare is quite low be­cause col­lec­tively the group can make a big­ger welfare con­tri­bu­tion than other groups. See the “Ob­jec­tions” sec­tion for more dis­cus­sion of this is­sue. ↩︎

  15. Cer­tainly this is true of some in­di­vi­d­u­als. ↩︎

  16. See Lin 2018 for dis­cus­sion and a defense of welfare in­vari­abil­ism. ↩︎

  17. Of course, not all species of birds fly, so unim­peded flight is not a welfare con­stituent for all birds. In this dis­cus­sion birds is im­plic­itly re­stricted to fly­ing birds. ↩︎

  18. Again, obviously, these claims aren’t true of all birds. ↩︎

  19. A distinct explanation is that flying exemplifies the essence of being a (flying) bird and that swimming exemplifies the essence of being a fish and that exemplifying one’s species-relative essence contributes to one’s flourishing. In this case, one’s degree of flourishing is the non-instrumental good that determines one’s welfare. See Hursthouse 1999, especially chapter 9, for more on the concept of ‘flourishing.’ ↩︎

  20. Depending on one’s preferred theory of welfare, these activities might be valuable for their own sake or they might be valuable for the positive mental states they engender. ↩︎

  21. Welfare invariabilism implies that if theoretical contemplation were a welfare constituent, then if a fish engaged in theoretical contemplation, it would be non-instrumentally good for that fish. ↩︎

  22. If variabilism is true, then determining capacity for welfare is likely to be much more difficult because we’ll have to figure out the right theory of welfare for each of the animals that we care about. ↩︎

  23. This tripartite division traces back to Parfit 1984, though it’s hardly exhaustive of the contemporary literature. See Woodard 2013 for a novel classificatory scheme that introduces 16 distinct categories. ↩︎

  24. In some classificatory schemes of welfare theories, ‘hedonistic theories’ is replaced with the broader category ‘mental state theories.’ A theory is a mental state theory if and only if the constituents of welfare are mental states. Hedonism is by far the most popular mental state theory, so for simplicity’s sake I will avoid discussion of the broader category. ↩︎

  25. According to some versions of desire theory, the relevant desires need not be one’s actual desires. For instance, full information theory defines welfare in terms of the desires that a suitably idealized version of oneself would hold if one were fully informed. See Tiberius 2015: 164-166 for more on full information theory. ↩︎

  26. Both hedonistic theories and desire-fulfillment theories could be understood as objective list theories, but in the context of the traditional classificatory scheme, it’s understood that the goods of an objective list theory go beyond the mere experience of pleasure or satisfaction of desires. ↩︎

  27. See Fletcher 2016a for an overview. ↩︎

  28. The modal status of this claim is a bit unclear. Even if the welfare constituents discussed in this paragraph are inaccessible to nonhuman animals in the actual world and in nearby possible worlds, it doesn’t follow that these welfare constituents are necessarily inaccessible. ↩︎

  29. See Finnis 2011, Fletcher 2013, Fletcher 2016b, Lin 2014, Lin 2017, and Hooker 2015 for recent work in the objective list tradition. ↩︎

  30. This quote from Kagan 2019 nicely summarizes ways in which objective list welfare constituents might be inaccessible, in whole or in part, to certain nonhuman animals: “First of all, then, people have deeper and more meaningful relationships than animals, with more significant and valuable instances of friendships and love and family relations, based not just on caring and shared affection but on insight and mutual understanding as well. Second, people are capable of possessing greater and more valuable knowledge, including not only self-knowledge and knowledge of one’s family and friends, but also systematic empirical knowledge as well for an incredibly wide range of phenomena, culminating in beautiful and sweeping scientific theories. Third, people are capable of a significantly greater range of achievements, displaying creativity and ingenuity as we pursue a vast range of goals, including hobbies, cultural pursuits, business endeavors, and political undertakings. Fourth, people have a highly developed aesthetic sense, with sophisticated experience and understanding of works of art, including music, dance, painting, literature and more, as well as having a deeper appreciation of natural beauty and the aesthetic dimensions of the natural world, including the laws of nature and of mathematics. Fifth, people have greater powers of normative reflection, with a heightened ability to evaluate what matters, a striking capacity to aim for lives that are meaningful and most worth living, and a remarkable drive to discover what morality demands of us” (48). ↩︎

  31. See also this passage: “Now it is an unquestionable fact that those who are equally acquainted with, and equally capable of appreciating and enjoying, both, do give a most marked preference to the manner of existence which employs their higher faculties. Few human creatures would consent to be changed into any of the lower animals, for a promise of the fullest allowance of a beast’s pleasures; no intelligent human being would consent to be a fool, no instructed person would be an ignoramus, no person of feeling and conscience would be selfish and base, even though they should be persuaded that the fool, the dunce, or the rascal is better satisfied with his lot than they are with theirs. They would not resign what they possess more than he for the most complete satisfaction of all the desires which they have in common with him. If they ever fancy they would, it is only in cases of unhappiness so extreme, that to escape from it they would exchange their lot for almost any other, however undesirable in their own eyes. A being of higher faculties requires more to make him happy, is capable probably of more acute suffering, and certainly accessible to it at more points, than one of an inferior type” (Mill 1861: chapter 2). ↩︎

  32. I discuss the specific capabilities that might make a difference in the second entry in the series. ↩︎

  33. For example, differences in neural processing speed might give rise to differences in the subjective experience of time. Thus, for a given minute of objective time, some animals might experience more or less than a minute of subjective time. I discuss this possibility in more detail in the third entry in the series. ↩︎
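  The arithmetic behind this possibility can be sketched in a few lines. This is a toy model with made-up numbers: the linear scaling and the clock-speed ratios are purely illustrative assumptions, not empirical claims.

  ```python
  def subjective_duration(objective_minutes: float, clock_speed_ratio: float) -> float:
      """Subjective time experienced, on the toy assumption that it scales
      linearly with the ratio of an animal's processing speed to a human's."""
      return objective_minutes * clock_speed_ratio

  # Hypothetical ratios: an animal processing experience at twice the human
  # rate would undergo two subjective minutes per objective minute of pain;
  # one at half the human rate would undergo thirty subjective seconds.
  fast = subjective_duration(1.0, 2.0)   # 2.0 subjective minutes
  slow = subjective_duration(1.0, 0.5)   # 0.5 subjective minutes
  ```

  If welfare tracks subjective rather than objective duration, the same objective harm would then count for more or less depending on the animal.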

  34. Vallentyne is not himself a hedonist. He adds, “Moreover, well-being does not depend solely on pain and pleasure. It’s controversial exactly what else is relevant — accomplishments, relationships, and so on — but all accounts agree that typical humans have greater capacities for whatever the additional relevant items are” (ibid.). ↩︎

  35. See Akhtar 2011 for general discussion of this point. ↩︎

  36. See Broom 2007: “For some sentient animals, pain can be especially disturbing on some occasions because the individual concerned uses its sophisticated brain to appreciate that such pain indicates a major risk. However, more sophisticated brain processing will also provide better opportunities for coping with some problems. For example, humans may have means of dealing with pain that fish do not, and may suffer less from pain because they are able to rationalise that it will not last for long. Therefore, in some circumstances, humans who experience a particular pain might suffer more than fish, whilst in other circumstances a certain degree of pain may cause worse welfare in fish than in humans” (103). ↩︎

  37. A similar story can be told about pleasurable experiences. The knowledge that a given pleasurable experience is fleeting or undeserved or bad for one’s health can reduce enjoyment of the experience. My dog seems to enjoy her dog treats more than I enjoy my ice cream at least in part because I eat my ice cream with a guilty conscience. ↩︎

  38. Alternatively, it might be the statistical regularity of the pattern rather than the phenomenal intensity of the pattern that would be assisted by cognitive sophistication. Thanks to Gavin Taylor for this point. ↩︎

  39. Even ignoring the combinatory effects, it might be the case that intellectual, emotional, and social pleasures generally outstrip mere physical pleasures in intensity (and conversely for pains). ↩︎

  40. See, inter alia, Višak 2017 for an argument in favor of the so-called self-fulfillment theory of welfare, according to which “a maximally well-off dog or squirrel is faring just as well as a maximally well-off human. An individual’s cognitive and emotional capacities do not necessarily determine how well off this individual can be” (348). ↩︎

  41. Moral standing is also sometimes called ‘moral patienthood’ or ‘moral considerability.’ ↩︎

  42. Moral standing should be distinguished from moral agency. Moral agency is the capacity to be morally responsible for one’s actions or the capacity to owe moral obligations to other beings. Moral standing does not entail moral agency. ↩︎

  43. Note that this is the narrow understanding of sentience. The broader (and more common) understanding of sentience equates it with phenomenal consciousness (i.e., sentience is the capacity for any sort of experience, valenced or not). ↩︎

  44. Note that agency is sometimes understood to require something like rational deliberation. This thicker sense of agency would obviously be more restrictive than the thin sense in which agency might be sufficient for moral standing. Still, there is considerable disagreement as to what constitutes a desire, plan, or preference, and one’s views on this issue will influence one’s views on which animals have moral standing and/or one’s view on the plausibility of agency as sufficient for moral standing. ↩︎

  45. The theologically minded might prefer a view on which moral standing is grounded in the possession of a Cartesian soul. But on most such accounts, the possession of a Cartesian soul grants sentience or agency or both. So even most theologians will agree that all sentient agents have moral standing because they will think that the class of sentient agents is coextensive with the class of beings with Cartesian souls. ↩︎

  46. Agency is harder to define than sentience, and this vagueness complicates the debate over whether agency is sufficient for moral standing. If even crude desires, plans, and preferences are enough for agency, then it appears that creatures like spiders qualify as agents, which may by itself be a reason to suspect agency is insufficient for moral standing (Carruthers 2007). Moreover, if one sets the bar too low for agency, then it will be hard to exclude sophisticated computer programs, like OpenAI Five playing Dota 2. Although it is certainly possible that digital minds can acquire moral standing, there is widespread agreement that current programs do not have such standing. ↩︎

  47. Note that some authors use the term ‘moral status’ the way I’m using the term ‘moral standing.’ This terminological difference should be distinguished from the case of an author who uses the terms the way I am but thinks that there are no degrees of moral status, in which case moral status collapses into moral standing. ↩︎

  48. ‘Fish’ is a paraphyletic group. Any taxonomic group containing all fish would also contain tetrapods, which are not fish. ↩︎

  49. I’m here bracketing any ecocentrist or relationist views that reject an individualist conception of moral status. ↩︎

  50. Other unitarians include Elizabeth Harman, Martha Nussbaum, and Oscar Horta. ↩︎

  51. Other proponents of the hierarchical view include Peter Vallentyne, Jean Kazez, and of course John Stuart Mill. ↩︎

  52. Prioritarianism is the view according to which additions to welfare matter more the worse off the person is whose welfare is affected. See Parfit 1997 for more discussion. ↩︎

  53. Egalitarianism is the view according to which a subject’s welfare is weighted by its standing relative to the welfare of other subjects, with more equal distributions of welfare being better than less equal distributions. See Hausman & Waldren 2011 for more discussion. ↩︎

  54. Another option is to reject views with distributive requirements like egalitarianism and prioritarianism. Neither Kagan nor Višak endorses this option. ↩︎

  55. Note that Kagan’s position does not entail that prioritarianism and egalitarianism will never demand that we prioritize a mouse’s welfare over a human’s welfare. Depending on the exact difference in moral status, it might, for example, be the case that we ought to prioritize a mouse’s welfare over a human’s welfare when the mouse is a 4 out of 10 and the human is a 60 out of 100. ↩︎
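  One way such a case could arise is by applying a prioritarian weighting to status-adjusted welfare. The following is a minimal sketch with hypothetical numbers; the square-root value function is chosen only for concreteness, not as anyone's considered proposal.

  ```python
  import math

  def prioritarian_value(welfare: float) -> float:
      # Concave value function: extra welfare matters more to the worse off.
      return math.sqrt(welfare)

  def marginal_value(status_adjusted_welfare: float) -> float:
      # Moral value of one additional unit of status-adjusted welfare.
      return (prioritarian_value(status_adjusted_welfare + 1)
              - prioritarian_value(status_adjusted_welfare))

  mouse = 4    # a mouse at 4 on its 0-10 status-adjusted scale
  human = 60   # a human at 60 on a 0-100 status-adjusted scale

  # The mouse sits further down the concave curve, so the marginal unit
  # of welfare counts for more despite the mouse's lower moral status.
  assert marginal_value(mouse) > marginal_value(human)
  ```

  On these particular numbers the marginal unit to the mouse is worth roughly four times the marginal unit to the human, so the prioritarian weighting favors the mouse.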

  56. Note that Singer is not necessarily endorsing this view; he is only saying that it cannot be rejected out of hand as speciesist. ↩︎

  57. The view that welfare capacity or rational agency grounds moral standing does not automatically generate a commitment to degrees of moral status, even if welfare capacity and agency admit of degrees. For one thing, although capacity for welfare and capacity for rational choice admit of degrees, the possession of these capacities does not: one either possesses these capacities or one does not. Put another way, one is either a welfare subject or not; one is either a rational agent or one is not. An analogy: age admits of degrees. In many jurisdictions one must be 18 years old to vote, and there are good arguments that there should be some age restrictions on voting. But those arguments don’t imply that the older one is, the more one’s vote should count. ↩︎

  58. As a reminder, these are merely some theoretical difficulties. Actually measuring and comparing these features across animals in practice raises a slew of different but no less vexing problems. I discuss these problems in the second entry in the series. ↩︎

  59. In a recent talk at Notre Dame, Eric Schwitzgebel offers a more extreme version of the same problem concerning divergent AI: “Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.” ↩︎

  60. In the same talk, Schwitzgebel offers the example of “a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?” ↩︎

  61. One might account for these intuitions by appeal to the potential capacities that babies possess. See Harman 2003 for discussion and criticism of this idea. ↩︎

  62. One might attempt to skirt this difficulty by appeal to modal capacities. Although the cognitively impaired human does not have the potential to develop species-typical intellectual and emotional sophistication, in nearby possible worlds, the person does possess this potential. See DeGrazia 2016 for discussion and criticism of this idea. ↩︎

  63. See Kagan 2019: 164-169 for more discussion of this issue. (Note that this example is for illustrative purposes only. I make no claim as to an actual difference in intelligence between astrophysicists and social media influencers. [And even if astrophysicists were smarter, social media influencers might score higher on other morally relevant traits, like empathy.]) ↩︎

  64. If moral status is a continuous gradient and determined at least in part by social, affective, or intellectual capability, then some humans will likely have a higher status than others. If moral status is instead a discrete series of layers, then a single layer may encompass all humans. The likelihood of this possibility depends on how many layers there are. ↩︎

  65. Importantly, Kagan is not merely suggesting that we divide moral status into six tiers for practical purposes. He believes there actually are six tiers (give or take a couple) of moral status. This position follows from his (tentative) commitment to practical realism, the view that “moral rules are to be evaluated with an eye toward our actual epistemic and motivational limitations” (Kagan 2019: 292). ↩︎

  66. Another term that might be used to capture both moral status and capacity for welfare is ‘moral weight.’ Although ‘status-adjusted welfare’ isn’t a perfect term, I think ‘moral weight’ suffers from two problems. First, to my ear, it doesn’t sound agnostic between the hierarchical approach and the unitarian approach. One informal way of describing unitarianism is ‘the view that rejects moral weights.’ Second, the term is ambiguous. It might mean that different individuals can have the same interest but weight it differently (e.g., it matters morally that the person in extreme poverty puts a different weight on receiving $100 than Mike Bloomberg does), or it might mean that different individuals with interests of the same weight might not count the same (e.g., the interests of the individual with higher moral status take priority, i.e., the hierarchical approach). ↩︎

  67. A maximizing act consequentialist who believes welfare is the only thing of intrinsic value will endorse this answer. However, other normative theories will deliver different answers. For example, some theories will say that a world in which status-adjusted welfare is maximized but unevenly distributed might be worse than a world in which status-adjusted welfare is not maximized but is more evenly distributed. More obviously, axiologies that hold that welfare isn’t the only intrinsic value won’t imply that status-adjusted welfare is the only thing that should be maximized. ↩︎
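  The divergence between the maximizing view and a distribution-sensitive one can be made concrete with toy numbers. The inequality penalty below is purely illustrative (mean absolute deviation with an arbitrary weight), not any particular philosopher's axiology.

  ```python
  def total_welfare(world):
      # The maximizing act consequentialist's score: the sum of
      # status-adjusted welfare across all individuals.
      return sum(world)

  def distribution_sensitive_score(world, penalty=0.5):
      # Toy distribution-sensitive axiology: total welfare minus a
      # penalty on inequality (mean absolute deviation from the mean).
      mean = sum(world) / len(world)
      spread = sum(abs(w - mean) for w in world) / len(world)
      return total_welfare(world) - penalty * len(world) * spread

  even_world = [10, 10, 10]   # total 30, perfectly even
  uneven_world = [31, 1, 1]   # total 33, highly uneven

  # The maximizer prefers the uneven world; the distribution-sensitive
  # view ranks the even world higher despite its lower total.
  assert total_welfare(uneven_world) > total_welfare(even_world)
  assert distribution_sensitive_score(even_world) > distribution_sensitive_score(uneven_world)
  ```

  The two rankings disagree precisely in the case the footnote describes: a higher total of status-adjusted welfare that is very unevenly spread.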

  68. If pasture-raised cows lead net-positive lives, then on some consequentialist views, reducing the stock of pasture-raised cows may actually be a net-negative intervention. ↩︎

  69. See the section on intensity of suffering in Stephen Warren’s “Suffering by the Pound” for more detail. ↩︎

  70. See Luke Muehlhauser’s “Preliminary Thoughts on Moral Weight” for the best-justified estimates of which I’m aware. Muehlhauser’s ranges are extremely large, appropriately reflecting our deep uncertainty about the subject. ↩︎

  71. See, for example, Figure 1 in the FAO’s 2018 “The State of World Fisheries and Aquaculture” report. ↩︎

  72. The farming of cochineal may cause an additional 4.6 to 21 trillion deaths, primarily nymphs that do not survive to adulthood. ↩︎

  73. For defenses of a similar position, see Vallentyne 2007 and Sachs 2011. ↩︎

  74. Kazez adds, “As I put it in the last chapter, species can be very roughly ranged along a ladder. Individual human lives do have more value than individual aurochs lives, because they involve more valuable capacities. If that ranking meant there was an exchange rate, with one human life worth 100 aurochs lives, or something of the sort, then we could get a grip on the ‘profligacy point.’ If you kill more animals to save a human being than a human life is worth, then that’s profligate … and disrespectful. But granting there’s a ranking doesn’t mean recognizing any exchange rate. If one human life has more value than one aurochs life, there’s nothing that says that there must be an equivalence between one human life and 10, or 100, or 1,000, or any number of aurochs lives. And that’s not a matter of speciesist prejudice. The same is true when two animal species are compared. Chimpanzee lives may have more value, typically, than squirrel lives. It doesn’t follow that one chimpanzee is ‘worth’ 10 squirrels, or 100, or 1,000” (Kazez 2010: 112). ↩︎

  75. Jamie Mayerfeld makes a similar point about comparing human pains: “I said that my intuitions favor the claim that we should prevent one person from experiencing the pain of torture rather than prevent a million others from experiencing the pain of acute frustration. But in fact my intuitions favor an even stronger claim. It seems to me that when the difference in intensity is this large, no difference in the number of sufferers can justify the more intense suffering. The severe torture of one person seems worse than the painful frustration of any number of people” (Mayerfeld 1999: 183). See Carlson 2000 for more discussion. ↩︎

  76. Alternatively, one might adopt John Stuart Mill’s conception of happiness and hold that chimpanzee happiness is the product of higher pleasures and squirrel happiness is the product of lower pleasures. If no amount of lower pleasure could equal any amount of higher pleasure, then one would have a reason to prefer chimpanzee happiness to any amount of squirrel happiness. However, that position (a) is implausible and (b) seems to abandon the principle that happiness is the only thing that matters. ↩︎
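  The lexical structure this footnote describes can be encoded with ordered pairs: compare quantities of higher pleasure first, and let lower pleasure break ties only. A sketch with invented quantities (the numbers carry no significance):

  ```python
  # Encode a life's happiness as (higher_pleasure, lower_pleasure).
  # Python compares tuples lexicographically, which mirrors the Millian
  # view that no quantity of lower pleasure outweighs any surplus of
  # higher pleasure.
  chimpanzee = (5, 0)       # some higher pleasure, no lower pleasure
  squirrel = (0, 10**9)     # an enormous amount of lower pleasure only

  assert chimpanzee > squirrel   # higher pleasure wins regardless of amount
  assert (5, 3) > (5, 2)         # lower pleasure matters only as a tiebreaker
  ```

  Seeing the view written this way also makes the footnote's objection vivid: the second coordinate can never compensate for the first, no matter how large it grows, which is hard to square with the claim that happiness alone matters.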

  77. For general discussion of whether and how to discount for probability of sentience, see Sebo 2018. ↩︎

  78. Uncertainty in a cost-effectiveness estimate is not necessarily proportional to uncertainty in a given parameter. And there may be specific instances in which we are more uncertain about sentience than about moral status. (For example, if one thought agency were sufficient for moral standing, one might be able to estimate the moral status of, say, an advanced AI program even if one were unsure whether the AI were sentient.) Nevertheless, the general point appears sound: given the typical difference in uncertainties, reducing uncertainty about moral status and capacity for welfare is normally going to be more impactful than reducing uncertainty about sentience. ↩︎

  79. One might adopt a position on which moral properties (like moral status) exist, but they’re not grounded in mind-independent properties. Metaethical constructivism is one such view. If antirealism is the view that moral properties do not exist, then constructivism is not antirealist. (Mind-dependent properties are still properties, after all.) Whether such a view is worthy of the mantle of realism is, however, contentious. ↩︎