Is Suffering Convex?

Epistemic Status: Provisional thoughts that seem likely to either a) be very worth sharing and expanding upon, b) be invalid based on arguments I have not thought of, or c) be replicating discussions that I'm unaware of.

Bottom line

I posit that moral suffering is maximal for agents of moderate sophistication, and that the suffering of moderately intelligent non-human mammals and younger children is in some sense much more damaging and morally important than the suffering of less complex organisms on the one hand, or of older children, more sophisticated animals, and adults on the other. If true, the implications go beyond the relative importance of humans and animals, and affect the moral relevance of the far future.

Humans at different ages

Most people seem to agree that physical pain isn't always suffering. For example, working out produces physical pain that many people describe as enjoyable because of the context that the exerciser puts around the activity. It is rational to undergo painful surgery to address a medical problem, and adults can reach a level of sophistication where they accept the painful process of surgery and recovery in order to gain something longer term. People who cultivate a mindset that contextualizes and abstracts away from their own suffering are even more “sophisticated” in a certain sense, and I would consider the pain from such a surgery to be less morally objectionable than the same pain without the context. Infants never contextualize suffering. It seems plausible that they don't clearly distinguish between different levels of suffering either—for young enough infants, cold from a diaper change gets the same level of reaction as receiving a shot, or even undergoing a more major surgery. The length of time is arguably morally relevant, but they don't appreciate the difference between a minor, short-term pain or discomfort and a more major, longer-term one.

Young children are able to contextualize their suffering. By age one-and-a-half, my kids could understand that diaper changes were unpleasant for them, but they would (usually) volunteer to undergo them because they were aware that they would be happier with a new diaper. As children age, they become more able to appreciate this. On the other hand, in some sense the pain undergone by a five- or six-year-old is worse than that undergone by an infant. They anticipate it, remember it, and are affected by it much more than a much younger child. Adults do the same—but there is a trade-off between the sophistication of context and the increased pain from imagination, memory, and cognitive appreciation of pain. I think it is clear, from both my memory and my perception of my children, that there are more intense feelings that occur in kids than in adults.
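The trade-off above can be sketched as a toy model (my own illustrative formalization; the functional forms are assumptions, not anything argued for here):

```latex
% S(x): morally relevant suffering as a function of sophistication x in [0, 1].
% C(x): cognitive appreciation of pain (anticipation, memory), increasing in x.
% K(x): capacity to contextualize and discount pain, also increasing in x.
S(x) = C(x)\,\bigl(1 - K(x)\bigr)
```

If cognitive appreciation develops earlier than contextualization (say, C(x) = x and K(x) = x^2 on [0, 1]), then S(x) = x - x^3, which is zero at both extremes and peaks at an interior point (x = 1/\sqrt{3} \approx 0.58). That is exactly the shape the bottom line asserts: maximal morally relevant suffering at moderate sophistication.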


Frans de Waal makes a clear case that humanity historically underestimates the suffering of animals. Catia Faria suggests that “all sentient individuals, including nonhuman animals, are morally considerable, irrespective of their species or other alleged species-specific attributes.” Even simple animals almost certainly experience what we think of as pain. Lizards and birds suffer when injured. Insects seem to experience and avoid pain as well—though at some point it is less clear exactly how much of their experience is actually “experience” in a morally relevant sense.

It seems, however, that suffering as a morally relevant concept has some lower bound, and here the focus on “sentience” becomes critical. Clearly non-conscious bacteria, reinforcement learning systems below a certain complexity, and similarly simple systems seem to have near-zero moral weight—because what they experience is not normally understood as suffering. As sophistication decreases below a certain point, the moral relevance of each animal falls. (I am not discussing the aggregate amount of total pain suffered, nor am I advocating anything like certainty about where this line lies. I'd suggest that moral responsibility calls for a precautionary principle in this regard.)

As we move towards more complex animals, we start to see more clearly relevant pain. More than that, the pain seems to have increasing moral weight. Temple Grandin and Catherine Johnson argue in “Animals in Translation” that (some) animals feel fear, and this seems to match our (potentially unreliable) intuition that higher mammals that can plan and reason about the future can also anticipate fear. In a very real sense, they suffer more than adult humans, because they have almost no control over their environment.


I would not advocate trying to measure the importance of suffering precisely, but there are real implications to the potential for convexity. Most of these are unsurprising for effective altruists—animals are at least potentially more morally relevant than humans, and mitigating suffering means we should focus on reducing experienced suffering rather than on increasing life spans. As a perhaps more esoteric consequence, this suggests that increasing intelligence may reduce morally relevant pain—and therefore parts of the far future where there has been an intelligence explosion are less morally relevant than the present.