From humans in Canada to battery-caged chickens in the United States, which animals have the hardest lives: results

After spending considerable time creating the best system we could for evaluating animal welfare, the Charity Entrepreneurship team applied this system to 15 different animals/breeds. This included 6 types of wild animal and 7 different types of farm animal environment, as well as 2 human conditions for baseline comparisons. This was far from a complete list, but it gave us enough information to get a sense of the different conditions. Each report was limited to 2-5 hours with pre-set evaluation criteria (as seen in this post), a 1-page summary, and a section of rough notes (generally in the 5-10 page range). Each summary report was read by 8 raters (3 from the internal CE research team, 5 external to the CE team). The average weightings and ranges in the spreadsheet below were generated by averaging the assessments of these raters.

Click to view the report

The goal of Charity Entrepreneurship is to compare different charitable interventions and actions so that strong new charities can be founded. One of the necessary steps in such a process is having a way to compare different animals in different conditions. We have previously written both about our criteria for evaluating animals and about our process for arriving at those criteria. This post explains our process and how the results from this system are being applied to different animal conditions.

One of the goals of our system was to be applicable across different animals and different situations. We ended up comparing 9 animals (Humans, Hens, Turkeys, Fish, Cows, Chimpanzees, Birds, Rats, Bugs). These animals are not based on consistent biological taxonomy due to limited information being available on certain types (e.g. there was enough information on rats specifically to do a report on them, but for wild birds we had to look at a variety of birds to get sufficient data). We are not concerned about this limitation, as most of the interventions we are considering would hit a wide range of animals (e.g. a humane insecticide would most likely not be target-specific, so the most relevant data here is an index for bugs as a whole as opposed to an index on a specific species).

The reports are formatted so that it is easy to quickly grasp the main information connected with the specific rating. Each report is a summary page with the key information and a short description of why the given rating was chosen, and is thus meant to be polished and readable by all. Each report was time-capped at 1-5 hours, so they are limited in both scope and depth. We are keen to get more information on any of these areas (particularly information that is numerically quantified or related to wild animals, as this information was the hardest to find).

Sample report:

After each of the reports was drawn up, each summary report was read and evaluated by 8 raters. We tried to get a diverse set of raters, all with a broadly utilitarian and EA framework. Three raters were from our internal CE research team (the staff who created or contributed to the reports) and five raters were external to the CE team but involved in the animal rights research space (e.g. working or interning for EA animal organizations). The CE research team talked over ratings and disagreements openly, but the external raters did not see or disclose any CE ratings until after they had put in theirs. Ethically, people were best described as classical utilitarians, but with some slight variation (e.g. some more prioritarian, some negative-leaning utilitarians). We liked the concept of multiple independent raters, as there are many soft judgment calls, and increasing the number of people doing ratings seems to mitigate specific biases and fallacies. This system has also been used before, and to good effect, by GiveWell.
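As a rough illustration of the aggregation step, the sketch below shows how independent rater scores can be combined into a mean and a min-max range per condition. The condition names and scores here are illustrative placeholders, not CE's actual data, and the exact aggregation CE used may differ in detail.

```python
# Hypothetical sketch of combining 8 independent rater scores per condition
# into a published average and range. Scores are placeholders, not real data.
from statistics import mean

ratings = {
    "example condition A": [40, 55, 35, 60, 45, 50, 38, 52],
    "example condition B": [-30, -45, -20, -50, -35, -40, -25, -38],
}

for condition, scores in ratings.items():
    avg = mean(scores)                 # the headline figure
    low, high = min(scores), max(scores)  # the spread across raters
    print(f"{condition}: mean={avg:.1f}, range=[{low}, {high}]")
```

Keeping the range alongside the mean preserves a sense of how much the raters disagreed, which matters given the soft judgment calls involved.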

Ultimately, we ended up with a wide range of ratings, going from 81 (strongly net positive) to −57 (strongly net negative). Some of the reports were pretty surprising and ended up changing our intuitions (for example, many wild animals were worse off than we had initially assumed). Others were not that surprising (for example, the rankings of factory-farmed hens).

Our full spreadsheet, with all the ratings as well as links to the 1-page reports, gives specific descriptions of why certain animals and situations received certain ratings. We feel there is lots of room to improve these numbers, particularly with deeper investigation into the lives of wild animals. But we limited our time on these reports because, historically, within our CEAs, factors like these did not end up carrying the most weight or being the source of the highest variability. For example, the cost of an intervention can vary by several orders of magnitude, and logistical factors were more often the deciding factor when choosing between the most promising-looking interventions.

If you want to receive information about our latest reports, subscribe to Charity Entrepreneurship's newsletter. Once a month we will send you a summary of our progress.