Reducing existential risks or wild animal suffering?

This article argues that reducing wild animal suffering may be more important than reducing existential risks. The argument is largely based on my newly developed population ethical theory of variable critical level utilitarianism.

What are the most important focus areas if you want to do the most good in the world? Focus on the current generation or the far future? Focus on human welfare or animal welfare? These are the fundamental cause prioritization questions of effective altruism: look for the biggest problems that are the most neglected and the easiest to solve. If we do this exercise, two focus areas become immensely important: reducing existential risks and reducing wild animal suffering. But which of those two deserves our top priority?

X-risks

An existential risk (X-risk) is a catastrophic disaster from nature (e.g. an asteroid impact, a supervirus pandemic or a supervolcano eruption), technologies (e.g. artificial superintelligence, synthetic biology, nanotechnology or nuclear weapons) or human activities (e.g. runaway global warming or environmental degradation) that can end all of civilization or intelligent life on Earth.

If we manage to avoid existential risks, there can be flourishing human or intelligent life for many generations in the future, able to colonize other planets and multiply by the billions. The number of sentient beings with long, happy, flourishing lives in the far future can be immense: a hundred thousand billion billion billion (10^32) humans, including ten million billion (10^16) humans on Earth, according to some estimates. In a world where an existential risk occurs, all those potentially happy people will never be born.

WAS

Wild animal suffering (WAS) is the problem created by starvation, predation, competition, injuries, diseases and parasites that we see in nature. There are a lot of wild animals alive today: e.g. 10^13–10^15 fish and 10^17–10^19 insects, according to some estimates. It is possible that many of those animals have lives not worth living: that they have more or stronger negative than positive experiences and hence an overall negative well-being. Most animals follow an r-selection reproductive strategy: they have a lot of offspring (the population has a high rate of reproduction, hence the name 'r-selection'), and only a few of them survive long enough to reproduce themselves. Most of those animals' lives are very short and therefore probably miserable. We are not likely to see most of those animals, because they die and are eaten quickly. When we see a happy bird singing, ten of its siblings died within a few days after hatching. When the vast majority of newborns die, we can say that nature is a failed state, unable to take care of the well-being of its inhabitants.

Due to the numbers (billions of billions), the suffering of wild animals may be a bigger problem than all human suffering from violence, accidents and diseases (a few billion humans per year), and all human-caused suffering of domesticated animals (a few hundred billion animals per year).

Population ethics

What is worse: all the suffering, today and in the future, of wild animals who have miserable lives? Or the non-existence of a huge number of people in the far future who could have had beautiful lives? To solve this question, we need to answer one of the most fundamental questions in ethics: what is the best population ethical theory? Population ethics is the branch of moral philosophy that deals with choices that influence who will exist and how many individuals will exist.

A promising population ethical theory is variable critical level utilitarianism. Each sentient being has a utility function that measures how strongly that individual prefers a situation. That utility can be a function of happiness and all other things valued by that individual. If your utility is positive in a certain situation, you have a positive preference for that situation. The more you prefer a situation, the higher your utility in that situation. If a person does not exist, that person has a zero utility level.

The simplest population ethical theory is total utilitarianism, which says that we should choose the situation that has the highest total sum of everyone's utilities. However, this theory has a very counter-intuitive implication, called a sadistic repugnant conclusion (a combination of the sadistic conclusion and the repugnant conclusion in population ethics). Suppose you can choose between two situations. In the first situation, a million people exist and have maximally happy lives, with maximum utilities. In the second situation, those million people have very miserable lives, with extremely negative levels of utility. But in that situation, there also exist new people with utilities slightly above zero, i.e. lives barely worth living. If we take the sum of everyone's utilities in that second situation, and if the number of those extra people is high enough, the total sum becomes bigger than the total of utilities in the first situation. According to total utilitarianism, the second situation is better, even if the already existing people have maximally miserable lives and the new people have lives barely worth living, whereas in the first situation everyone is maximally satisfied and no-one is miserable.
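To make the arithmetic behind this conclusion explicit, here is a minimal sketch in Python. All population sizes and utility values are hypothetical illustrations, not figures from the literature:

```python
# Minimal sketch of the sadistic repugnant conclusion under total
# utilitarianism. All numbers are hypothetical illustrations.

million = 10**6

# Situation 1: a million people with maximally happy lives (utility 100).
total_1 = million * 100

# Situation 2: those same million people now have extremely miserable
# lives (utility -100), plus a huge number of extra people whose lives
# are barely worth living (utility 1).
extra_people = 10**9
total_2 = million * (-100) + extra_people * 1

# With enough extra people, the sum in situation 2 exceeds situation 1,
# so total utilitarianism prefers the second situation.
print(total_1)            # 100000000
print(total_2)            # 900000000
print(total_2 > total_1)  # True
```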

To avoid this conclusion, we can change the utilitarian theory, for example by using a reference utility level as a critical level. Instead of adding utilities, we add relative utilities, where a person's relative utility is his or her utility minus the critical level. The critical level of a non-existing person is zero. This population ethical theory is critical level utilitarianism, and it can avoid the sadistic repugnant conclusion: if the critical level is higher than the small positive utilities of the new people in the second situation, the relative utilities of those extra people are all negative. The sum of all those relative utilities never becomes positive, which means the total relative utility of the first situation is always higher than that of the second situation, and so the first situation is preferred.
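Continuing the hypothetical numbers from the sketch above, a critical level above the utility of the barely-worth-living lives blocks the conclusion, no matter how many extra people are added:

```python
# The same two situations, re-evaluated with critical level utilitarianism.
# Relative utility = utility - critical level; the critical level of a
# non-existing person is zero. The critical level of 5 is hypothetical.

million = 10**6
critical_level = 5  # above the utility (1) of the barely-worth-living lives

# Situation 1: a million maximally happy people (utility 100).
total_rel_1 = million * (100 - critical_level)

# Situation 2: those people miserable (utility -100), plus extra people
# whose utility (1) lies below the critical level.
extra_people = 10**9
total_rel_2 = million * (-100 - critical_level) + extra_people * (1 - critical_level)

# Each extra person contributes a negative relative utility (1 - 5 = -4),
# so adding more of them only makes situation 2 worse.
print(total_rel_1 > total_rel_2)  # True, for any number of extra people
```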

If all critical levels of all persons in all situations are the same, we have a constant or rigid critical level utilitarianism, but this theory still faces some problems. We can make the theory more flexible by allowing variable critical levels: not only can everyone determine his or her own utility in a specific situation, everyone can also choose his or her critical level. The preferred critical level can vary from person to person and from situation to situation.

A person's critical level always lies within a range, between his or her lowest preferred and highest preferred levels. The lowest preferred critical level is zero: if a person chose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer is basically a contradiction. The highest preferred critical level varies from person to person. Suppose we can decide to bring more people into existence. If they choose a very high critical level, their utilities fall below this critical level, and hence their relative utilities become negative. In other words: it is better that they do not exist. So if everyone chose a very high critical level, it would be better that no-one exists, even if people could have positive utilities (but negative relative utilities). This theory is a kind of naive negative utilitarianism, because everyone's relative utility becomes a negative number and we have to choose the situation that maximizes the total of those relative utilities. It is a naive version of negative utilitarianism, because the maximum lies at the situation where no-one exists (i.e. where all relative utilities are zero instead of negative). If people do not want that situation, they have chosen a critical level that is too high. If everyone chose their highest preferred critical level, we would end up with a better kind of negative utilitarianism, which avoids the conclusion that non-existence is always best. It is a quasi-negative utilitarianism, because the relative utilities are no longer always negative. They can sometimes be (slightly) positive, in order to allow the existence of extra persons.
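The difference between the naive and the quasi-negative versions can be shown with a small sketch (again with hypothetical utilities and critical levels):

```python
# Sketch of variable critical level utilitarianism: a situation is ranked
# by the sum of relative utilities (utility - critical level) of everyone
# who exists in it; non-existing persons contribute zero. All numbers are
# hypothetical.

def total_relative_utility(people):
    """people: list of (utility, critical_level) pairs of existing persons."""
    return sum(u - c for u, c in people)

# Naive negative utilitarianism: everyone picks a critical level above
# their own utility, so all relative utilities are negative, and the
# empty population (total 0) beats any populated one.
naive = [(80, 100), (90, 100)]
print(total_relative_utility(naive) < 0)  # True: non-existence "wins"

# Quasi-negative utilitarianism: everyone picks the highest critical level
# that still leaves their relative utility slightly positive, so their
# existence remains (barely) preferred over the empty population.
quasi = [(80, 79), (90, 89)]
print(total_relative_utility(quasi) > 0)  # True: existence is allowed
```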

X-risks versus WAS

Now we come to the crucial question: if variable critical level utilitarianism is the best population ethical theory, what does it say about our two problems of existential risks and wild animal suffering?

If everyone chose their lowest preferred critical level, we would end up with total utilitarianism, and according to that theory, the potential existence of many happy people in the far future becomes dominant. Even if the probability of an existential risk is very small (say one in a million in the next century), reducing that probability is of the highest importance if so many future lives are at stake. However, we have seen that total utilitarianism contains a sadistic repugnant conclusion that will not be accepted by many people. This means those people decrease their credence in this theory.

If people want to move safely away from the sadistic repugnant conclusion and other problems of rigid critical level utilitarianism, they should choose a critical level infinitesimally close to (but still below) their highest preferred level. If everyone does so, we end up with a quasi-negative utilitarianism. According to this theory, adding new people (or guaranteeing the existence of future people by eliminating existential risks) becomes only marginally important. The prime focus of this theory is avoiding the existence of people with negative levels of utility: adding people with positive utilities is barely important because their relative utilities are small, but adding people with negative utilities is always bad, because the critical levels of those people are always positive and hence their relative utilities are always negative and often large.

However, we should not avoid the existence of people with negative utilities at all costs. Simply decreasing the number of future people (avoiding their existence) in order to decrease the number of potential people with miserable lives is not a valid solution according to quasi-negative utilitarianism. Suppose there will be one sentient being in the future who will have a negative utility, i.e. a life not worth living, and the only alternative option to avoid that negative utility is that no-one in the future exists. However, the other potential future people strongly prefer their own existence: they all have very positive utilities. In order to allow for their existence, they could lower their critical levels such that a future with all those happy future beings and the one miserable individual is still preferred. This means that according to quasi-negative utilitarianism, the potential existence of one miserable person in the future does not imply that we should prefer a world where no-one will live in the future. However, what if a lot of future individuals (say a majority) have lives not worth living? The few happy potential people would have to decrease their own critical levels below zero in order to allow their existence. In other words: if the number of future miserable lives is too high, a future without any sentient being would be preferred according to quasi-negative utilitarianism.
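A hypothetical worked example of this trade-off (all utilities and critical levels are illustrative): the empty future scores a total relative utility of zero, so a populated future must beat zero to be preferred.

```python
# 100 happy future people (utility 800 each) plus one unavoidable
# miserable life (utility -500, with the lowest preferred critical
# level of 0). All numbers are hypothetical.

happy = 100
miserable_rel = -500 - 0  # relative utility of the miserable life

# With near-maximal critical levels (799), the populated future loses
# against the empty future (total 0):
print(happy * (800 - 799) + miserable_rel)  # -400 < 0

# If the happy people lower their critical levels to 790 to allow their
# own existence, the populated future is preferred:
print(happy * (800 - 790) + miserable_rel)  # 500 > 0
```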

If everyone chooses a high critical level such that we end up with a quasi-negative utilitarianism, we should give more priority to eliminating wild animal suffering than to eliminating existential risks, because lives with negative utilities are probably most common among wild animals and adding lives with positive well-being is only minimally important. In an extreme case where most future lives would be unavoidably very miserable (i.e. if the only way to avoid this misery is to avoid the existence of those future people), avoiding an existential risk could even be bad, because it would guarantee the continued existence of this huge misery. Estimating the distribution of utilities in future human and animal generations becomes crucial. But even if most future lives would be miserable with current technologies, it may still be possible to avoid that future misery by using new technologies. Hence, developing new methods to avoid wild animal suffering becomes a priority.

Expected value calculations

If total utilitarianism is true (i.e. if everyone chooses a critical level equal to zero), and if existential risks are eliminated, the resulting increase in total relative utility (of all current and far-future people) is very big, because the number of future people is so large. If quasi-negative utilitarianism is true (i.e. if everyone chooses their maximum preferred critical level), and if wild animal suffering is eliminated, the resulting increase in total relative utility of all current and near-future[1] wild animals is big, but perhaps smaller than the increase in total relative utility from eliminating existential risks according to total utilitarianism, because the number of current and near-future wild animals is smaller than the number of potential far-future people with happy lives. This implies that eliminating existential risks is more valuable, given the truth of total utilitarianism, than eliminating wild animal suffering, given the truth of quasi-negative utilitarianism.

However, total utilitarianism seems a less plausible population ethical theory than quasi-negative utilitarianism because it faces the sadistic repugnant conclusion. This implausibility of total utilitarianism means it is less likely that everyone chooses a critical level of zero. Eliminating existential risks is most valuable if total utilitarianism is true, but its expected value becomes lower because of the low probability of total utilitarianism being true. The expected value of eliminating wild animal suffering could become higher than the expected value of eliminating existential risks.
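The structure of this expected value comparison can be sketched as follows; the credences and conditional values are placeholder assumptions, and the ranking flips depending on which inputs one plugs in:

```python
# Hedged sketch of the expected value comparison. All credences and
# conditional values below are placeholder assumptions.

# Credence that everyone chooses a critical level of zero (total
# utilitarianism) versus the highest preferred level (quasi-negative
# utilitarianism).
p_total_util = 0.1
p_quasi_negative = 0.9

# Gain in total relative utility from eliminating X-risks, conditional on
# total utilitarianism (driven by up to ~10^32 far-future people), and
# from eliminating WAS, conditional on quasi-negative utilitarianism
# (driven by current and near-future wild animals).
value_xrisk_if_total = 1e32
value_was_if_quasi = 1e19

ev_xrisk = p_total_util * value_xrisk_if_total
ev_was = p_quasi_negative * value_was_if_quasi

# Which expected value is larger depends entirely on the contested inputs:
# a sufficiently low credence in total utilitarianism, or a lower estimate
# of the far-future population, can flip the ranking.
print(ev_xrisk, ev_was)
```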

But still, even if the fraction of future people who choose zero critical levels is very low, the huge number of future people indicates that guaranteeing their existence (i.e. eliminating existential risks) remains very important.

The interconnectedness of X-risks and WAS

There is another reason why reducing wild animal suffering might gain importance over reducing existential risks. If we reduce existential risks, more future generations of wild animals will be born. This increases the likelihood that more animals with negative utilities will be born. For example: colonizing other planets could be a strategy to reduce existential risks (e.g. blowing up planet Earth would not kill all humans if we could survive on other planets). But colonization of planets could mean introducing ecosystems and hence introducing wild animals, which increases the number of wild animals and increases the risk of more future wild animal suffering. If decreasing existential risks means that the number of future wild animals increases, and if this number becomes bigger and bigger, the non-existence of animals with negative utilities (i.e. the elimination of wild animal suffering) becomes more and more important.

On the other hand, suppose an existential risk kills all humans but non-human animals survive, and humans could have been the only hope for wild animals in the far future by inventing new technologies that eliminate wild animal suffering. In that case, an existential risk might make things worse for the animals in the far future. That means eliminating existential risks might become more important as eliminating wild animal suffering becomes more important.

So we have to make a distinction between existential risks that could kill all humans and animals, versus existential risks that would kill only those persons who could potentially help future wild animals. The second kind of existential risk is bad from the perspective of wild animal suffering, so eliminating this second kind of risk is important for eliminating wild animal suffering in the far future.

Victimhood

The difference between total utilitarianism (prioritizing the elimination of existential risks) and quasi-negative utilitarianism (prioritizing the elimination of wild animal suffering) can also be understood in terms of victimhood. If due to an existential risk a potential happy person would not exist in the future, that non-existing person cannot be considered a victim. That non-existing person cannot complain against his or her non-existence. He or she does not have any experiences and hence is not aware of being a victim. He or she does not have any preferences in this state of non-existence. On the other hand, if a wild animal has a negative utility (i.e. a miserable life), that animal can be considered a victim.

Of course, existential risks do create victims: the final generation of existing people would be harmed and would not like the extinction. But the number of people in that last generation will be relatively small compared to the many generations of many wild animals who can suffer. So if the status of victimhood is especially bad, wild animal suffering becomes worse than existential risks, because the problem of wild animal suffering creates more victims.

Neglectedness

Both existential risk reduction and wild animal suffering reduction are important focus areas of effective altruism, but reducing wild animal suffering seems to be more neglected. Only a few organizations work on reducing wild animal suffering: Wild-Animal Suffering Research, Animal Ethics, Utility Farm and the Foundational Research Institute. On the other hand, there are many organizations working on existential risks, both generally (e.g. the Centre for the Study of Existential Risk, the Future of Humanity Institute, the Future of Life Institute, the Global Catastrophic Risk Institute and 80,000 Hours) and specifically (working on AI safety, nuclear weapons, global warming, global pandemics,…). As wild animal suffering is more neglected, it has a lot of room for more funding. Based on the importance-tractability-neglectedness framework, wild animal suffering deserves a higher priority.

Summary

In the population ethical theory of variable critical level utilitarianism, there are two extreme critical levels that correspond with two dominant population ethical theories. If everyone chooses the lowest preferred critical level (equal to zero), we end up with total utilitarianism. If everyone chooses the highest preferred critical level, we end up with quasi-negative utilitarianism. According to total utilitarianism, we should give top priority to avoiding existential risks, such that the existence of many future happy people is guaranteed. According to quasi-negative utilitarianism, we should give top priority to avoiding wild animal suffering, such that the non-existence of animals with miserable lives (negative utilities) is guaranteed (but not always simply by decreasing or eliminating wild animal populations, and not necessarily at the cost of wiping out all life).

The value of eliminating existential risks when everyone chooses the lowest preferred critical level would probably be higher than the value of eliminating wild animal suffering when everyone chooses the highest preferred critical level. But total utilitarianism is less likely to be our preferred population ethical theory because it faces the sadistic repugnant conclusion. This means that the expected value of eliminating wild animal suffering could be bigger than the expected value of eliminating existential risks. These calculations become even more complex when we consider the interconnectedness of the problems of existential risks and wild animal suffering. For example, decreasing existential risks might increase the probability of the existence of more future wild animals with negative utilities. But eliminating some existential risks might also guarantee the existence of people who could help wild animals and potentially eliminate all future wild animal suffering with new technologies.

Finally, wild animal suffering deserves a higher priority because this focus area is more neglected than existential risks.



[1] We cannot simply add the relative utilities of far-future wild animals, because that would presume that existential risks are avoided.