Probability estimate for wild animal welfare prioritization

In this article I calculate my subjective probability estimate that the problem of wild animal suffering is the most important cause area in effective altruism. I use a Fermi estimate to calculate lower and upper bounds on the probability that research into interventions to improve wild animal welfare should be given top priority. A Fermi estimate breaks the probability up into several factors, such that the estimate of the total probability is the product of the estimates of the factors. In superforecasting, this method is known to increase accuracy and predictive power.

With the lower and upper bound estimates, and a discussion of the interconnectedness of cause areas, I estimate probabilities for the four major effective altruism cause areas. These probabilities can serve as allocation percentages for a donation portfolio.

Major cause areas

Effective altruism has four major cause areas, which are also reflected in the four effective altruism funds. First, there is the meta-level cause area: community building, prioritization research and awareness raising about effective altruism. Next, there are three object-level cause areas: human welfare, animal welfare and the long-term future. Those three object-level cause areas can be better split into four, based on two considerations: time (short-term versus long-term) and target subject (human versus non-human animal).

With these two dimensions, we can create four object-level cause areas. The short-term human cause area involves increasing current-generation human welfare, primarily by improving global health and human development and reducing extreme poverty. The long-term human cause area involves safeguarding far-future human welfare, primarily by avoiding existential risks (X-risks) that could end human civilization. The short-term animal cause area involves increasing animal welfare, primarily by decreasing animal suffering in factory farming. Finally, the long-term animal cause area deals with wild animal welfare, primarily through research into safe and effective interventions in nature to improve far-future wild animal welfare. Due to a lack of effective interventions and knowledge, short-term wild animal welfare improvements are unfeasible (intractable).

There are other cause areas, such as effective environmentalism, and other target subjects, such as ecosystems, plants, digital sentient entities or aliens, but these are less important: ecosystems and plants are most likely not sentient (they have no subjective interests), digital sentience does not yet exist, and aliens have not yet made contact with us.

Based on our beliefs and preferences, we can choose our preferred cause areas.

You should choose short-term cause areas in particular when:

· you prefer a person-affecting population ethic (making existing people happy instead of making extra happy people),

· you believe the population of target subjects (e.g. humans) will definitely and unavoidably go extinct in the not-so-far future, so attempts to improve far-future welfare would be pointless,

· you believe that increasing or maximizing happiness becomes unavoidable in the far future (for example, you believe we will unavoidably develop artificial superintelligent machines that automatically solve all problems of future sentient beings), so attempts to improve far-future welfare would be unnecessary, or

· you prefer an agent-relative ethic: a moral agent is allowed to be partial towards those individuals who are known by the agent to exist (e.g. towards those individuals who exist at the time when the agent makes a choice to help).

You should choose long-term cause areas in particular when:

· you prefer a population ethic that strongly values positive outcomes (e.g. total utilitarianism, which maximizes total future happiness or preference satisfaction), and you believe that a future state with positive aggregate welfare is possible, such that you prioritize avoiding X-risks (avoiding the future non-existence of many happy individuals), or

· you prefer a population ethic with a procreation asymmetry that strongly disvalues negative outcomes (e.g. suffering-focused ethics, some kinds of negative utilitarianism or variable critical level utilitarianism), and you believe that without proper interventions we get a future state with many suffering entities (with net-negative welfare), such that you prioritize avoiding S-risks (avoiding the future existence of many suffering individuals).

You should choose human cause areas in particular when:

· you prefer to be maximally sure about the level of sentience (by selecting target subjects who most strongly resemble you at a neurobiological level, or who can talk and clearly communicate their feelings to you),

· you prefer the most efficient welfare-improving solutions that require some minimum level of intelligence (e.g. economic market solutions that require an understanding of money, prices, property rights, incentives,…),

· you prefer to help those who can most effectively help others (e.g. the development of poor countries will increase the number of people who can do scientific research, humans have highly developed skills of cooperation, humans can design economic mechanisms that effectively create mutually beneficial situations), or

· you believe that most humans can reach higher levels of happiness (or suffering) than non-human animals.

You should choose animal cause areas in particular when:

· you believe sentience is more important than e.g. rationality or intelligence, and you believe animals are likely to be sentient and their potential welfare levels are not extremely smaller than those of humans.

Lower bound probability estimate

In this section I perform a Fermi estimate of the lower bound of the probability that wild animal welfare (the far-future animal cause area) gets priority. The total lower bound probability is the product of the probabilities of 14 conditions. I present my personal lower bound estimates for the moral validity or factual truth of each moral and factual condition. The probability estimate of each condition is conditional on the truth or validity of all the previous conditions (e.g. given that condition 1 is valid, how likely is it that condition 2 is valid?).

1. No unwanted arbitrariness

Ethical systems should not contain unwanted arbitrariness such as discrimination on the basis of time, place or species. When someone exists, where someone exists and to which biological category (race, species, genus, order,…) someone belongs are morally irrelevant.

My probability estimate (normative certainty) of this condition is >99%, which means I am highly confident about the moral validity of this principle to avoid unwanted arbitrariness. This estimate is based on my moral intuitions about fundamental moral reasons. For example: if I am allowed to include unwanted arbitrariness in my ethic, everyone else is allowed to do so as well, even if I do not want their kinds of arbitrariness, so I cannot rationally want this.

If this condition turns out to be invalid, we are allowed to prioritize current generations or humans (the short-term human cause area).

If ethical systems have to avoid unwanted arbitrariness, animals and the far future matter, but we do not yet know in what sense or how much they matter. To answer that question, we need to know our moral values.

2. Consequentialist ethic

Moral individualism and consequentialism are valid moral theories. This means that individual outcomes exist and are the only things that matter. An individual outcome does not necessarily include only the level of happiness, preference satisfaction or welfare of an individual; it can also include the severity of a violation of that individual's rights, or the level of autonomy and freedom of that individual. Individual outcomes include everything that the individual cares about.

My probability estimate (normative certainty) of this condition is >99%. This estimate is based on a personal preference for autonomy and avoiding arbitrariness: if I may impose my values on others, then someone else may also impose his values on me, and I cannot want that. Hence, the only things that I should morally value are the things that are valued by others. For example, I may value the well-being of a sentient individual, because that individual also cares about her own well-being. But I may not intrinsically value e.g. the naturalness of an ecosystem, the beauty of a painting or the integrity of a culture, because the ecosystem, the painting and the culture themselves do not care about anything. Similarly, a homophobic person may not value the sexual purity of a homosexual person (when he believes that homosexuality is impure), because that value is not shared by the homosexual person.

If this condition turns out to be invalid, we are allowed to prioritize environmental issues and the protection of cultural traditions, and we are allowed to impose our own values on others who cannot want that. For example, it allows for ecocentric values, where our (aesthetic) values of naturalness and integrity of ecosystems are considered more important than the welfare of sentient wild animals. This ecocentrism results in a hands-off policy where we should not intervene in nature to increase everyone's welfare.

If we choose a consequentialist ethic, we still have to figure out how to compare outcomes between different individuals. If the welfare of a wild animal is incomparable to the welfare of a human, we cannot yet decide whether to prioritize wild animal welfare.

3. Interpersonal comparability of outcomes

Outcomes (goodness or badness) of individuals can be measured and interpersonally compared to a degree sufficient to make comparisons useful. This means that an aggregate (total) outcome exists (by aggregating individual outcomes).

My probability estimate (factual certainty) of this condition is >75%. This estimate is based on my moral judgment that considerations of fairness or equality are important and sensible, as well as on factual neurobiological (and evolutionary) similarities between sentient beings, the existence of just-noticeable differences in experiences, and other considerations explained here and here.

If this condition turns out to be wrong, we can choose a very narrow contractualist ethic and a welfare economics restricted to mere Pareto efficiency. Such contractualism and Pareto efficiency are usually restricted to (a subgroup of) humans, and avoid issues of equality, which means that the scope is very narrow. The contractualism can be extended to include equality of opportunity. And under slightly more general conditions, a welfare economics with a principle of fair division of resources is possible, including both Pareto efficiency and (essential) envy-freeness. This means we could focus on efficient markets, equality of opportunity, fair property rights allocations, and basic rights and liberties. This is the area of short-term human welfare. However, if animals are included in the fair division of resources and basic liberties, wild animal welfare can become very important as well.

If outcomes are interpersonally comparable, we have to determine how they contribute to the aggregate outcome of all future individuals.

4. Positive and negative outcomes

Outcomes of individuals can be positive or negative. When a situation is chosen such that the overall lifetime outcome of an individual (over the course of its life) is net-positive (e.g. more positive than negative experiences), the individual has a life worth living.

My probability estimate (factual certainty) of this condition is >99%. This estimate is based on my personal experience: I can imagine a life with so much suffering that I would prefer non-existence (i.e. not being born), which means such a life is not worth living.

If this condition turns out to be wrong, we can exclude many population-ethical theories. We would not have to worry about creating lives not worth living, so antinatalist conclusions are automatically avoided. This means avoiding X-risks becomes much more important, although improving wild animal welfare might still be important due to the large number of animals in the future.

If future individual outcomes can be negative, we have to determine whether we can avoid the existence of individuals with negative welfare.

5. Positivity of future total outcome

A future with a total (aggregate) negative outcome, or a majority of lives not worth living, is avoidable. That means the total future outcome can be made positive by our choices. When the total future outcome is positive, positive experiences trump negative experiences (or lives with net-positive welfare trump lives with net-negative welfare), and most lives are worth living.

My probability estimate (factual certainty) of this condition is >95%. This estimate is based on my confidence in technological progress. If new technologies do not unavoidably result in extinction, it is not impossible that they will be used for good, to decrease negative outcomes.

If this condition turns out to be wrong, there is one conclusion: choose total extinction (e.g. antinatalism). If we cannot avoid a future dominated by suffering, then the more future generations are born, the more negative the total outcome of the future will be. Hence the avoidance of all future generations gets top priority.

If we can avoid aggregate negative outcomes, we have to determine how positive individual outcomes compare to negative outcomes.

6. Validity of asymmetric, suffering-focused population ethics

Some asymmetric, suffering-focused population ethic is more valid than total utilitarianism, which maximizes the sum of everyone's welfare. A suffering-focused ethic is characterized by an asymmetry: when someone has a net-negative life (i.e. a negative lifetime outcome), this always implies a negative contribution to the total aggregate outcome, but when someone has a net-positive life, this does not always imply a positive contribution to the total outcome. Total utilitarianism does not have such an asymmetry. Examples of asymmetric, suffering-focused ethics are some versions of negative utilitarianism, critical level utilitarianism, person-affecting views, or, most generally, variable critical level utilitarianism.

Total utilitarianism is susceptible to the repugnant sadistic conclusion (also called the very repugnant conclusion), which is probably its most counterintuitive implication. Consider the choice between two situations. In situation A, a number of extremely happy people exist. In situation B, the same people exist and experience extreme suffering (maximal misery), and a huge number of extra people exist, all with lives barely worth living (slightly positive welfare). If the extra population in B is large enough, the total welfare in B becomes larger than the total welfare in A. Hence, total utilitarianism would prefer situation B, which is sadistic (there are people with extreme suffering) and repugnant (a huge number of people have lives barely worth living and no one is very happy).

The simplest suffering-focused ethic is vulnerable to the extinction conclusion: if the only objective is to minimize suffering, the best future state is the one where no one will be born (because it may be impossible to avoid the birth of a life not worth living or a life with suffering). More nuanced suffering-focused ethics do not necessarily imply this conclusion, because of boundary constraints on the objective of minimizing suffering. So the condition states that there exist consistent suffering-focused ethics that avoid both the repugnant sadistic conclusion of total utilitarianism and the extinction conclusion, as well as other very counterintuitive conclusions.

My probability estimate (normative certainty) of this condition is >90%. This estimate is based on the strength of my moral intuition about the repugnant sadistic conclusion, and the flexibility of variable critical level utilitarianism to avoid very counterintuitive conclusions.

If this condition turns out to be wrong, total utilitarianism can be the preferred population ethic, which means we should strongly prioritize decreasing X-risks (if that guarantees a future with more positive than negative individual outcomes), although wild animal welfare might still be important due to the large number of future wild animals.

If a suffering-focused ethic is valid, we have to determine whether human or animal suffering in the future will decrease or increase.

7. Increasing human flourishing

Human flourishing will increase and suffering will decrease in the future (if humanity does not go extinct). The number of future human lives with net-negative welfare will be small and will decrease to become negligible.

My probability estimate (factual certainty) of this condition is >80%. This estimate is based on the past human trajectory: the evidence of human progress, economic growth, decreases (in absolute terms) in extreme poverty, mortality rates and violence, increases in human health, life expectancy and cooperation, welfare-improving technologies,…

If this condition turns out to be wrong, we could focus on human development, anti-poverty work and human health, especially if we prefer a person-affecting population ethic. However, even with increasing human suffering (decreasing flourishing), it could still be possible that the problem of wild animal suffering is bigger and hence more important.

If human flourishing will increase, we are left with animal suffering. Of all anthropogenic (human-caused) animal suffering, livestock farming is the biggest problem, due to the high number of livestock animals. So how does the welfare of livestock animals compare to that of wild animals?

8. Livestock elimination

Livestock farming and livestock animal suffering will be eliminated in the near future (e.g. this century).

My probability estimate (factual certainty) of this condition is >90%. This estimate is based on developments in animal-free food technologies (plant-based and cultivated meat), as well as increasing farm animal welfare concerns and decreasing meat consumption in many highly developed countries.

If this condition turns out to be wrong, we should probably focus more on veganism and the development of alternative foods. However, it is possible that veganism indirectly increases wild animal suffering, for example when livestock farms (e.g. grasslands) are replaced by forests and natural habitats. This means that wild animal suffering could remain important.[i]

The investments in animal-free food technologies (billions of dollars by large food companies), and the campaigning by vegan organizations, mean that the problem of livestock animal suffering is less neglected than the problem of wild animal suffering. If livestock farming gets eliminated, wild animal suffering becomes the biggest remaining problem of animal suffering, especially if many wild animals have net-negative welfare.

9. Net-negative lives of wild animals

Many wild animals have lives not worth living, i.e. with a net-negative lifetime welfare.

My probability estimate (factual certainty) of this condition is >80%. This estimate is based on the high reproduction rates (r-selection population dynamics) and short lifespans of most animals, and the abundance of causes of suffering (diseases, injuries, parasitism, starvation, predation,…).

If this condition turns out to be wrong, we could focus on the welfare of the current generation (of humans or animals) or on X-risk reduction. However, even if they have net-positive lives, wild animals could still have the lowest welfare levels (compared to humans), such that wild animal welfare improvements remain important.

Even if animals have net-negative welfare, the magnitude of their welfare can still be very small compared to that of humans.

10. Non-negligible welfare of wild animals

Wild animals have sufficiently high levels of sentience that wild animal suffering is a big problem.

My probability estimate (factual certainty) of this condition is >90%. This estimate is based on e.g. brain sizes and the fact that there are orders of magnitude more wild animals than humans. So even if a smaller brain implies a smaller welfare potential, the huge number of animals means that their total suffering can be huge.

If this condition turns out to be wrong, we could prioritize human welfare (or the development of supersentient artificial intelligence with an extremely high welfare potential).

If future wild animal suffering is not negligible, there may still be other, bigger causes of suffering.

11. Dominance of wild animal suffering

Most far-future lives with net-negative welfare will be wild animals, rather than e.g. plants, digital sentient entities or computer-simulated conscious beings.

My probability estimate (factual certainty) of this condition is >90%. This estimate is based on the lack of evidence that plants are conscious, my low confidence that we can and will create huge numbers of digital sentient entities with net-negative experiences, and the high probability that we can easily improve the welfare of digital sentience once it exists.

If this condition turns out to be wrong, we should focus on the welfare of digital sentience, and especially avoid the related S-risks (e.g. the simulation of countless digital entities that suffer).

Even if there are no bigger suffering problems beyond wild animal suffering, it is still possible that all attempts to improve wild animal welfare will be futile, e.g. if we go extinct.

12. No human extinction or knowledge loss

Humans will not go extinct before we can drastically improve far-future wild animal welfare. This means we do not go extinct during the upcoming technological revolutions. For example, we will survive the transition towards a world with artificial superintelligence (machines that are more generally intelligent than humans). Once this superintelligence is created, it can help us avoid all other kinds of X-risks, so the transition towards superintelligence may be the last important barrier to human survival.

This condition also includes the non-extinction of human knowledge. A big human catastrophe that does not result in the total extinction of humanity could still result in the loss of all gained knowledge about wild animal welfare interventions. This would mean all current investments in wild animal welfare research would become futile, and surviving future human generations would have to restart the research from scratch.

My probability estimate (factual certainty) of this condition is >80%. This is based on expert surveys about existential risks. It means that the probability of extinction in the transition period could be as high as 20% if more resources are spent on wild animal suffering reduction instead of X-risk reduction.

If this condition turns out to be wrong, we should focus on current-generation human welfare or X-risk reduction (the short-term and long-term human welfare cause areas).

Even if humans do not go extinct, it is not guaranteed that we will develop and invent technologies that sufficiently improve wild animal welfare. The problem of wild animal suffering could simply be unsolvable.

13. Tractability of wild animal suffering

Crucial problems of wild animal suffering are solvable, and it is possible to make progress in the research into technologies that improve wild animal welfare. This implies that the problem of wild animal suffering is tractable, including the possibly hardest subproblems of procreation (r-selection population dynamics) and predation.

My probability estimate (factual certainty) of this condition is >95%. This estimate is based on progress in environmental sciences, human health (vaccines), genetic manipulation (gene drives), cultivated meat, artificial intelligence,… Given the past track record of inventions that improved human welfare (e.g. eradicating diseases), it is unlikely that we will never find technologies that significantly improve wild animal welfare.

If this condition turns out to be wrong, we should focus on current-generation human welfare or tractable X-risk reduction.

Even if the problem of wild animal suffering is large, neglected and tractable, which would give it top priority, it is still possible that other cause areas or interventions (e.g. on climate change, veganism,…) will automatically and sufficiently improve wild animal welfare.

14. No indirect interventions

There will be no other (non-wild-animal-suffering-related) interventions that automatically and sufficiently solve the problem of wild animal suffering.

My probability estimate (factual certainty) of this condition is >95%. This estimate is based on the apparent complexity of the problem of wild animal suffering. It is unlikely that other interventions will have larger overall positive effects on wild animal welfare, because, due to this complexity, those interventions have many positive and negative spillover and flow-through side effects.

Overall estimate of lower bound

Multiplying the above probability estimates, the lower bound of the probability that wild animal welfare is a top priority is around 25%. This is the lowest bound, because even if some of the above conditions are not met, wild animal welfare might still be a top priority for other reasons. My lower bound estimate of the probability that wild animal suffering is still a top priority when one or more of the above conditions are not met is between 0% and 25%. Hence, the total lower bound is somewhere between ¼ and ½.
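The multiplication above is easy to check. The following sketch takes each of the 14 conditional estimates at its stated value (treating each ">x%" conservatively as exactly x%; the dictionary keys are just labels, not part of the original text):

```python
# Fermi estimate: the lower bound is the product of the 14 conditional
# probability estimates from conditions 1-14 above.
conditions = {
    "no unwanted arbitrariness": 0.99,
    "consequentialist ethic": 0.99,
    "interpersonal comparability of outcomes": 0.75,
    "positive and negative outcomes": 0.99,
    "positivity of future total outcome": 0.95,
    "asymmetric suffering-focused population ethics": 0.90,
    "increasing human flourishing": 0.80,
    "livestock elimination": 0.90,
    "net-negative lives of wild animals": 0.80,
    "non-negligible welfare of wild animals": 0.90,
    "dominance of wild animal suffering": 0.90,
    "no human extinction or knowledge loss": 0.80,
    "tractability of wild animal suffering": 0.95,
    "no indirect interventions": 0.95,
}

lower_bound = 1.0
for p in conditions.values():
    lower_bound *= p

print(round(lower_bound, 2))  # 0.21
```

Taking each estimate at its stated minimum gives a product of about 0.21; since each factor is a lower bound (">x%"), the true product is somewhat higher, consistent with the "around 25%" figure.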

Upper bound probability estimate

I will also calculate an upper bound on the importance of wild animal welfare, by calculating lower bounds for the other major effective altruism cause areas.

Consider the reduction of X-risks (the long-term human welfare cause area). I will assume the same estimates for the first three conditions (no unwanted arbitrariness, consequentialism and interpersonal comparability of welfare) as above. Next, my estimate that total utilitarianism is valid is 10% (the complement of the >90% probability, from condition 6, that an asymmetric suffering-focused ethic is more valid). The probability that the total future outcome will not unavoidably be negative is 95%, as above. Given a positive future outcome and the validity of total utilitarianism, the likelihood that an X-risk is the worst outcome is more than 99%, because by far the most net-positive lives will be in the far future, and the value of all those lives would be lost with an X-risk. The probability estimate of the tractability (solvability) of X-risk reduction is 95%. Finally, the probability estimate that humanity will not go extinct even without any investments in X-risk reduction is 20%. Hence the likelihood that X-risk interventions will not be futile (i.e. will be necessary and have some impact) is 80%. Together, the lowest bound on the X-risk reduction priority is 5%. However, there are many other situations (with other conditions being met) where X-risk reduction is a top priority. My lower bound estimate of the probability that X-risk reduction is still a top priority when one or more of the above conditions are not met is between 0% and 25%.

The other major effective altruism cause areas (short-term human and animal welfare) have lower probability estimates, but these are not negligible. There is also some probability that there are as-yet-unknown important cause areas. Together, the sum of the probabilities of all cause areas other than wild animal welfare (the long-term animal cause area) puts an upper bound on the likelihood of wild animal welfare being the top priority. I estimate this upper bound to be between 50% and 90%.

Hence, the wild animal welfare priority has a wide-margins likelihood between 25% and 90%, and a narrow-margins likelihood around 50%.
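The X-risk chain can be checked the same way. This sketch multiplies the eight factors listed in the paragraph above (the comments are my labels for each factor):

```python
# Lowest bound on the X-risk reduction priority: the product of the
# estimates given in the paragraph above.
factors = [
    0.99,  # no unwanted arbitrariness
    0.99,  # consequentialist ethic
    0.75,  # interpersonal comparability of welfare
    0.10,  # total utilitarianism is valid
    0.95,  # total future outcome not unavoidably negative
    0.99,  # X-risk is the worst outcome, given the above
    0.95,  # tractability of X-risk reduction
    0.80,  # X-risk interventions not futile
]

bound = 1.0
for f in factors:
    bound *= f

print(round(bound, 2))  # 0.05
```

The product comes out to about 0.05, matching the 5% lowest bound on the X-risk reduction priority stated above.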

Interconnections and indirect cause areas

Even if wild animal welfare is not the major cause area, there are several interconnections between the different cause areas. The other major cause areas have positive indirect effects on wild animal welfare. This means those other cause areas gain relative importance.

Short-term animal welfare

The most important problem for short-term animal welfare is livestock animal suffering. Decreasing livestock farming (including fish farms), by promoting and developing animal-free alternatives (e.g. plant-based egg substitutes and cultivated meat), directly reduces livestock animal suffering. But this veganism has beneficial side effects for wild animal welfare. First, if humans decrease their meat consumption, the cognitive dissonance between meat consumption (behavior) and animal welfare (attitude) decreases, which means that the value of animal welfare becomes less suppressed. This facilitates a moral circle expansion, where animals are included in the moral circle. Animal welfare values can spread more easily in a vegan society, which means people become more interested in wild animal welfare. Also, the development of cultivated meat could eventually benefit wild predators, saving prey animals from unnecessary suffering.

Short-term hu­man welfare

Eco­nomic de­vel­op­ment and poverty re­duc­tion could also in­crease wild an­i­mal welfare, by in­creas­ing re­search. If peo­ple are richer, they are more will­ing to spend some money on in­ter­ven­tions that im­prove the welfare of oth­ers, in­clud­ing the welfare of wild an­i­mals. There­fore, GDP-growth is im­por­tant. For ex­am­ple, if the poor­est 4/​5th of the world pop­u­la­tion be­comes as rich as the rich­est 1/​5th, the in­vest­ments in wild an­i­mal welfare re­search could in­crease five­fold, be­cause cur­rently all re­search is done only in the rich­est part of the world.

Long-term human welfare

Avoiding existential risks that could wipe out humanity is important, because if humans go extinct, wild animals would have to wait another few million years before other intelligent lifeforms evolve that are able to develop technologies for effective wild animal welfare interventions, or wait for extraterrestrial beings who care about animal welfare to arrive on Earth.

Artificial superintelligence is probably the biggest X-risk, but it also offers the best solutions against other X-risks as well as wild animal suffering. Therefore, research in AI safety becomes important: it helps us avoid an unwanted artificial superintelligence (with value misalignment) and develop superintelligent machines that follow our value of promoting both human and animal welfare. Safe and effective interventions in nature to improve wild animal welfare will drastically improve with safe artificial superintelligence.

Summary: probability estimates of major cause areas

With the above Fermi calculations and interconnectedness considerations of cause areas, I guesstimate the following probabilities for a major cause area to be top priority:

Long-term animal welfare (in particular reducing wild animal suffering): 1/3 or higher.

Long-term human welfare (in particular reducing existential risks): 1/4.

Short-term animal welfare (in particular reducing livestock farming and fishing/aquaculture): 1/4.

Short-term human welfare (in particular reducing extreme poverty): 1/6 or lower.

Reducing wild animal suffering is the most important cause area. Unfortunately, it is also by far the most neglected in the effective altruism community. I estimate that the current total worldwide workforce involved in wild animal welfare research is less than 10 full-time equivalents (with only a few organizations: Wild Animal Initiative, Animal Ethics and, to a lesser degree, Rethink Priorities). This is orders of magnitude smaller than the attention given to X-risk reduction, veganism or human development.

The probability guesstimates can be used as allocation percentages for a donation portfolio (or for donation allocations at EA Funds).
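As a sanity check, the four guesstimates (read as the fractions 1/3, 1/4, 1/4 and 1/6) sum to one, so they can serve directly as portfolio weights. A minimal sketch of such a donation split (the $1,000 budget is an arbitrary illustration, not from the article):

```python
# Guesstimated top-priority probabilities, used as donation-portfolio weights.
allocation = {
    "long-term animal welfare (wild animal suffering)": 1 / 3,
    "long-term human welfare (existential risks)": 1 / 4,
    "short-term animal welfare (livestock farming, aquaculture)": 1 / 4,
    "short-term human welfare (extreme poverty)": 1 / 6,
}

# The weights should cover the whole portfolio (sum to 1).
assert abs(sum(allocation.values()) - 1.0) < 1e-9

# Illustrative split of a $1,000 donation budget.
budget = 1000
for cause, weight in allocation.items():
    print(f"{cause}: ${budget * weight:.2f}")
```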


-----

[i] If insects are sentient, it is not yet clear whether grassland for livestock really involves less animal suffering than, for example, a forest. Grassland also contains birds of prey, wasps, insect parasites and other animals that cause suffering, as well as diseases, food shortages and more. Forests could produce more food and offer more protection for animals, but they can also increase animal abundance and hence the number of animals with lives not worth living. So with livestock farming we have a situation of directly visible harm plus much greater indirect, invisible harm. With vegan agriculture we have more nature, which means no direct animal harm but still very large indirect, invisible harm. We do not know which of the two situations has less indirect harm. We could then use a provisional rule of thumb to limit known, direct, visible harm and therefore opt for veganism. This is reasonable. We have four numbers: x (direct harm to livestock animals), X (suffering of wild animals in nature in a world with livestock farming), y (direct harm with veganism) and Y (suffering of wild animals in nature in a vegan world). We know for sure that y is 0 and that x is bigger than y, but we do not know whether X is bigger than Y. With this knowledge, our subjective probability estimate that y+Y is less than x+X is strictly greater than 50%. Even if it were only 50.0001%, it would still be reasonable to opt fully for y+Y (i.e. veganism). Suppose a coin has a 50.0001% chance of landing heads and you can guess a million times. Many people believe the best strategy is to alternate guesses, with about 500,001 heads, but guessing heads all million times is better.
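The coin claim can be checked with a short expected-value calculation (a sketch using the 50.0001% figure from the text):

```python
# Expected number of correct guesses over n independent flips, where each
# flip lands heads with probability p (slightly more than one half).
p = 0.500001
n = 1_000_000

# Strategy 1: always guess heads; each guess is right with probability p.
always_heads = n * p  # about 500,001 correct guesses on average

# Strategy 2: guess heads on a fraction f of the flips, tails on the rest
# (alternating gives f = 0.5). A heads-guess is right with probability p,
# a tails-guess with probability 1 - p.
f = 0.5
mixed = n * (f * p + (1 - f) * (1 - p))  # about 500,000 correct on average

print(always_heads, mixed)
```

Always guessing the more likely outcome beats any mixed strategy in expectation, however small the edge; mixing only matches the coin's frequencies, it does not exploit them.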
In any case, the value of information about the relative sizes of X and Y is very high, so if we promote veganism, we should also do much more research to estimate the indirect harms suffered by wild animals.