[WIP] Summary Review of ITN Critiques

Note: Although this review draws together reviews of ITN and mentions prominent EA organizations, it takes no position on the validity of these critiques. The purpose is merely to collate, summarize and categorize them for ease of reference. For this review, only the main body of linked posts was used. Brief citations are given at the end. This is a work in progress, and any additional corrections or references are appreciated.

Introduction

The Important, Tractable, Neglected (ITN) framework is a heuristic for estimating the marginal utility of additional resources applied to an altruistic cause. Personal fit is sometimes added as an additional criterion, depending on the purpose of the cause prioritization effort. It is heavily featured in writing within the Effective Altruist movement, including in the introduction to the movement on EffectiveAltruism.org, the Key Ideas article on 80000Hours.org, and the Focus Areas page of Open Philanthropy. The importance of cause prioritization is a common theme of the EA movement, and the ITN framework is one of the most prominent frameworks for carrying out this analysis.

An array of critiques of the ITN framework has emerged, as well as responses to them, which are categorized and summarized here. Some critiques are theoretical and others are of its application. Theoretical critiques generally examine how ITN diverges from an ideal model of cause prioritization. Critiques of application focus on how specific individuals or organizations use ITN, either for cause prioritization or as a rhetorical device. In reviewing the use of ITN or any other cause prioritization approach, it is useful to keep the organizational or social context in mind.

There are many critiques of cause prioritization generally that are not specific to ITN. Alternative methods for estimating cost-effectiveness exist [12] [13]. Problems of cause prioritization in general are beyond the scope of this review. When they are mentioned, it is in the context of how ITN leads to blindness or mistaken assumptions around those deeper issues.

The following is a list of critiques of ITN.

Neglectedness improperly assumes diminishing marginal returns

Neglectedness assumes that increased investment in a cause area suffers from diminishing marginal returns (DMR) to investment [2] [3] [4]. DMR could be represented as a graph showing the relationship of past investment to expected marginal returns to additional investment. Neglectedness as a criterion supposes that additional investment will necessarily decrease the marginal utility of further investment. While this may be true beyond some threshold specific to a given cause, other effects may be more relevant in context, such as the following (see the sketch after this list):

  • Setup costs (high expense and little to no value at the beginning) [1]

  • Payoffs being clustered at the end of an effort [2] [3]. For causes whose value is clustered in this way, it might make more sense to evaluate average marginal returns [2].

  • Economies of scale (cost of productive units may decrease with investment, even if value produced by each unit decreases) [2] [3]

  • Ability to take on high risk/high reward projects increasing with investment [2]

  • Technological development or political and economic change providing additional opportunities for productive investment over time [2]

  • Economic theory that justifies assumptions of DMR in a profit-maximizing market context may not apply in charitable contexts [8].
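As a rough illustration of how these effects change the picture, the sketch below compares two invented cumulative-return curves: a classic concave (DMR) curve and a logistic curve standing in for setup costs or payoffs clustered at the end of an effort. The functions and numbers are hypothetical and chosen only to make the contrast visible.

```python
# Illustrative sketch only: the return curves below are invented, not taken
# from any cited source.
import numpy as np

def cumulative_dmr(x):
    # Classic diminishing marginal returns: concave throughout.
    return np.log1p(x)

def cumulative_setup_costs(x):
    # Setup costs / payoffs clustered late: little value early, most value
    # arriving only after substantial investment (logistic-shaped).
    return 1.0 / (1.0 + np.exp(-(x - 5.0)))

x = np.linspace(0.0, 10.0, 101)  # total investment, arbitrary units
for name, f in [("DMR", cumulative_dmr), ("setup costs", cumulative_setup_costs)]:
    marginal = np.gradient(f(x), x)  # approximate marginal return at each investment level
    print(f"{name}: marginal return at x=1 is {marginal[10]:.3f}, at x=5 is {marginal[50]:.3f}")
```

Under the DMR curve the less-invested point has the higher marginal return, while under the setup-cost curve the better-funded point does, which is the reversal several of the critiques above describe.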

Neglectedness prioritizes change over magnitude of marginal returns

One cause may face diminishing marginal returns (MR), yet have high absolute MR. Another might have low absolute MR but enjoy increasing MR, perhaps due to high setup costs. Neglectedness favors the latter cause over the former, which may not be appropriate [2] [3]. This may result in prioritization of smaller causes over larger ones, as small causes may have less to do and therefore may attract less investment [3]. This might be accounted for in measures of Importance, but given the informal heuristic nature of ITN this is not guaranteed.
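A toy example, with made-up numbers, illustrates the distinction between the trend and the magnitude of marginal returns:

```python
# Hypothetical numbers for illustration only (not taken from any cited source).
# Cause A: diminishing but high marginal returns; Cause B: increasing but low ones.
marginal_returns_A = [10.0, 9.5, 9.0]  # good done per extra unit at successive funding levels
marginal_returns_B = [1.0, 1.5, 2.0]

trend_A = marginal_returns_A[-1] - marginal_returns_A[0]  # negative: returns are falling
trend_B = marginal_returns_B[-1] - marginal_returns_B[0]  # positive: returns are rising

# A heuristic that rewards the trend favors Cause B, even though every extra
# unit given to Cause A currently does several times more good.
print("trend:", trend_A, "vs", trend_B)
print("current marginal return:", marginal_returns_A[-1], "vs", marginal_returns_B[-1])
```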

Neglectedness may mislead our intuitions about what counts as an “investment”

Once an appropriate DMR graph shape and scale is selected, the analyst must decide how to measure the level of investment thus far. Present investments seem clearly to count. Past efforts may also count: they may have picked the low-hanging fruit, leaving little for further present efforts to accomplish [3]. Yet economic, political, and technological dynamism may reveal opportunities unavailable to past efforts, as described above [2] [3]. Some causes may be more or less likely to be accomplished in the future even if they are neglected now, so anticipated future efforts may also count [3] [11].

In some areas, such as biological research, it is my view that natural selection should be counted as a “past investment.” While natural selection is not a teleological force, its effect is to enhance the fitness of a species through biochemical change. This overlaps with the work of medical research to some extent. Other indirect work or natural forces may be relevant to other cause areas. It is also unclear how we should count directly contradictory efforts, such as pro- and anti-nuclear climate change prevention advocacy [2], overlapping cause areas, or general competition in moral advocacy [3].

Although this concern applies to any cause prioritization effort, not just ITN, the term “neglectedness” may lead analysts to focus too narrowly on present and past deliberate human efforts.

Some translations of ITN into equations have mathematical problems

80,000 Hours converts ITN into an equation:

  • Scale (of the problem we’re trying to solve) = Good done / % of the problem solved

  • Solvability = % of the problem solved / % increase in resources

  • Neglectedness = % increase in resources / extra person or $

In this conversion, “% of the problem solved” and “% increase in resources” would algebraically cancel out, simplifying to “Good done / extra person or $” and making Solvability (another term for Tractability) irrelevant [2] [12].
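Written out, the cancellation the critique describes looks like this:

$$
\underbrace{\frac{\text{Good done}}{\%\ \text{of the problem solved}}}_{\text{Scale}}
\times
\underbrace{\frac{\%\ \text{of the problem solved}}{\%\ \text{increase in resources}}}_{\text{Solvability}}
\times
\underbrace{\frac{\%\ \text{increase in resources}}{\text{extra person or \$}}}_{\text{Neglectedness}}
=
\frac{\text{Good done}}{\text{extra person or \$}}
$$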

Prior to simplification, if each factor were evaluated independently, Solvability would be undefined and conceptually “unsolvable” if the problem had received zero investment [2].

Personal fit may be undervalued in some analyses [2].

80,000 Hours assigns numerical rankings to their ITN analysis of causes [5]. Their highest ranked cause is “risks from artificial intelligence,” at a combined ITN score of 27. “Developing world health” is ranked 21. In their estimation, each point makes a cause three times as pressing. As “risks from artificial intelligence” is ranked 6 points higher than “developing world health,” it is estimated to be roughly 3^6, or 729, times as important. Other causes ranked at a score of 20 include extreme climate change risks, land use reform, and smoking in the developing world.

They also use a Personal Fit score from 0-4, which is added to each rank [6].

Taken literally, this advice means that 80,000 Hours encourages career changers working in world health or lower-ranked fields to focus on AI safety, even if they are an actively bad fit. 80,000 Hours advises taking the scores with “a big pinch of salt,” and they also say that their scores might be off by “a couple of points.” If AI safety were accurately ranked 25 instead of 27, and the lower-ranked causes were increased by two points to 22 or 23, personal fit would win out over cause prioritization in this hypothetical case.
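A minimal sketch of this arithmetic, assuming (as the scores above suggest) that pressingness scales as three to the power of the cause score plus personal fit:

```python
# A minimal sketch of the scoring arithmetic, assuming each point makes a cause
# three times as pressing and that the 0-4 personal fit score is simply added
# to the cause's rank. The scenario numbers are the hypothetical ones above.
def pressingness(cause_score, personal_fit):
    return 3 ** (cause_score + personal_fit)

# Published scores taken literally: AI risk (27) vs. developing world health (21).
print(pressingness(27, 0) / pressingness(21, 0))  # 3**6, i.e. 729

# A "couple of points" of error: AI safety at 25 with an actively bad fit (0)
# vs. a lower-ranked cause at 23 with an excellent fit (4).
print(pressingness(25, 0) / pressingness(23, 4))  # 3**-2, about 0.11, so fit wins out
```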

Neglectedness improperly assumes irrationality or value divergence of other actors

Greater value overlap and rationality among investors in altruistic causes weaken the relationship between Neglectedness and expected MR [7] [11]. Others may have information we lack. Under these conditions, neglectedness is a potential sign of intractability or unimportance that the analyst has missed [7]. Conversely, when there is reason to expect irrationality or value divergence between the analyst and society at large, neglect is less likely to be a sign of hidden intractability or unimportance [2] [7] [8].

ITN can be simplistic while also muddying the water

ITN can be a quick top-level heuristic, but it may also be described as a core part of a cause prioritization analysis [2]. It can be unclear whether ITN rankings or scores should be understood as the outputs of an unpublished but explicitly specified formal quantitative analysis, or as an informed but essentially intuitive judgment. It is sometimes used, contrary to advice, as a means for assessing interventions rather than causes [4]. However, the boundary between a problem, a solution, and a cause or focus area is not always clear.

The way ITN reduces complex arguments to a simple score, as well as the framework’s sheer prevalence, can also encourage a dismissive attitude and a lack of critical thinking [2] [12].

Multiple competing definitions of the three factors exist, and some are misleading

Some people use “room for more funding” or other terms improperly to refer to neglectedness [2] [7]. Intuitive proxy definitions of neglectedness may require an effortful translation to be accurate [3].

Similar intuitive but poor definitions exist for tractability [12].

In assessing Importance, it is unclear how to assign relative weights to scale, severity, indirect effects, and the resources that could be repurposed once a cause is solved [10] [11]. In evaluating Importance, it may be best to focus on the scale of the limiting or bottlenecked solution factor rather than the sheer scale of the problem [9]. Learning value and indirect effects may matter, but terms like “Scale,” “Scope,” or “Importance” may guide our intuition to ignore them [1] [4]. More broadly, the cluelessness literature deals with the challenge of accurately counting the indirect effects of actions into the long-term future [11].

The three fac­tors can over­lap, lead­ing to dou­ble count­ing.

“Unfortunately, using the common definitions of these words they blur heavily into one another...

Naturally, a cause that’s highly neglected will score well on neglectedness. It can then go on to score very well on importance because its neglectedness means the problem remains serious. Finally, it can also score well on tractability, because the most promising approaches are not yet being taken, because it’s neglected.

Most of the case in favour of the cause then boils down to it being neglected, rather than just one third, as originally intended.” [10]

Citations

[1] A cause can be too neglected

[2] Against neglectedness

[3] Complications in evaluating neglectedness

[4] Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply)

[5] List of the most urgent global issues

[6] How to compare different global problems in terms of impact

[7] Is Neglectedness a Strong Predictor of Marginal Impact?

[8] Neglectedness and impact

[9] How scale is often misused as a metric and how to fix it

[10] The Important/Neglected/Tractable framework needs to be applied with care

[11] Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness

[12] The ITN framework, cost-effectiveness, and cause prioritisation

[13] Cost-effectiveness of research: overview