# Formalizing the cause prioritization framework

When prioritizing causes, what we ultimately care about is how much good we can do per unit of resources. In formal terms, we want to find the causes with the highest marginal utility per dollar, MU/$ (or, marginal cost-effectiveness). The Importance-Tractability-Neglectedness (ITN) framework has been used as a way of calculating MU/$ by estimating its component parts. In this post I discuss some issues with the current framework, propose a modified version, and consider a few implications.

80,000 Hours defines ITN as follows:

• Importance = utility gained / % of problem solved

• Tractability = % of problem solved / % increase in resources

• Neglectedness = % increase in resources / extra $

With these definitions, multiplying all three factors gives us utility gained / extra $, or MU/$ (as the middle terms cancel out). However, I will make two small amendments to this setup. First, it seems artificial to have a term for “% increase in resources”, since what we care about is the per-dollar effect of our actions.[1] Hence, we can instead define tractability as “% of problem solved / extra $”, and eliminate the third factor from the main definition. So to calculate MU/$, we simply multiply importance and tractability:

MU/$ = Importance × Tractability

This defines MU/$ as a function of the amount of resources allocated to a problem, which brings me to my second amendment. Apart from the above definition, 80k defines ‘neglectedness’ informally as the amount of resources allocated to solving a problem. This definition is confusing, because the everyday meaning of ‘neglected’ is “improperly ignored”. To say that a cause is neglected intuitively means that it is ignored relative to its cost-effectiveness. But if neglectedness is supposed to be a proxy for cost-effectiveness, this everyday meaning is circular. And really, how useful is the advice to focus on causes that have been improperly ignored? This should go without saying.
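As a sanity check, here is a minimal numeric sketch (all numbers hypothetical) showing that the three ITN factors multiply to MU/$, and that folding neglectedness into tractability gives the same answer:

```python
# Hypothetical numbers for a single cause; units shown in comments.
importance = 100.0    # utility gained per % of problem solved
tractability = 0.5    # % of problem solved per % increase in resources
neglectedness = 0.01  # % increase in resources per extra dollar
                      # (with $10,000 allocated, +$1 is a 0.01% increase)

# Original ITN: the middle terms (% solved, % increase) cancel out,
# leaving utility per extra dollar.
mu_per_dollar = importance * tractability * neglectedness

# Amended ITC: fold neglectedness into tractability, so tractability
# becomes "% of problem solved per extra dollar".
tractability_per_dollar = tractability * neglectedness
mu_per_dollar_itc = importance * tractability_per_dollar

# Both routes give the same marginal utility per dollar (0.5 utils/$).
assert abs(mu_per_dollar - mu_per_dollar_itc) < 1e-12
assert abs(mu_per_dollar - 0.5) < 1e-12
```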

I suggest we instead use “crowdedness” to mean the amount of resources allocated to a problem. This captures intuitions about diminishing returns (other things equal, a more crowded cause is less cost-effective), uses an absolute rather than a relative standard, and avoids the problem of having the technical definition conflict with the everyday meaning.

Thus, our revised framework is now ITC:

• Importance = utility gained / % of problem solved

• Tractability = % of problem solved / extra $

• Crowdedness = $ allocated to the problem

So how does crowdedness fit into this setup, if it’s not part of the main definition? Intuitively, tractability will be a function of crowdedness: the % of the problem solved per dollar will vary depending on how many resources are already allocated. This is the phenomenon of diminishing marginal returns, where the first dollar spent on a problem is more effective in solving it than is the millionth dollar. Hence, crowdedness tells us where we are on the tractability function.

## A graphical approach

Let’s see how this works graphically. First, we start with tractability as a function of dollars (crowdedness), as in Figure 1. With diminishing marginal returns, “% solved/$” is decreasing in resources. Next, we multiply tractability by importance to obtain MU/$ as a function of resources, in Figure 2. Assuming that Importance = “utility gained/% solved” is a constant[2], all this does is change the units on the y-axis, since we’re multiplying a function by a constant.
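The two figures can be sketched in code. This is a minimal sketch assuming a made-up functional form for tractability; the framework itself doesn’t prescribe one:

```python
IMPORTANCE = 200.0  # utils per % of problem solved (assumed constant)

def tractability(resources: float) -> float:
    """Figure 1: % of problem solved per extra dollar, decreasing in
    resources (diminishing marginal returns). Functional form is made up."""
    return 1.0 / (resources + 10_000.0)

def mu_per_dollar(resources: float) -> float:
    """Figure 2: importance times tractability. Since importance is a
    constant, this has the same shape as Figure 1, with different
    units on the y-axis."""
    return IMPORTANCE * tractability(resources)

# Diminishing returns: the first dollar does more good than the millionth.
assert mu_per_dollar(0.0) > mu_per_dollar(1_000_000.0)
```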

Now we can clearly see the amount of good done for an additional dollar, for every level of resources invested. To decide whether we should invest more in a cause, we calculate the current level of resources invested, then evaluate the MU/$ function at that level of resources. We do this for all causes, and allocate resources to the highest MU/$ causes, ultimately equalizing MU/$ across all causes as diminishing returns take effect. (Note the similarity to the utility maximization problem from intermediate microeconomics, where you choose consumption of goods to maximize utility, given their prices and subject to a budget constraint.)

While MU/$ is sufficient for prioritizing across causes, we can also look at total utility, by integrating the MU/$ function over resources spent. Figure 3 plots the total utility gained from spending on a problem, as a function of resources spent. Note that the slope is equal to MU/$, which is decreasing in $.

## Implications

(1) All three factors in the ITC framework are necessary to draw a conclusion about which cause is best. Consider this passage from the 80k article:

> [M]ass immunisation of children is an extremely effective intervention to improve global health, but it is already being vigorously pursued by governments and several major foundations, including the Gates Foundation. This makes it less likely to be a top opportunity for future donors.

This last sentence is not strictly true. To be precise, all we can say is that other things equal, a cause with more resources has lower MU/$. That is, for two causes with the same MU/$ function, the cause with higher resources will be farther along the function, and hence have a lower MU/$. If other things are not equal, the cause with more resources may have a higher or lower MU/$.
(And generally, if a cause scores low on one of the three factors, it can still have the highest MU/$, through high scores on one or both of the other two factors.)
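To make the allocation rule concrete, here is a minimal sketch with two hypothetical MU/$ curves: each marginal dollar goes to the cause with the highest MU/$, and with diminishing returns this roughly equalizes MU/$ across funded causes.

```python
# Hypothetical MU/$ curves for two causes, each decreasing in spending.
def mu_a(x: float) -> float:
    return 100.0 / (x + 10.0)

def mu_b(x: float) -> float:
    return 300.0 / (x + 5.0)  # starts higher, but also diminishes

def allocate(budget: float, step: float = 0.01) -> dict:
    """Spend the budget in small steps, each step going to the cause
    with the highest marginal utility per dollar at its current funding."""
    spent = {"A": 0.0, "B": 0.0}
    curves = {"A": mu_a, "B": mu_b}
    for _ in range(int(budget / step)):
        best = max(spent, key=lambda c: curves[c](spent[c]))
        spent[best] += step
    return spent

alloc = allocate(100.0)
# Diminishing returns drive the two causes' marginal MU/$ together.
assert abs(mu_a(alloc["A"]) - mu_b(alloc["B"])) < 0.1
```

Cause B soaks up most of the budget because it starts out more cost-effective, but at the margin the two causes end up equally good, as in the equalization argument above.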

(2) With this setup, we can clearly see how MU/$ depends on context (in particular, resources spent). To make up a hypothetical example, AI risk might have had the highest MU/$ in 2013, but the funding boost from OpenAI pushed it down the tractability curve to a lower value of MU/$. Hence, claims about “cause C is the highest priority” should be framed as “cause C is the highest priority, given current funding levels”. We should expect the “best” cause (defined as highest MU/$) to change over time as spending changes, which we could indicate by using a time subscript, (MU/$)_t.

(3) This model also incorporates Joey Savoie’s argument about using the limiting factor instead of importance. Here, a limiting factor would show up as strongly diminishing returns in the tractability function at some level of spending. That is, the percent of the problem solved per dollar would drop off sharply after spending some level of resources on the problem.

(4) The systemic change critique argues that the standard cause prioritization framework cannot handle increasing marginal returns. For example, large-scale political reform yields no results until a critical mass is reached and massive change occurs. But in fact this is easily modeled as a tractability function (Fig. 1) that is increasing for some part of its domain. That is, when nearing the critical mass, each additional dollar solves a larger percent of the problem than the previous dollar. While this case requires a different decision rule than “allocate resources to the cause with the highest MU/$”, it is a straightforward extension of the standard model.

## Conclusion

I propose a model of cost-effectiveness using Importance, Tractability, and Crowdedness. Tractability is a function of crowdedness, and multiplying importance and tractability gives us marginal utility per dollar. So is the 80k model wrong? No. I simply find it more intuitive to think about tractability as “% of problem solved / extra $” instead of “% of problem solved / % increase in resources”, and this is the resulting model.

Notes

[1] Also, the Neglectedness term “% increase in resources / extra $” is always equal to 1/resources, which seems a bit redundant. That is, given resources R, an extra dollar always increases your resources by 1/R. E.g., given $100, an extra dollar increases your resources by 1%.
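A quick check of this footnote’s arithmetic (the resource levels are chosen arbitrarily):

```python
# One extra dollar on top of R dollars is a 1/R increase in resources,
# i.e. a (100/R) percent increase.
for resources in (100, 10_000, 1_000_000):
    pct_increase = 100 / resources
    print(f"${resources:,}: +{pct_increase}% per extra dollar")
```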

[2] This seems to be a definitional issue: we can define importance as a constant, so that “utility gained / % of problem solved” is a constant function of “% of problem solved”. That is, solving 1% of the problem just means gaining 1% of the total utility from solving the entire problem.

• Nice article Michael. Improvements to EA cause prioritization frameworks can be quite beneficial and I’d like to see more articles like this.

One thing I focus on when trying to make ITC more practical is ways to reduce its complexity even further. I do this by looking for which factors intuitively seem to have wider ranges in practice. Importance can vary by factors of millions or trillions, from harmful to helpful, from negative billions to positive billions. Tractability can vary by factors of millions, from negative millionths to positive digits. The Crowdedness component (which captures diminishing or increasing marginal returns) generally varies only by factors of thousands, from negative tens to positive thousands.

In summary, the ranges are intuitively roughly:

• Importance (util/%progress): (-10^9, 10^9)

• Tractability (%progress/$): (-10^-6, 1)

• Crowdedness adjustment factor ($/$in): (-10, 10^3)

Let’s assume each intervention comes with random samples from probability distributions over these ranges. Roughly speaking, then, we should care about these factors based on the degree to which they help us clearly see which intervention is better than another. The extent to which they let us distinguish between the value of interventions depends on our uncertainty per factor for each intervention and on how the value depends on each factor. Because the value is equal to Importance*Tractability*CrowdednessAdjustmentFactor, each factor is treated the same (there is abstract symmetry). Thus we only need to consider how big each factor’s range is in terms of our typical intervention factor uncertainty. This then tells us how useful each factor is at distinguishing interventions.

Pulling numbers out of the intuitive hat for the typical intervention uncertainty, I get:

• Importance (util/%progress uncertainty unit): 10

• Tractability (%progress/$ uncertainty unit): 10^-6

• Crowdedness adjustment factor ($/$in uncertainty unit): 1

Dividing the ranges into these units lets us measure the distinguishing power of each factor:

• Importance normalized range (distinguishing units): 10^8

• Tractability normalized range (distinguishing units): 10^6

• Crowdedness adjustment factor normalized range (distinguishing units): 10^3
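The normalized ranges above can be sketched directly; all ranges and uncertainties are the intuitive guesses from the bullets, not measured values:

```python
# Each factor's intuitive range, divided by the typical per-intervention
# uncertainty for that factor, gives its "distinguishing power".
ranges = {  # (low, high) guesses for each factor
    "importance": (-1e9, 1e9),        # util / %progress
    "tractability": (-1e-6, 1.0),     # %progress / $
    "crowdedness_adj": (-10.0, 1e3),  # $ / $in
}
uncertainty = {  # typical per-intervention uncertainty, same units
    "importance": 10.0,
    "tractability": 1e-6,
    "crowdedness_adj": 1.0,
}

distinguishing_units = {
    name: (hi - lo) / uncertainty[name] for name, (lo, hi) in ranges.items()
}

# Importance distinguishes interventions most finely, then tractability,
# then the crowdedness adjustment (roughly 10^8 vs 10^6 vs 10^3 units).
assert (distinguishing_units["importance"]
        > distinguishing_units["tractability"]
        > distinguishing_units["crowdedness_adj"])
```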

As a rule of thumb, then, it looks like focusing on Importance is better than focusing on Tractability, which is better than focusing on Crowdedness. This lends itself to a sequence of improving heuristics for comparing the value of interventions:

• Importance only

• Importance and Tractability

• The full ITC framework

(The above analysis is only approximately correct and will depend on details like the precise probability distribution over interventions you’re comparing and your uncertainty distributions over interventions for each factor.

The ITC framework can be further extended in several ways, like: making precise the intervention curves for the factors of ITC, extending the analysis of resources to other possible bottlenecks like time and people, incorporating the ideas of comparative advantage and marketplaces, …. I hope someone does this!)

(PS I’m thinking of making this into a short post and I enjoy writing collaborations, so if someone is interested send me an EA forum message.)

• Hi Justin, thanks for the comment.

I’m in favor of reducing the complexity of the framework, but I’m not sure if this is the right way to do it. In particular, estimating “importance only” or “importance and tractability only” isn’t helpful, because all three factors are necessary for calculating MU/$. A cause that scores high on I and T could be low MU/$ overall, due to being highly crowded. Or is your argument that the variance (across causes) in crowdedness is negligible, and therefore we don’t need to account for diminishing returns in practice?

• My argument is about the latter; the variances decrease in size from I to T to C. The unit analysis still works because the other parts are still implicitly there but treated as constants when dropped from the framework.

• I guess I’m expecting diminishing returns to be an important factor in practice, so I wouldn’t place much weight on an analysis that excludes crowdedness.

• I think some images don’t display for me. This is what it looks like for me:

For future reference, this is what worked for me, using Dropbox:

• Open in incognito browser (regular browser doesn’t work)

• I still can’t see them. This is what it looks like now.

As mentioned here, copying images from a Google Doc and pasting them seems to work reliably.

It would be good if there were more visible guides on how to post, as discussed in that thread.

• The google docs method worked, but you can’t control image size.

I’m now using imgur, which should be recommended somewhere here for authors.

• Clicking on ‘Open Image in New Tab’ indicates that the image is hosted by Google Photos, so I suspect the privacy settings are preventing us from seeing them. Maybe Google read Rob’s angry post and have now taken things to the other extreme. :P

• None of the images display for me either. This is what it looks like for me:

> Let’s see how this works graphically. First, we start with tractability as a function of dollars (crowdedness), as in Figure 1. With diminishing marginal returns, “% solved/$” is decreasing in resources. Next, we multiply tractability by importance to obtain MU/$ as a function of resources, in Figure 2. Assuming that Importance = “utility gained/% solved” is a constant[2], all this does is change the units on the y-axis, since we’re multiplying a function by a constant.

> Now we can clearly see the amount of good done for an additional dollar, for every level of resources invested. To decide whether we should invest more in a cause, we calculate the current level of resources invested, then evaluate the MU/$ function at that level of resources. We do this for all causes, and allocate resources to the highest MU/$ causes, ultimately equalizing MU/$ across all causes as diminishing returns take effect. (Note the similarity to the utility maximization problem from intermediate microeconomics, where you choose consumption of goods to maximize utility, given their prices and subject to a budget constraint.)

• Update: The pictures load for me now

• Michael, thanks for this post. I have been following the discussion about INT and prioritisation frameworks with interest.

Exactly how should I apply the revised framework you suggest? There are a number of equations, discussions of definitions and circularities in this post, but a (hypothetical?) worked example would be very useful.

• Yes, the difficult part is applying the ITC framework in practice; I don’t have any special insight there. But the goal is to estimate importance and the tractability function for different causes.

You can see how 80k tries to rank causes here.