Updates from the Global Priorities Institute and how to get involved

The Global Priorities Institute (GPI) conducts research in philosophy and economics on how to do the most good. GPI aims to found global priorities research as an academic field, in order to enable policy makers and intergovernmental bodies around the world to make decisions based on what will improve the world most. You can learn more about GPI in Hilary Greaves’ recent episode of the 80,000 Hours podcast, Michelle Hutchinson’s earlier episode, or Michelle’s EAG talk.

GPI is looking for a Head of Research Operations to lead its next stage of growth. Finding the right person for this role is crucial to GPI achieving its mission. If it sounds interesting to you, please consider applying!

Recent Progress

  • GPI officially became an institute within the University of Oxford in January this year. We’ve had initial success in building a strong research team in philosophy, in producing papers, and in building relationships with academics based at other institutions. In the coming year we are prioritising building up the economics side of GPI, which has been more challenging because GPI was founded by philosophers.

  • Will MacAskill recently gave the keynote talk at the 2018 Conference for the International Society for Utilitarian Studies, which covered a major part of GPI’s research agenda. The full talk is available here.

  • We’ve set up two distinguished lecture series in Economics and Philosophy. The inaugural Atkinson Memorial Lecture was given by Professor Yew-Kwang Ng. The inaugural Parfit Memorial Lecture will take place in early 2019 and will be given by Associate Professor Lara Buchak, who is known for her work on risk aversion.

  • We’ve run our first summer programme for early-career researchers (Summer Research Visitor Programme), which is intended to attract top early-career researchers and graduate students to our research interests. We’ve already filled all of the slots for philosophers on our 2019 visitor programme, but still have some slots remaining for economists.

  • We’ve set up the Parfit and Atkinson Scholarship programmes for graduate students in economics and philosophy to come to Oxford and work on global priorities research topics through the DPhil programme. We’re also offering prizes for students already studying for a DPhil at Oxford (in either economics or philosophy) to do the same.

Our Research

At the end of last year, we released our research agenda, which lays out a preliminary sketch of the topics which we think are most important, neglected, and tractable for GPI to work on. We will soon be releasing a second version of the research agenda, drafted with GPI’s new economics team.

In carrying out research, we’ve tested a novel model for academic institutes using collaborative working groups. Our researchers meet to discuss and brainstorm a particular topic from the research agenda. Based on these brainstorms, researchers draw up a prioritised list of possible research articles. This allows us to canvass a wide array of potential ideas and then prioritise those which are most promising and which will be most impactful (in terms of engaging other academics and also of providing value to the EA community). So far, we’ve found this model to be more efficient at identifying the best avenues of research and producing high-quality papers, and we plan to continue using it for the foreseeable future.

In 2018, our researchers focussed on the following topics in their working groups:

  • Long-termism: the view that the primary determinant of the moral value of our actions today is the effect of those actions on the very long-run future. What are the most compelling objections to long-termism? If we exclude actions which reduce existential risk, is it still true, or should we expect the consequences of our actions to wash out in the long run?

  • Extinction risk and risk/ambiguity aversion: Should agents who prefer prospects with less risk (or ambiguity) prioritise short-term, highly certain interventions (such as in global health) over longer-term, highly uncertain interventions such as those which mitigate existential risk? This working group has already generated a paper by Andreas Mogensen, which shows that this depends on whether the agent is concerned just with the impact of their actions or with the total value in the world.

  • Fanaticism: allowing decisions to be determined by small probabilities of extremely large payoffs (or extremely dire consequences). This may be seen as an objection to the use of expected value. But justifications of existential risk mitigation rely on the use of expected value, and sometimes even appear to endorse fanaticism. Is this a problem for those justifications? Should we endorse a principle of ‘timidity’? Teruji Thomas is currently writing up our findings on this. (An illustrative expected-value comparison follows this list.)

  • Indirect effects: The cost-effectiveness of charitable interventions is typically evaluated by comparing the intervention’s direct benefits, and only its direct benefits, to its costs. How do our evaluations change when we incorporate indirect effects? Could indirect effects be the most important determinant of the moral value of most actions?

  • Deliberation ladders: Suppose you have undergone a series of significant changes in your moral views. Should you expect to undergo further changes and, if so, how should you act now? Should we be far less confident in our moral views than we are?

  • Donor coordination: Given multiple actors deciding how to distribute resources for altruistic purposes, how will they, and how should they, act? How can we use donor coordination strategies to leverage more donations to effective causes and to reduce spending in zero-sum games such as political campaign funding?

  • Long-run economic growth: How is standard growth theory altered when we consider the catastrophic risks of new technologies rather than just the increases in consumption they cause? Given these risks, what is the optimal rate of growth? And what can we say about the optimal rate of growth in any given country, when growth in one country imposes risks on other countries?
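
To make the fanaticism worry above concrete, here is a purely illustrative expected-value comparison; the numbers are hypothetical and are not drawn from GPI’s work. Suppose a gamble offers a one-in-ten-billion chance of an outcome worth $10^{16}$ lives, while a safe option saves $10^{3}$ lives for certain. Then

$$10^{-10} \times 10^{16} = 10^{6} > 1 \times 10^{3},$$

so unrestricted expected-value maximisation recommends the long-shot gamble. Whether that verdict should be accepted, or instead tempered by a principle of ‘timidity’, is the question the working group is addressing.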

Join GPI

GPI is currently hiring for a new Head of Research Operations.

The Head of Research Operations role is central to GPI, and necessary for making GPI’s vision a reality. The post-holder will manage all operational aspects of GPI and will have a great deal of autonomy. The key responsibilities are:

  • Helping to develop GPI’s long-term strategy and plan the Institute’s activities over the coming years, e.g., seminars, visitor programmes, scholarships, and conferences.

  • Doing the necessary logistical work to make those activities happen.

  • Managing communications—representing GPI externally, promoting global priorities research to academics, and presenting GPI’s work to public audiences.

  • Fundraising, particularly from private donors.

  • Managing GPI’s finances.

  • Recruiting and managing a larger operational team to share these responsibilities as GPI continues to grow.

We’re looking for someone with an analytic and entrepreneurial mindset, a demonstrated track record of independently planning and managing complex projects, excellent oral communication skills, and experience of working well in a team.

If you’re interested in learning more about the role, you can find more detail on what the role involves, what we’re looking for and how to apply here.


In addition to the Head of Research Operations role, there are opportunities for academics to get involved with GPI through our scholarships, prizes, and visitor programme. You can see the full list of opportunities we have open at any time here.


Appendix—GPI’s current working papers

Here’s a snapshot of some of the papers the GPI team is currently working on.

Andreas Mogensen—Long-termism for risk-averse altruists (full paper)

Abstract:

According to Long-termism, altruistic agents should try to beneficially influence the long-run future, as opposed to aiming at short-term benefits. The likelihood that I can significantly impact the long-term future of humanity is arguably very small, whereas I can be reasonably confident of achieving significant short-term goods. However, the potential value of the far future is so enormous that even an act with only a tiny probability of preventing an existential catastrophe should apparently be assigned much higher expected value than an alternative that realizes some short-term benefit with near certainty. This paper explores whether agents who are risk averse should be more or less willing to endorse Long-termism, looking in particular at agents who can be modelled as risk avoidant within the framework of risk-weighted expected utility theory. I find that risk aversion may be more friendly to Long-termism than risk neutrality. However, I find that there is some reason to suppose that ambiguity aversion disfavours Long-termism.
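
For readers unfamiliar with the framework, Buchak-style risk-weighted expected utility evaluates a gamble by weighting utility increments with a risk function applied to decumulative probabilities. The following is a standard statement of the functional, given here only as background; the notation is ours, not the paper’s:

$$\mathrm{REU}(f) \;=\; u(x_1) \;+\; \sum_{i=2}^{n} r\!\Big(\textstyle\sum_{j=i}^{n} p_j\Big)\,\big[u(x_i) - u(x_{i-1})\big],$$

where the gamble $f$ yields outcome $x_j$ with probability $p_j$, outcomes are ordered so that $u(x_1) \le \dots \le u(x_n)$, and $r\colon [0,1] \to [0,1]$ is increasing with $r(0)=0$ and $r(1)=1$. Risk-avoidant agents have a convex risk function (for example $r(p)=p^2$), which down-weights improbable gains relative to ordinary expected utility; $r(p)=p$ recovers expected utility theory.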

Christian Tarsney—Exceeding expectations: Stochastic dominance as a general decision theory (full paper)

Abstract:

The principle that rational agents should maximize expectations is intuitively plausible with respect to many ordinary cases of decision-making under uncertainty. But it becomes increasingly implausible as we consider cases of more extreme, low-probability risk (like Pascal’s Mugging), and intolerably paradoxical in cases like the St. Petersburg Lottery and the Pasadena Game. In this paper I show that, under certain assumptions, stochastic dominance reasoning can capture many of the plausible implications of expectational reasoning while avoiding its implausible implications. More specifically, when an agent starts from a condition of background uncertainty about the choiceworthiness of her options representable by a probability distribution over possible degrees of choiceworthiness with exponential or heavier tails and a sufficiently large scale parameter, many expectation-maximizing gambles that would not stochastically dominate their alternatives “in a vacuum” turn out to do so in virtue of this background uncertainty. Nonetheless, even under these conditions, stochastic dominance will generally not require agents to accept extreme gambles like Pascal’s Mugging or the St. Petersburg Lottery. I argue that the sort of background uncertainty on which these results depend is appropriate for any agent who assigns normative weight to aggregative consequentialist considerations, i.e., who measures the choiceworthiness of an option in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
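
As background for this abstract (a textbook definition, not a result of the paper): an option $A$ first-order stochastically dominates an option $B$ when, for every level of choiceworthiness, $A$ is at least as likely as $B$ to do at least that well, and strictly more likely for some level:

$$\Pr\big[\mathrm{CW}(A) \ge t\big] \;\ge\; \Pr\big[\mathrm{CW}(B) \ge t\big] \ \text{ for all } t, \quad \text{with strict inequality for some } t.$$

Dominated options can be rejected without assigning precise expected values to anything, which is what makes stochastic dominance a candidate for a weaker, more general decision principle than expectation-maximization.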

Rossa O’Keeffe-O’Donovan—Water, spillovers and free riding: Provision of local public goods in a spatial network (full paper)

Abstract:

Both state and non-governmental organizations provide public goods in developing countries, potentially generating inefficiencies where they lack coordination. In rural Tanzania, more than 500 organizations have installed hand-powered water pumps in a decentralized fashion. I estimate the costs of this fragmented provision by studying how communities’ pump maintenance decisions are shaped by strategic interactions between them. I model the maintenance of pumps as a network game between neighboring communities, and estimate this model using geo-coded data on the location, characteristics and functionality of water sources, and human capital outcomes. Estimation combines maximum simulated likelihood with a clustering algorithm that partitions the data into geographic clusters. Using exogenous variation in the similarity of water sources to identify spillover and free riding effects between communities, I find evidence of maintenance cost-reduction spillovers among pumps of the same technology and strong water source free-riding incentives. As a result, standardization of pump technologies would increase pump functionality rates by 6 percentage points. Moreover, water collection fees discourage free riding and would increase pump functionality rates by 11 percentage points if adopted universally. This increased availability of water would have a modest positive effect on child survival and school attendance rates.

Andreas Mogensen—Meaning, medicine, and merit (full paper)

Abstract:

Given the inevitability of scarcity, should public institutions ration healthcare resources so as to prioritize those who contribute more to society? Intuitively, we may feel that this would be somehow inegalitarian. I begin by showing that it is surprisingly hard to substantiate this belief. I then argue that the egalitarian objection to prioritizing treatment on the basis of patients’ usefulness to others is best thought of as semiotic: i.e., as having to do with what this practice would mean, convey, or express about each person’s standing. I explore the implications of this conclusion when taken in conjunction with the observation that semiotic objections are generally flimsy, failing to identify anything wrong with a practice as such and having limited capacity to generalize beyond particular contexts. In particular, I consider the implications for evaluating rationing decisions concerning life and health in the sphere of private philanthropy, where donors might wish to give preference to beneficiaries with greater instrumental value to others.

Philip Trammell—Fixed-point solutions to the regress problem in normative uncertainty (full paper)

Abstract:

When we are faced with a choice among acts, but are uncertain about the true state of the world, we may be uncertain about the acts’ “choiceworthiness”. Decision theories guide our choice by making normative claims about how we should respond to this uncertainty. If we are unsure which decision theory is correct, however, we may remain unsure of what we ought to do. Given this decision-theoretic uncertainty, meta-theories attempt to resolve the conflicts between our decision theories… but we may be unsure which meta-theory is correct as well. This reasoning can launch a regress of ever-higher-order uncertainty, which may leave one forever uncertain about what one ought to do. There is, fortunately, a class of circumstances under which this regress is not a problem. If one holds a cardinal understanding of subjective choiceworthiness, and accepts certain other criteria (which are too weak to specify any particular decision theory), one’s hierarchy of metanormative uncertainty ultimately converges to precise definitions of “subjective choiceworthiness” for any finite set of acts. If one allows the metanormative regress to extend to the transfinite ordinals, the convergence criteria can be weakened further. Finally, the structure of these results applies straightforwardly not just to decision-theoretic uncertainty, but also to other varieties of normative uncertainty, such as moral uncertainty.

Andreas Mogensen & Will MacAskill—The paralysis argument

Abstract:

This paper explores the difficulties that arise when we apply the Doctrine of Doing and Allowing (DDA) to the indirect and unforeseeable long-run consequences of our actions. Given some plausible empirical assumptions about the long-run impact of our actions, the DDA appears to entail that we should aim to do as little as possible because we cannot know the distribution of benefits and harms that result from our actions over the long term. We consider a number of objections to the argument and suggest what we think is the most promising response. This involves accepting a highly demanding morality of beneficence with a long-termist focus. This may be taken to represent a striking point of convergence between consequentialist and deontological moral theories.

Andreas Mogensen—Doomsday redux

Abstract:

This paper considers the argument that because we should regard it as a priori very unlikely that we are among the most important people who will ever exist, we should decrease our confidence in theories on which we are living during a period of high extinction risk that will be followed by a long period of high safety. This may involve substantially increasing our confidence that the human species will become extinct within the near future. The argument is a descendant of the Carter-Leslie Doomsday Argument. In showing why the latter argument fails, I argue that the former fails to inherit its defects, and should therefore be taken seriously even if we reject the Doomsday Argument.

Christian Tarsney—Metanormative regress: An escape plan

Abstract:

How should an agent decide what to do when she is uncertain about basic normative principles? Several philosophers have suggested that such an agent should follow some second-order norm: e.g., she should comply with the first-order normative theory she regards as most probable, choose the option that’s most likely to be objectively right, or maximize expected objective value. But such proposals face a potentially fatal difficulty: If an agent who is uncertain about first-order norms must invoke second-order norms to reach a rationally guided decision, then an agent who is uncertain about second-order norms must invoke third-order norms—and so on ad infinitum, such that an agent who is at least a little uncertain about any normative principle will never be able to reach a rationally guided decision at all. This paper tries to solve this “metanormative regress” problem. I first elaborate and defend Brian Weatherson’s argument that the regress problem forces us to accept the view he calls normative externalism, according to which some norms are incumbent on an agent regardless of her beliefs. But, contra Weatherson, I argue that we need not accept externalism about first-order (e.g. moral) norms, thus closing off any question of what an agent should do in light of her normative beliefs. Rather, it is more plausible to ascribe external force to a single, second-order rational norm: the enkratic principle, correctly formulated. In the second half of the paper, I argue that this modest form of externalism can solve the regress problem. More specifically, I distinguish two regress problems, afflicting ideal and non-ideal agents respectively, and offer solutions to both.

Christian Tarsney—Non-identity, times infinity

Abstract:

This paper describes a new difficulty for consequentialist ethics in infinite worlds. Although infinite worlds in and of themselves have been thought to challenge aggregative consequentialism, I begin by arguing that, for agents who can only make a finite difference to an infinite world, there is a simple principle (namely, to compare pairs of worlds by summing the differences in value realized at each possible value location) that yields all the conclusions an aggregative consequentialist would intuitively want. But, if the world is not merely infinite in spatial extent but contains infinitely many value-bearing entities in our causal future, then this principle breaks down. Specifically, because our choices are likely to be “identity-affecting” with respect to all or nearly all the value-bearing entities in our causal future, any two options in a given choice situation will result in worlds whose sum of value differences is non-convergent and hence undefined. There is an apparently natural anonymity principle that seemingly must be true if we are to make any comparisons at all between “infinite non-identity” worlds. But in combination with other very modest assumptions, this principle generates axiological cycles. From this cyclicity problem, I draw out several simple impossibility results suggesting that, if the population of the causal future is infinite, then we will have to pay a very high theoretical price to hang onto the idea that our actions matter from an impartial perspective.
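
The simple principle described in this abstract can be sketched as follows (our paraphrase, for illustration; $L$ is the set of value locations and $v_W(l)$ the value realized at location $l$ in world $W$):

$$W_1 \succcurlyeq W_2 \quad \text{iff} \quad \sum_{l \in L} \big( v_{W_1}(l) - v_{W_2}(l) \big) \;\ge\; 0,$$

which is well defined so long as the sum of differences converges. The difficulty the paper raises is that identity-affecting choices over infinitely many future value-bearers make that sum non-convergent, so the comparison comes out undefined.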

Christian Tarsney—Vive la différence? Structural diversity as a challenge for metanormative theories

Abstract:

How should agents decide what to do when they’re uncertain about basic normative principles? Most answers to this question involve some form of intertheoretic value aggregation, i.e., some way of combining the rankings of options given by rival normative theories into a single ranking that tells an agent what to do given her uncertainty. An important obstacle to any form of intertheoretic value aggregation, however, is the structural diversity of normative theories: The rankings given by first-order theories, which serve as inputs to intertheoretic aggregation, may have any number of structures, including ordinal, interval, ratio, multidimensional, and (I claim) many more. But it is often not obvious how to combine rankings with different structures. In this paper, I survey and evaluate three general approaches to this problem. Structural depletion solves the problem by stripping theories of all but some minimum, universal structure for purposes of aggregation. Structural enrichment, on the other hand, adds structure to theories, e.g. by mapping ordinal rankings onto a cardinal scale. Finally, multi-stage aggregation aggregates classes of identically-structured theories first, then takes the result as input to one or more further stages of aggregation that combine larger classes of more distantly related theories. I tentatively defend multi-stage aggregation as the least bad of these options, but all three approaches have serious drawbacks. This “problem of structural diversity” needs more attention, both since it represents a serious challenge to the possibility of intertheoretic aggregation and since whatever approach we adopt will substantively constrain other aspects of our metanormative theories.