A Long-run perspective on strategic cause selection and philanthropy

Co-written by Nick Beckstead and Carl Shulman

Introduction

A philanthropist who will remain anonymous recently asked us what we would do if we didn't face financial constraints. We gave a detailed answer that we thought we might as well share with others who may also find our perspective interesting. We gave the answer largely in the hope of creating some interest in our way of thinking about philanthropy and in some of the causes we find worth further investigation, and because we thought the answer would be fruitful for conversation.

Our honest answer to your question

Our honest answer to your question is that we would systematically examine a wide variety of causes and opportunities with the intention of identifying the ones which could use additional money and talent to produce the best long-run outcomes. This would look a lot like setting up a major foundation (unsurprising, given that many people in this situation do set up foundations), so we will concentrate on the distinguishing or less typical features of our approach:
  1. Unlike many foundations, we would place a great deal of emphasis on selecting the highest-impact program areas, rather than selecting program areas for other reasons and working hardest to find the best opportunities within those areas. Like GiveWell, we believe that the choice of program areas may be one of the most important decisions a major philanthropist makes and is consistently underemphasized.

  2. We would invest heavily in learning: funding systematic examination of the spectrum of opportunities, and transparent publication of our process and findings.

  3. In addition to sharing information about giving opportunities, we would share detailed information about talent gaps, encouraging people with the right abilities to seek out opportunities in promising areas that are constrained by people rather than money.

  4. We would measure impact primarily in terms of very long-run positive consequences for humanity, as outlined in Nick's PhD thesis.

  5. We would be skeptical of our intuitions, and check them through such means as external review, the collection of track records for our predictions, structured evaluations, and the use of simple and sophisticated methods of aggregating and improving on expert opinion (e.g. the forecasting training and aggregation methods developed by Philip Tetlock, calibration training, prediction markets, and anonymous surveys of appropriate experts).
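
To make the aggregation point concrete, here is a minimal sketch in Python of two simple pooling methods: a plain average of expert probabilities, and averaging in log-odds space followed by extremization, a technique studied in the forecasting literature. The expert probabilities and the extremization factor here are hypothetical, and this is a sketch of the general idea rather than a description of Tetlock's exact procedure:

```python
import math

def mean_pool(probs):
    """Aggregate by taking the arithmetic mean of expert probabilities."""
    return sum(probs) / len(probs)

def extremized_logodds_pool(probs, a=2.5):
    """Average forecasts in log-odds space, then extremize by factor a.

    Extremization pushes the pooled forecast away from 0.5, on the view
    that each expert holds only part of the available evidence.
    """
    logodds = [math.log(p / (1 - p)) for p in probs]
    pooled = a * sum(logodds) / len(logodds)
    return 1 / (1 + math.exp(-pooled))

# Hypothetical expert probabilities for a single yes/no question.
experts = [0.6, 0.7, 0.65, 0.8]
print(round(mean_pool(experts), 3))                # 0.688
print(round(extremized_logodds_pool(experts), 3))  # ~0.88, more confident
```

Extremization tends to help when experts rely on partly non-overlapping evidence, in which case the pooled forecast should be more confident than any individual forecast.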

We understand that you probably aren't contacting us about setting up a foundation, though you might be interested in hearing more about the approach and assumptions above, so we'll say a few things about how we would go about strategically selecting causes, and our leading hypotheses about which causes are most promising to investigate further.

Briefly,

  1. We believe that maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity. We think this has significant practical implications when making trade-offs between short-term welfare on the one hand and, on the other, the broad functioning of society, our ability to face major global challenges and opportunities, and society's resilience to global catastrophes.

  2. Five causes we are interested in investigating first are immigration reform, methods for improved forecasting, an area we call “philanthropic infrastructure,” catastrophic risks to humanity, and meta-research. These would be areas for investigation and experimentation, and we would pursue them in the short run primarily for the sake of gaining information about how attractive they are in comparison with other areas. There are many other causes we would like to investigate early on, and we would begin investigating those causes less deeply and in parallel with our investigations of the causes we are most enthusiastic about. We'd be happy to discuss the other causes with you as well.

We elaborate on these ideas below.

Is the long run actionable in the short run?

As just mentioned, we believe that maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity, and that this has strategic implications for people aiming to maximize good accomplished with their resources. We think these implications are significant when choosing between causes or program areas, and less significant when comparing opportunities within program areas.

There is a lot of detail behind this perspective, and it is hard to summarize briefly. But here is an attempt to quickly explain our reasoning:

  1. We think humanity has a reasonable probability of lasting a very long time, becoming very large, and/or eventually enjoying a very high quality of life. This could happen through radical (or even moderate) technological change, if industrial civilization persists as long as agriculture has persisted (though upper limits for life on Earth are around a billion years), or if future generations colonize other regions of space. Though we wouldn't bet on very specific details, we think some of these possibilities have a reasonable probability of occurring.

  2. Because of this, we think that, from an impartial perspective, almost all of the potential good we can accomplish comes through influencing very long-run outcomes for humanity (see the illustrative calculation after this list).

  3. We believe long-run outcomes may be highly sensitive to how well humanity handles key challenges and opportunities, especially challenges from new technology, in the next hundred years or so.

  4. We believe that (especially with substantial resources) we could have small but significant positive impacts on how effectively we face these challenges and opportunities, and thereby affect expected long-run outcomes for humanity.

  5. We could face these challenges and opportunities more effectively by preparing for specific challenges and opportunities (such as nuclear security and climate change in the past and present, and advances in synthetic biology and artificial intelligence in the future), or by enhancing humanity's general capacities to deal with these challenges and opportunities when we face them (through higher rates of economic growth, improved political coordination, improved use of information and decision-making for individuals and groups, and increases in education and human capital).
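
To make point 2 above concrete, here is the illustrative calculation referred to earlier, a back-of-the-envelope sketch in Python. Every number in it (the survival probability, the time horizon, the population figures) is an assumption chosen purely for illustration, not an estimate we defend:

```python
# All numbers below are illustrative assumptions, not estimates we defend.
p_long_future   = 0.01   # assumed chance civilization lasts ~1 billion years
years_remaining = 1e9    # rough upper limit for life on Earth
avg_population  = 1e10   # assumed average population per future year

expected_future_person_years = p_long_future * years_remaining * avg_population
this_century_person_years = 100 * 7e9  # ~100 years at roughly 7 billion people

print(f"{expected_future_person_years:.1e}")  # 1.0e+17
print(f"{this_century_person_years:.1e}")     # 7.0e+11
print(round(expected_future_person_years / this_century_person_years))  # ~142857
```

Even after discounting heavily for the probability of a long future, expected future person-years exceed this century's by several orders of magnitude, which is why small changes to long-run outcomes can dominate an impartial calculation.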

We believe that this perspective diverges from the recommendations of a more short-run focus in a few ways.

First, when we consider attempts to prepare for global challenges and opportunities in general, we weigh such factors as economic output, log incomes, education, quality-adjusted life-years (QALYs), scientific progress, and governance quality differently than we would if we put less emphasis on long-run outcomes for humanity. In particular, a more short-term focus would lead to a much stronger emphasis on QALYs and log incomes, which we suspect could be purchased more cheaply through interventions targeting people in developing countries, e.g. through public health or more open migration. Attending to long-run impacts creates a closer contest between such interventions and those which increase economic output or institutional quality (and thus the quality of our response to future challenges and opportunities). Our perspective would place an especially high premium on intermediate goals such as the quality of forecasting and the transmission of scientific knowledge to policymakers, which are disproportionately helpful for navigating global challenges and opportunities.
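
To see why a short-term focus emphasizes log incomes, here is a minimal sketch in Python. The income levels and transfer size are hypothetical; the point is only that, under logarithmic utility, the same dollar buys far more welfare at low incomes:

```python
import math

def log_income_gain(income, transfer):
    """Welfare gain from a cash transfer, assuming log utility of income."""
    return math.log(income + transfer) - math.log(income)

# The same hypothetical $1,000 transfer at two income levels.
poor_gain = log_income_gain(1_000, 1_000)   # ln(2)    ~= 0.693
rich_gain = log_income_gain(50_000, 1_000)  # ln(1.02) ~= 0.020

print(round(poor_gain / rich_gain))  # ~35x the welfare gain at the low income
```

This is the standard reason a short-term welfare focus points toward the global poor; the long-run perspective does not deny it, but weighs it against effects on output and institutions.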

Second, when we can identify specific major challenges or opportunities for affecting long-run outcomes for humanity, our perspective favors treating them with the utmost seriousness. We believe that reducing the risk of catastrophes with the potential to destroy humanity (which we call “global catastrophic risks” or sometimes “existential risks”) has an unusually clear and positive connection with long-run outcomes, and this is a reason we are unusually interested in problems in this area.

Third, the long-run perspective values resilience against permanent disruption or worsening of civilization over and above resilience to short-term catastrophe. From a long-run perspective, there is an enormous difference between a collapse of civilization followed by eventual recovery and a permanent collapse of civilization. This point has been made by philosophers like Derek Parfit (very memorably at the end of his book Reasons and Persons) and Peter Singer (in a short piece he wrote with Nick Beckstead and Matt Wage).

Five causes we would like to investigate more deeply

Immigration reform

What it is: By “immigration reform,” we mean loosening immigration restrictions in rich countries with stronger political institutions, especially for people who are migrating from poor countries with weaker political institutions. We include both efforts to allow more high-skill immigration and efforts to allow more immigration in general. Some people to talk to in this area include Michael Clemens, Lant Pritchett, and others at the Center for Global Development. Fwd.us and the Krieble Foundation are two examples of organizations working in this area.

Why we think it is promising: Many individual workers in poor countries could produce much more economic value and better realize their potential in other ways if they lived in rich countries, meaning that much of the world's human capital is being severely underutilized. This claim is unusually well supported by basic economic theory and the views of a large majority of economists. Many concerns have been raised, but we think the most plausible ones involve political feasibility and the political and cultural consequences of migration.

Philanthropic infrastructure

What it is: By “philanthropic infrastructure,” we mean activities that expand the flexible capabilities of those trying to do good in a cause-neutral, outcome-oriented way. Some organizations in this area we are most familiar with include the charity evaluator GiveWell, donation pledge organizations (Giving What We Can, The Life You Can Save, the Giving Pledge), and 80,000 Hours (an organization that provides information to help people make career choices that maximize their impact). There are many examples we are less familiar with, such as the Bridgespan Group and the Center for Effective Philanthropy. (Disclosure: Nick Beckstead is on the board of trustees for the Centre for Effective Altruism, which houses Giving What We Can, The Life You Can Save, and 80,000 Hours, though The Life You Can Save is substantially independent.)

Why we think it is promising: We are interested in this area because we want to build up resources which are flexible enough to ultimately support the causes and opportunities that are later found to be the most promising, and because we see a lot of growth in this area and think early investments may result in more money and talent available for very promising opportunities later on.

Methods for improved forecasting

What it is: Forecasting is challenging, and very high accuracy is difficult to obtain in many of the domains of greatest interest. However, a number of methods have been developed to improve forecasting accuracy through training, aggregation of opinion, incentives, and other means. Some examples include expert judgment aggregation algorithms, probability and calibration training, and prediction markets. We are excited about recent progress in this area in a prediction tournament sponsored by IARPA, which Philip Tetlock's Good Judgment Project is currently winning.

Why we think it is promising: Improved forecasting could be useful in a wide variety of political and business contexts. Improved forecasting over a period of multiple years could improve overall preparedness for many global challenges and opportunities. Moreover, strong evidence of the superior performance of some methods of forecasting over others could help policymakers base decisions on the best available evidence. We currently have limited information about room for more funding for existing organizations in this area.
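
As an illustration of what calibration training targets, here is a minimal sketch in Python that scores a hypothetical track record by grouping a forecaster's stated probabilities and comparing each group with its realized frequency; a well-calibrated forecaster's 70% predictions should come true about 70% of the time. The track record below is made up for illustration:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Group forecasts by stated probability (rounded to one decimal place)
    and report the realized frequency of events in each group."""
    buckets = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        buckets[round(p, 1)].append(happened)
    return {p: round(sum(hs) / len(hs), 2) for p, hs in sorted(buckets.items())}

# Hypothetical track record: 1 = the predicted event happened, 0 = it didn't.
forecasts = [0.9, 0.9, 0.7, 0.7, 0.7, 0.3, 0.3, 0.1]
outcomes  = [1,   1,   1,   1,   0,   0,   1,   0]
print(calibration_table(forecasts, outcomes))
# {0.1: 0.0, 0.3: 0.5, 0.7: 0.67, 0.9: 1.0} -- the 0.3 forecasts came true
# half the time, a sign of miscalibration on this (tiny) sample.
```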

Global catastrophic risk

What it is: Opportunities in this area focus on identifying and mitigating specific threats of human extinction, such as large asteroid impacts and tail risks of climate change and nuclear winter. Examples of interventions in this category include tracking asteroids (which has largely been completed for asteroids that threaten civilization, though not for comets), improving the resilience of the food supply through cellulose-to-food conversion, disease surveillance (for natural or man-made pandemics), advocacy for non-proliferation of nuclear weapons, and research on other possible risks and methods for mitigating them. An unusual view we take seriously is that some of the most significant risks in this area will come from new technologies that may emerge this century, such as advanced artificial intelligence and advanced biological weapons. (We also believe technologies of this type have massive upside potential, which must be thought about carefully as we think about the risks.) Notable defenders of views in this vicinity include Martin Rees, Richard Posner, and Nick Bostrom. (Disclosure: Nick Bostrom is the Director of the Future of Humanity Institute, where Nick Beckstead is a research fellow and Carl Shulman is a research associate.)

Why we think it is promising: Progress in this area has a clear relationship with long-run outcomes for humanity. There have been some very good buys in this area in the past, such as early asteroid tracking programs. Apart from climate change, this area receives only around 0.1% of total foundation spending, and little of that carefully distinguishes between large catastrophes and catastrophes with the potential to significantly change long-run outcomes for humanity.

Meta-research

What it is: We will make use of GiveWell's explanation of the cause area here and here.

Why we think it is promising: We believe that many improvements in meta-research can accelerate scientific progress and make it easier for non-experts to discern what is known in a field. We believe this is likely to systematically improve our ability to navigate global challenges and opportunities. From a long-run perspective, the importance of the different impacts of meta-research diverges from a short-term analysis: for example, the degree to which policymakers can understand the state of scientific knowledge at any given level of progress looms larger in comparison to the simple acceleration of progress.