Doing good while clueless

This is the fourth (and final) post in a series exploring consequentialist cluelessness and its implications for effective altruism:

  • The first post describes cluelessness & its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.

  • The second post considers a potential reply to concerns about cluelessness.

  • The third post examines how tractable cluelessness is – to what extent we can grow more clueful about an intervention through intentional effort.

  • This post discusses how we might do good while being clueless to an important extent.

Consider reading the previous posts (1, 2, 3) first.


The last post looked at whether we could grow more clueful by intentional effort. It concluded that, for the foreseeable future, we will probably remain clueless about the long-run impacts of our actions to a meaningful extent, even after taking measures to improve our understanding and foresight.

Given this state of affairs, we should act cautiously when trying to do good. This post outlines a framework for doing good while being clueless, then looks at what this framework implies about current EA cause prioritization.

The following only makes sense if you already believe that the far future matters a lot; this argument has been made elegantly elsewhere, so we won’t rehash it here.[1]

An analogy: interstellar travel

Consider a spacecraft, journeying out into space. The occupants of the craft are searching for a star system to settle. Promising destination systems are all very far away, and the voyagers don’t have a complete map of how to get to any of them. Indeed, they know very little about the space they will travel through.

To have a good journey, the voyagers will have to successfully steer their ship (both literally & metaphorically). Let’s use “steering capacity” as an umbrella term for the capacity needed to have a successful journey.[2] “Steering capacity” can be broken down into the following five attributes:[3]

  • The voyagers must have a clear idea of what they are looking for. (Intent)

  • The voyagers must be able to reach agreement about where to go. (Coordination)

  • The voyagers must be discerning enough to identify promising systems as promising when they encounter them. Similarly, they must be discerning enough to accurately identify threats & obstacles. (Wisdom)

  • Their craft must be powerful enough to reach the destinations they choose. (Capability)

  • Because the voyagers travel through unmapped territory, they must be able to see far enough ahead to avoid obstacles they encounter. (Predictive power)

This spacecraft is a useful analogy for thinking about our civilization’s trajectory. Like us, the space voyagers are somewhat clueless – they don’t know quite where they should go (though they can make guesses), and they don’t know how to get there (though they can plot a course and make adjustments along the way).

The five attributes given above – intent, coordination, wisdom, capability, and predictive power – determine how successful the space voyagers will be in arriving at a suitable destination system. These same attributes can also serve as a useful framework for considering which altruistic interventions we should prioritize, given our present situation.

The basic point

The basic point here is that interventions whose main known effects do not improve our steering capacity (i.e. our intent, coordination, wisdom, capability, and predictive power) are not as important as interventions whose main known effects do improve these attributes.

An implication of this is that interventions whose effectiveness is driven mainly by their proximate impacts are less important than interventions whose effectiveness is driven mainly by increasing our steering capacity.

This is because any action we take will have indirect & long-run consequences that bear on our civilization’s trajectory. Many of these long-run consequences are unknown, so the future is unpredictable. Therefore, we ought to prioritize interventions that improve the wisdom, capability, and coordination of future actors, so that they are better positioned to address future problems that we did not foresee.

What being clueless means for altruistic prioritization

I think the steering capacity framework implies a portfolio approach to doing good – simultaneously pursuing a large number of diverse hypotheses about how to do good, provided that each approach maintains reversibility.[4]

This approach is similar to the Open Philanthropy Project’s hits-based giving framework – invest in many promising initiatives with the expectation that most will fail.

Below, I look at how this framework interacts with focus areas that effective altruists are already working on. Other causes that EA has not looked into closely (e.g. improving education) may also perform well under this framework; assessing causes of this sort is beyond the scope of this essay.

My thinking here is preliminary, and very probably contains errors & oversights.

EA focus areas to prioritize

Broadly speaking, the steering capacity framework suggests prioritizing interventions that:[5]

  • Further our understanding of what matters

  • Improve governance

  • Improve prediction-making & foresight

  • Reduce existential risk

  • Increase the number of well-intentioned, highly capable people

To prioritize – better understanding what matters

Increasing our understanding of what’s worth caring about is important for clarifying our intentions about what trajectories to aim for. For many moral questions, there is already broad agreement in the EA community (e.g. the view that all currently existing human lives matter is uncontroversial within EA). On other questions, further thinking would be valuable (e.g. how best to compare human lives to the lives of animals).

Myriad thinkers have done valuable work on this question. Particularly worth mentioning is the work of the Foundational Research Institute, the Global Priorities Project, and the Qualia Research Institute, as well as the Open Philanthropy Project’s work on consciousness & moral patienthood.

To prioritize – improving governance

Improving governance is largely aimed at improving coordination – our ability to mediate diverse preferences, decide on collectively held goals, and work together towards those goals.

Efficient governance institutions are robustly useful in that they keep focus oriented on solving important problems & minimize resource expenditure on zero-sum competitive signaling.

Two routes towards improved governance seem promising: (1) improving the functioning of existing institutions, and (2) experimenting with alternative institutional structures (Robin Hanson’s futarchy proposal and seasteading initiatives are examples here).

To prioritize – improving foresight

Improving foresight & prediction-making ability is important for informing our decisions. The further we can see down the path, the more information we can incorporate into our decision-making, which in turn leads to higher-quality outcomes with fewer surprises.

Forecasting ability can certainly be improved from its current baseline, but there are probably hard limits on how far into the future we can extend our predictions while keeping them believable.

Philip Tetlock’s Good Judgment Project is a promising forecasting intervention, as are prediction markets like PredictIt and polling aggregators like 538.

To prioritize – reducing existential risk

Reducing existential risk can be framed as “avoiding large obstacles that lie ahead.” Avoiding extinction and “lock-in” of suboptimal states is necessary for realizing the full potential benefit of the future.

Many initiatives are underway in the x-risk reduction cause area. Larks’ annual review of AI safety work is excellent; Open Phil has good material about projects focused on other x-risks.

To prioritize – increasing the number of well-intentioned, highly capable people

Well-intentioned, highly capable people are a scarce resource, and will almost certainly continue to be highly useful going forward. Increasing the number of such people seems robustly good, as they are able to diagnose & coordinate on future problems as they arise.

Projects like CFAR and SPARC are in this category.

In a different vein, psychedelic experiences hold promise as a treatment for treatment-resistant depression, and may also improve the intentions of highly capable people who have not reflected much about what matters (“the betterment of well people”).

EA focus areas to deprioritize, maybe

The steering capacity framework suggests deprioritizing animal welfare & global health interventions, to the extent that these interventions’ effectiveness is driven by their proximate impacts.

Under this framework, prioritizing animal welfare & global health interventions may be justified, but only on the basis of improving our intent, coordination, wisdom, capability, or predictive power.

To deprioritize, maybe – animal welfare

To the extent that animal welfare interventions expand our civilization’s moral circle, they may hold promise as interventions that improve our intentions & understanding of what matters (the Sentience Institute is doing work along this line).

However, following this framework, the case for animal welfare interventions has to be made on these grounds, not on the basis of cost-effectively reducing animal suffering in the present.

This is because the animals that are helped in such interventions cannot help “steer the ship” – they cannot contribute to making sure that our civilization’s trajectory is headed in a good direction.

To deprioritize, maybe – global health

To the extent that global health interventions improve coordination, or reduce x-risk by increasing socio-political stability, they may hold promise under the steering capacity framework.

However, the case for global health interventions would have to be made on the grounds of increasing coordination, reducing x-risk, or improving another steering capacity attribute. Arguments for global health interventions on the grounds that they cost-effectively help people in the present day (without consideration of how this bears on our future trajectory) are not competitive under this framework.

Conclusion

In sum, I think the fact that we are intractably clueless implies a portfolio approach to doing good – pursuing, in parallel, a large number of diverse hypotheses about how to do good.

Interventions that improve our understanding of what matters, improve governance, improve prediction-making ability, reduce existential risk, and increase the number of well-intentioned, highly capable people are all promising. Global health & animal welfare interventions may hold promise as well, but the case for these cause areas needs to be made on the basis of improving our steering capacity, not on the basis of their proximate impacts.

Thanks to members of the Mather essay discussion group and an anonymous collaborator for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to LessWrong & my personal blog.


Footnotes

[1]: Nick Beckstead has done the best work I know of on the topic of why the far future matters. This post is a good introduction; for a more in-depth treatment see his PhD thesis, On the Overwhelming Importance of Shaping the Far Future.

[2]: I’m grateful to Ben Hoffman for discussion that fleshed out the “steering capacity” concept; see this comment thread.

[3]: Note that this list of attributes is not exhaustive & this metaphor isn’t perfect. I’ve found the space travel metaphor useful for thinking about cause prioritization given our uncertainty about the far future, so am deploying it here.

[4]: Maintaining reversibility is important because, given our cluelessness, we are unsure of the net impact of any action. When uncertain about overall impact, it’s important to be able to walk back actions that we come to view as net negative.

[5]: I’m not sure how to prioritize these things amongst themselves. Probably improving our understanding of what matters & our predictive power are highest priority, but that’s a very weakly held view.