What to do with people?

I would like to offer one possible answer to the ongoing discussion in the effective altruism community centered on the question of a scalable use of people (“Task Y”).

The following part of the 80000h podcast with Nick Beckstead is a succinct introduction to the problem (as emphasized by alxjrl):

Nick Beckstead: (…) I guess, the way I see it right now is this community doesn’t have currently a scalable use of a lot of people. There’s some groups that have found efficient scalable uses of a lot of people, and they’re using them in different ways.
For example, if you look at something like Teach for America, they identified an area where, “Man, we could really use tons and tons of talented people. We’ll train them up in a specific problem, improving the US education system. Then, we’ll get tons of them to do that. Various of them will keep working on that. Some of them will understand the problems the US education system faces, and fix some of its policy aspects.” That’s very much a scalable use of people. It’s a very clear instruction, and a way that there’s an obvious role for everyone.
I think, the Effective Altruist Community doesn’t have a scalable use of a lot of its highest value … There’s not really a scalable way to accomplish a lot of these highest valued objectives that’s standardised like that. The closest thing we have to that right now is you can earn to give and you can donate to any of the causes that are most favored by the Effective Altruist Community. I would feel like the mass movement version of it would be more compelling if we’d have in mind a really efficient and valuable scalable use of people, which I think is something we’ve figured out less.
I guess what I would say is right now, I think we should figure out how to productively use all of the people who are interested in doing as much good as they can, and focus on filling a lot of higher value roles that we can think of that aren’t always so standardised or something. We don’t need 2000 people to be working on AI strategy, or should be working on technical AI safety exactly. I would focus more on figuring out how we can best use the people that we have right now.

Relevant posts and discussions on the topic can be found under several posts on the forum.

Hierarchical networked structure

The answer I’d like to offer is abstract, but general and scalable. The answer is: “build a hierarchical networked structure”, for lack of a better name. It is best understood as a mild shift of attitude, a concept on a similar level of generality as “prioritization” or “crucial considerations”.

The hierarchical structure can be in physical space, functional space, or research space.

An example of a hierarchy in physical space could be the structure of local effective altruism groups: it is hard to coordinate an unstructured group of ten thousand people. It is less hard, but still difficult, to coordinate a structure of 200 “local groups” with widely different sizes, cultures, and memberships. The optimal solution is likely to coordinate something like 5-25 “regional” coordinators / hub leaders, who then coordinate with the local groups. The underlying theoretical reasons for such a structure are simple considerations like “network distance” or “bandwidth constraints”.
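
As a toy illustration of those two considerations, here is a minimal sketch comparing a flat coordination structure with a two-level one (all the numbers are illustrative assumptions, not data from this post):

```python
# Toy comparison of a flat vs. a two-level coordination structure.
# All numbers are illustrative assumptions.

n_groups = 200                          # local groups to coordinate

# Flat: one central coordinator keeps a direct link to every group.
flat_links_at_center = n_groups         # bandwidth needed at the center: 200 links
flat_group_to_group = 2                 # network distance: group -> center -> group

# Two-level: the center talks to regional hubs; hubs talk to their groups.
n_hubs = 10
groups_per_hub = n_groups // n_hubs     # 20 groups per hub
hier_links_at_center = n_hubs           # bandwidth at the center drops 20x
hier_links_per_hub = 1 + groups_per_hub # each hub manages a feasible 21 links
hier_group_to_group = 4                 # group -> hub -> center -> hub -> group

print(flat_links_at_center, hier_links_at_center, hier_links_per_hub)  # 200 10 21
print(flat_group_to_group, hier_group_to_group)                        # 2 4
```

The two-level structure pays a small price in network distance in exchange for keeping every node’s bandwidth requirement manageable.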

A hierarchy in functional space could be, for example, a hierarchy of organizations and projects providing people career advice. It is difficult to give personalized career advice to tens of thousands of people as a small and lean organization. A scalable, hierarchical version of career advice may look like this: based on a general request, a student considering future study plans is redirected to e.g. Effective Thesis, which specializes in that problem. Further, the student is connected with a specialist coach with object-level knowledge. At my guess, the hierarchical structure could scale approximately 100x beyond a single organization focusing just on picking the few most impactful people.
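
A back-of-the-envelope version of that rough guess (the fan-out numbers are my illustrative assumptions):

```python
# If each layer of a hierarchy fans out to roughly b more specialized
# units, capacity multiplies by b per layer. Numbers are illustrative.

b = 10       # e.g. one intake org refers out to ~10 specialized projects,
layers = 2   # and each project maintains ~10 specialist coaches
print(b ** layers)  # => 100, i.e. the "approximately 100x" rough guess
```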

A hierarchy in research space could be a structure of groups working on various sub-problems and sub-questions. For example, part of the answer to the question “how to influence the long-term future” depends on the extent to which the world is chaotic, or random, or predictable. It would be great to have a group of people working on this. There are thousands of relevant questions and tens of thousands of sub-questions which should be studied from an effective altruist perspective.

In general, hierarchical networked structures are the way complex functional systems are organized and can scale. A closely related concept is “modular decomposition”.

Why networked? I want to point toward the network properties of structures. It is possible to think about some crucial properties of complex systems using concepts from network science: e.g. the average and maximal distance between nodes in the network, the “bandwidth” of links, mechanisms for new link creation, and similar.
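
For the curious, a minimal sketch of computing two of those metrics with the networkx library, using a balanced tree as a stand-in for an idealized hierarchical structure (the branching factor and depth are illustrative assumptions):

```python
# Average and maximal node-to-node distance in an idealized hierarchy.
import networkx as nx

G = nx.balanced_tree(r=5, h=3)  # 5 branches per node, 3 levels deep

print(G.number_of_nodes())                 # 156 nodes in total
print(nx.diameter(G))                      # maximal distance between nodes: 6
print(nx.average_shortest_path_length(G))  # average distance between nodes
```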

Why structures? To put the structural aspects in focus. The word hierarchy has many other meanings and connotations, like status hierarchy, or a top-down, command-and-control style of management, which I do not want to recommend.

How is this different

It may be helpful to contrast creating hierarchical structure with other organizational principles.

Effective altruism has at its heart the principle of prioritization: where pure hierarchization tells you to decompose the whole into subparts and assign someone to deal with each of the parts, pure prioritization tells you to select just the best action, and assign just the best person to do it. Taken to the extreme, prioritization leads to recipes like “find the brightest prodigy, make him or her work on the most important problems in AI safety”. Taken to the extreme, hierarchization leads people to work on obscure questions.
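
To make the contrast concrete, here is a minimal sketch of the two pure principles applied to a hypothetical task tree (the task names and values are made up for illustration):

```python
# Pure prioritization picks the single best task for the single best
# person; pure hierarchization decomposes the whole and produces a role
# for every subpart. The task tree and values below are hypothetical.

task_tree = {
    "influence the long-term future": {
        "AI safety": {"technical research": {}, "strategy": {}},
        "movement building": {"events": {}, "grants": {}, "groups": {}},
    }
}

def prioritize(task_values):
    """Select just the best action, for just the best person."""
    return max(task_values, key=task_values.get)

def hierarchize(tree, path=""):
    """Walk the decomposition and list a role for every leaf subpart."""
    if not tree:
        return [path]
    roles = []
    for name, subtree in tree.items():
        roles += hierarchize(subtree, f"{path}/{name}")
    return roles

print(prioritize({"technical research": 10, "strategy": 7, "events": 3}))
print(hierarchize(task_tree))  # five leaf roles, including the obscure ones
```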

Do not get me wrong: prioritization is a great principle, but I would suggest effective altruism should use hierarchization more than it does.

Another competing (self-)organizational principle is homophily, that is, people’s tendency to form ties with people who are similar to themselves. Where hierarchization leads to different levels of specialization, homophily leads to homogenous clusters of people. Starting with several Oxford utilitarian philosophers, you attract more Oxford utilitarian philosophers (the so-called founder effect). Good ML researchers are more likely to know other good ML researchers. People critical of EA’s organizational landscape will more likely talk to other people dissatisfied with the same problems.

Homophily is in general neither good nor bad. In some ways, it provides immense benefits to the movement (like: we want smart altruistic people), but from a structural perspective it also has significant drawbacks.

Taken together, prioritization and homophily lead to problems. For example, suppose there is a pool of several hundred EAs who are in some ways quite similar: elite university education, good analytic thinkers, concerned about the long-term future, looking mainly for high-impact jobs, without much practical experience in project management, technical disciplines, grant-making, and many other more specialized skills. All of them prioritize their career options, and all of them apply to the research analyst role at OpenPhil. At the same time, despite the pool of talent, organizations have trouble finding people who would fit specific roles, and there is always much more work than people.

I hope you have the general direction by now. If not, these give more of the related background:

https://en.wikipedia.org/wiki/Hierarchy#Examples_of_other_applications

https://en.wikipedia.org/wiki/Hierarchical_network_model

https://en.wikipedia.org/wiki/Efficiency_(network_science)

In practice

While it may be more difficult to turn an answer of the form “go and build hierarchical networked structure” into action than, let’s say, “go and teach”, I’m optimistic that the current effective altruism community is competent enough to be able to use such high-level principles. Moreover, it is not necessary for everyone to work on “structure building”; many people would just “fit into the structure”.

I would expect that a lot would be achievable just by a change of attitude in this direction, both among talented EAs and among the movement’s leaders.

By a rough estimate, for some EA jobs, literally years of work are spent in aggregate by talented people just competing for the positions. I’m confident that similar effort directed toward figuring out what hierarchical structures we need would lead to at least some good plans, and thinking about where one can fit in the structure could lead more people to do useful work.

Note: this requires actual, real, intellectual work. There aren’t any ready-made recipes, lists of what structures to create, network maps, or similar resources.

What we already have and what we should do

To some extent, hierarchies emerge naturally. From the examples described above, the local effective altruism group structure would likely develop toward a 2-layered hierarchy even without much planning. In the research domain, we can see the gradual development of more specialized sub-groups, such as the Center for the Governance of AI within FHI.

What I’m trying to say is that hierarchical structure may be grown more deliberately, and can productively use people.

How is this decision relevant

If the above still sounds very theoretical, I’ll try to illustrate the possible shift of attitude with several examples.

Let’s say you are in the situation of the hundreds of EAs applying for jobs: good university education, good analytical skills, focus on the long-term future, looking mainly for high-impact jobs. Looking at your situation mainly with the “prioritization” attitude, you can easily arrive at the conclusion that some of your best career options are, for example, a research analyst job at OpenPhil, research-management roles at FHI, CHAI, or BERI, or various positions at CEA. Maybe less attractive are jobs at, for example, GiveWell.

What happens if you put on your “build hierarchical networked structure” hat? You pick, for example, “effective altruism movement building” as an area/task (it is likely somewhere near the top of the prioritization). In the next step, you attempt a hierarchical “decomposition” of the area. You can get started just by looking at the past and present internal structures of CEA, with sub-groups or sub-tasks like Events, Grants, or Groups. Each of these “parts” usually needs all of: theoretical work, research and development, and execution and ops. After a bit of looking around, you may find, for example, that there are just a few people systematically trying to create amazing events. There are opportunities to practice: CFAR is often open to ops volunteers, EAG as well; you may run an event for your group, or create some new event which would be useful for the broader community. All of this is impactful work, if not an impactful job. Or, you may find out there isn’t anyone around working exactly on research into EA events. By that, I mean questions like: “How do events lead to impact? How can we measure it? Are there characteristic patterns in how people meet each other? What are the relevant non-EA reference classes for various EA events?” When you try to work on this, you may find that it depends on specific skills, or requires contact with people working on events, so it may be less tractable; but it’s still worth trying. I would also expect good work on this topic to have impact, attract attention, and possibly funding.

While I picked examples from the “EA movement building” cause area, which can ultimately lead to working in effective altruism professionally, that’s not the point. In different cause areas, the “build hierarchical networked structure” attitude can lead to work that doesn’t have the EA label in its name at all, yet is still quite impactful. We need EA experts and professionals in many fields. Also, often the most impactful action may not be doing something directly, but creating a structure, or optimizing some network. A short example: x-risk seems to be a neglected consideration in most of the economics literature. One good option could be to pursue an academic career and work on the topic. Possibly an even better option is to somehow link researchers in academia who are already thinking about these topics at different institutions, e.g. by organizing a seminar.

What can the shift look like for someone in a central position? One change could be described as matching “2nd best options” and “3rd best options” with people. Delegating. Supporting the growth of more specialized efforts.

What good practice may look like: the Center for the Governance of AI has an extensive research agenda. Obviously, the core researchers in the institution should focus on the top-priority problems, but as even some of the sub-problems are still quite important, it may make sense to encourage others to work on them. How might this happen in practice? For example, via the research affiliates program, or by having AI Safety Camp participants work on the topics.

Another example: let’s say you are 80000h, an effective altruist organization trying to help people have impact with their careers. You prioritize, focusing mainly on moving ML PhDs to AI safety and impressive policy people to the governance of AI. At the same time, you are running the currently largest EA mass outreach project. The unfortunate result is that almost all the people interested in having impactful careers have to rely just on the website, and only a tiny fraction get some personal support.

What might a hierarchical networked structure approach look like? For example, distilling the coaching knowledge and creating a guide for professional EA group organizers to provide coaching to a less exclusive group of effective altruists. There are now dozens of professional EA community builders, and EA career coaching is part of their daily jobs, yet insofar as there is more knowledge than what is on the website, they are mostly left to rediscover it.

How can the shift look for someone working in the funding part of the ecosystem? One obvious way is to encourage re-granting. This is happening to some extent: it likely does not make sense for OpenPhil to evaluate $10,000 grant applications, so such projects are a better fit for EA Grants. Yet there are impactful small things which are so small that it does not make sense to evaluate them even as EA Grants; these could be supported e.g. by community builders in larger EA groups.

Another opportunity for networked hierarchical structures is in project evaluations and talent scouting. Instead of relying mainly on informal personal networks of grant evaluators, there could be more formal structures of trusted experts.

Possible problems

It is possible that some important tasks are not decomposable in a way which would be good for delegating them to hierarchical structures.

Hierarchical structures composed of a large number of people have significant inertia, and when they gain momentum, it may be hard to steer them. (Think about bureaucracies.)

  • I agree this is true, but in my view it would be good to have some parts of the effective altruism movement which have more of this property. It seems to me that in the current state, too many EAs are too “fluid”, willing to change plans often based on the latest prioritization results or 80000h posts (e.g. someone switching from a research career to EtG, then switching back to the study of x-risks, then considering ops roles, etc.).

  • Also, I would consider it a good result if the “trail” behind the core of the effective altruism movement was dotted with structures and organizations working on highly impactful problems, even if those problems are no longer in exactly first place in the current prioritization.

It is difficult to create such structures, and very few people have the relevant skills.

  • I’m generally sceptical of such arguments. The effective altruism movement has managed to gather an impressively competent group of people, and many of the “new” EAs do not seem to be less competent than the “old” EAs who built the existing structures. For example, I would expect the current community to contain a number of people as generally competent as Robert Wiblin or Nick Beckstead, which makes me optimistic about the structures they would create.

Remarks and discussion

The above is a rough sketch, pointing in one possible direction for how more people can do as much good as possible. It is not intended as a suggestion for scaling effective altruism to truly mass proportions, not to speak of hundreds of millions of people. But that is also not the situation we are in: the reality is that effective altruism currently does not know how to utilize even thousands of people, apart from earning to give. My hope is that a shift toward building hierarchical networked structures would help.

A big weakness of this set of ideas is that it is likely not memetically fit in its present form. “Building hierarchical networked structure” is a bad name, and this post isn’t a nice one-paragraph introduction. Just finding a better name could be a big improvement (for various reasons, that is also hard for me; I would really appreciate suggestions).

I would like to thank many EAs for comments and discussions on the topic.