The Values-to-Actions Decision Chain: a lens for improving coordination

This post contains:
1. an exposition of a high-level model
2. some claims on what this might mean strategically for the EA community

Effective Altruism is challenging. Some considerations require you to zoom out to take an eagle-eye view across a vast landscape of possibility (e.g. to research moral uncertainty), while other considerations require you to swoop in to see the details (e.g. to welcome someone new). The distance from which you're looking down at a problem is the construal level you're thinking at.

People involved in the EA community can gain a lot from improving their grasp of construal levels – the levels they or others naturally incline towards, and even the level they're operating at in any given moment (leading, for instance, to less disconnect in conversations). In my view, a lack of construal-level awareness, combined with a lack of sense for how to interact in large social networks, has left a major blind spot in how we collectively make decisions.

The Values-to-Actions Decision Chain (in short: ‘decision chain’) is an approach for you to start solving the awe-inspiring problem of ‘doing the most good’ by splitting it into a series of decisions you will make from high to low construal levels (creating, in effect, a hierarchy of goals). It is also a lens through which you can more clearly see your own limits to doing this and compensate by coordinating better with others. Beware, though, that it's a blurry and distorted lens – this post itself is a high construal-level exercise, and many important nuances have been eliminated in the process. I'd be unsurprised if I ended up revising many of the ideas and implications in here after another year of thinking.

Chain up

To illustrate how to use the V2ADC (see diagram below):

Suppose an ethics professor decided that...

  • ...from the point of view of the universe… (meta-ethics)

  • ...hedonistic utilitarianism made sense… (moral theories)

  • ...and that therefore humans in developing countries should not suffer unnecessarily… (worldview)

  • ...leading him to work on reducing global poverty… (focus area)

  • ...by reducing a variety of easily treatable diseases… (problems)

  • ...by advocating for citizens of rich countries to pledge 1% of their income… (intervention)

  • ...by starting a non-profit… (project)

  • ...where he works with a team of staff members… (workflow)

  • ...to prepare his TED talk… (task batches)

  • ...to cover the decision to donate for guide dogs vs. treating trachoma… (task)

  • ...which he mentions… (execute)

  • ...by vibrating his vocal cords (actuate)

The Values-to-Actions Decision Chain. Note that the categorisation here is somewhat arbitrary and open to interpretation; let me know if something's unclear or should be changed (or if you disagree with the diagram on some fundamental level). Also see the headings on either side of the diagram; it could also be called the Observations-to-Epistemics Integration Chain, but that's less catchy.
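If it helps to see the example as one ordered structure, here is a minimal sketch in Python (level names as in the diagram; the pairing with the professor's decisions is just the example above restated, purely for illustration):

```python
# A sketch of the example chain as an ordered hierarchy, from high to low construal level.
# Level names follow the diagram; the decisions are the professor example restated.
from collections import OrderedDict

values_to_actions_chain = OrderedDict([
    ("meta-ethics",    "from the point of view of the universe"),
    ("moral theories", "hedonistic utilitarianism makes sense"),
    ("worldview",      "humans in developing countries should not suffer unnecessarily"),
    ("focus area",     "reduce global poverty"),
    ("problems",       "reduce easily treatable diseases"),
    ("intervention",   "advocate a 1% income pledge in rich countries"),
    ("project",        "start a non-profit"),
    ("workflow",       "work with a team of staff members"),
    ("task batches",   "prepare a TED talk"),
    ("task",           "cover guide dogs vs. treating trachoma"),
    ("execute",        "mention it in the talk"),
    ("actuate",        "vibrate vocal cords"),
])

# Each decision constrains the one below it; print from high to low construal level.
for level, decision in values_to_actions_chain.items():
    print(f"{level:>14}: {decision}")
```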

It should be obvious at this point that the professor isn't some lone ranger battling it out to stop world suffering by himself. Along the way, the professor studies classical moral philosophers and works with an executive director, who in turn hires a website developer, who creates a donation button, which a donor clicks to donate money to a malaria prevention charity, which the charity's treasurer uses to pay out the salary of a new local worker, who hands out a bednet to a mother of three children, who hangs the net over the bed in the designated way.

It takes a chain of chains to affect those whose lives were originally intended to be improved. As you go down from one chain to the next, the work a person focuses on also gets more concrete (a lower construal level).

However, the chain of chains can easily be broken if any person doesn't pay sufficient attention to making decisions at other construal levels (although some are more replaceable than others). For example, if instead of starting a charity, the philosopher had decided to be content with voicing his grievances at a lecture, probably little would have come of it. Or if the executive director or website developer had decided that working at any foreign aid charity was fine. Or if the donor had decided not to read into the underlying vision of the website and instead donated to a local charity. And so on.

Making the entire operation happen requires tight coordination between numerous people who are able both to skilfully conduct their own work and to see the importance of the more abstract or concrete work done by the others they are interacting with. If instead each relied on the use of financial and social incentives to motivate others, it would be a most daunting endeavour.

Cross the chasm

Imagine someone visiting an EA meetup for the first time. If the person stepping through the doorway was an academic (or LessWrong addict), they might be thrilled to see a bunch of nerdy people engaging in intellectual discussions. But an activist (or social entrepreneur) stepping in would feel more at home seeing agile project teams energetically typing away at their laptops. Right now, most local EA groups emphasise the former format, I suspect in part due to CEA's deep engagement stance, and in part because it's hard to find volunteer projects that have sufficient direct impact.

If the academic and activist happened to strike up a conversation, both sides could have trouble seeing the value produced by the other, because each is stuck at an entirely different level of abstraction. An organiser could help solve this disconnect by gradually encouraging the activist to go meta (chunking up) and the academic to apply their brilliant insights to real-life problems (chunking down).

As a community builder, I've found that this chasm seems especially wide for newcomers. Although the chasm gets filled up over time, people who've been involved in the EA community for years still encounter it, with some emphasising work on epistemics and values, and others emphasising work to gather data and get things done. The middle area seems neglected (i.e. deciding on the problem, intervention, project, and workflow).

Put simply, EAs tend to emphasise one of two categories:

  • Prioritisation (high level / far mode): figuring out in what areas to do work

  • Execution (low level / near mode): getting results on the ground

Note: I don't base this on any psychological studies, which seems like the weakest link of this post. I'm curious to get an impartial view from someone well-versed in the literature.

Those who instead appreciate the importance of making decisions across all levels tend to do more good. For example, in my opinion:

  • Peter Singer is exceptionally impactful not just because he is an excellent utilitarian philosopher, but because he argues for changes in foreign aid, factory farm practices, and so on.

  • Tanya Singh is exceptionally impactful not just because she was excellent at operations at an online shopping site, but because she later applied to work at the Future of Humanity Institute.

Commit & correct

The common thread so far is that you need to link up work done at various construal levels in order to do more good.

To make this more specific:
Individuals who increase their impact the fastest tend to be

  • corrigible – they quickly integrate new information into their beliefs, from observations through to epistemics

  • committed – they propagate the decisions they make from high-level values down to the actions they take on the ground (put another way: they actually pursue the goals they set for themselves)

To put this into pseudomaths:
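Since the formula itself is in the diagram, here is a rough sketch of the shape it takes, under the assumption (taken from the note below) that the two traits multiply across levels:

```latex
% A sketch, not the diagram's exact formula: impact as a product over construal levels
% of how corrigible and how committed you are at each level.
\[
  \text{impact} \;\propto\; \prod_{\text{level}=1}^{N}
  \Big( \text{corrigibility}_{\text{level}} \times \text{commitment}_{\text{level}} \Big)
\]
```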

Where level refers to the level numbers as in the diagram.

Note: This multiplication between levels assumes that if you change decisions at higher levels (e.g. by changing what charity organisation you work at), you are able to transfer the skills and know-how you've acquired at lower levels (this works well with e.g. operations skills but not as well with e.g. street protest skills). Also, though the multiplication of traits in this formula implies that the impact of some EAs is orders of magnitude higher than that of others, it's hard to evaluate these traits as an outsider.

Specialise & transact

Does becoming good at all construal levels mean we should all become generalists? No – I actually think that, as a growing community, we're doing a poor job of dividing up labour compared to what I see as the gold standard: decentralised market exchange. This is where we can use the V2ADC to zoom out over the entire community:

The circles illustrate network clusters of people who exchange a lot with each other. Interestingly, a cluster of agents working towards (shared) goals can be seen as a super-agent with resulting super-goals, just as one person with sub-goals can be seen as an amalgamation of interacting sub-agents.

A more realistic but messy depiction of our community's interactions would look like this (+ line weights to denote how much people exchange). Also, people tend to actually cluster at each construal level – e.g. there are project team clusters, corporate animal welfare outreach clusters, factory farming reduction clusters, animal welfare clusters, and so on. Many of these clusters contain people who are not (yet) committed to EA, which makes sense, both for our ability to do good together and for introducing new people to the principles of EA.

See here for a great academic intro to social networks.
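As a toy illustration of "clusters of people who exchange a lot with each other": the sketch below builds a small weighted network and finds its clusters. The names, tie weights, and the choice of networkx's modularity-based community detection are illustrative assumptions, not data about the actual community.

```python
# A toy weighted exchange network: nodes are people, edge weights are how much they exchange.
# Names and weights are made up for illustration only.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
exchanges = [
    ("philosopher", "researcher", 5),   # high-construal cluster
    ("researcher", "grantmaker", 4),
    ("grantmaker", "project_lead", 2),  # a weaker tie bridging clusters
    ("project_lead", "ops_staff", 5),   # low-construal cluster
    ("ops_staff", "volunteer", 4),
]
for a, b, w in exchanges:
    G.add_edge(a, b, weight=w)

# Detect clusters of people who exchange a lot with each other.
clusters = community.greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, start=1):
    print(f"cluster {i}: {sorted(cluster)}")
```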

As more individuals become connected within the EA network, each should specialise, at particular construal levels, in a role they excel at. They can then transact with others to acquire other needed information and delegate remaining work (e.g. an operations staff member can both learn from a philosopher and take over organisational tasks that are too much for them). Exceptions: leaders and professional networkers often need to have a broad focus across all of these levels, as they function as hubs – judiciously relaying information and work requests from one network cluster to another.

By a ‘transaction’, I mean a method of giving resources to someone (financial, human, social, or temporal capital) in return for progress on your goals (i.e. to get something back that satisfies your preferences).

Here are three transactions common to EAs:

  1. Collaborations involve giving resources to someone whom you trust, who has the same goals, and who is capable of using your resources to make progress on those goals. When this is the case, EAs will tend to transact more and cluster in groups to capture the added value (the higher alignment in collaborations reduces principal–agent problems).

  2. Reciprocal favours involve giving up a tiny amount of your resources to help someone make disproportionate progress on goals that are unaligned with yours (e.g. connecting someone working on another problem with a colleague), with an implicit expectation of that person returning the favour at some point in the future (put another way, it increases your social capital). It's a practical alternative to moral trade (instead of e.g. signing contracts, which is both time-consuming and socially awkward). The downside of reciprocal favours is that you often won't be able to offer a specific resource that the other party wants. This is where a medium of exchange comes in useful instead:

  3. Payments involve giving someone money based on the recipient's stated intent of what they will do with that money.

EAs are never perfectly (mis)aligned – they will be more aligned at some levels and less aligned at others. For example:

  1. You can often collaborate with other EAs on shared goals if you pick the levels right. For one, most EAs strongly value internal consistency and rigour, so it's easy to start a conversation as a high-level collaboration to get closer to ‘the truth’. To illustrate, though Dickens and Tomasik clearly disagreed here on moral realism vs. anti-realism, they still collaborated on understanding the problem of AI alignment better.

  2. Where goals diverge, however, reciprocal favours can create shared value. This can happen when e.g. a foreign aid policy maker types out a quick email for an animal charity director to connect her with a colleague (though if the policy maker made the decision because he's uncertain whether global poverty is the best focus area, it's a collaboration). But in a fundamental way, we're all unaligned: EAs, like other humans, have an intrinsic drive to show off their altruistic deeds through signalling. The recipients of these signals have the choice of whether or not to encourage them to make these decisions again by giving back compliments, gifts, and other tokens of social status. This in turn influences the community's norms at large.

  3. Although the employers who pay the salaries tend to roughly agree with their employees on e.g. what problems and interventions to work on, the employees also have sub-goals (such as feeling safe, comfortable, and admired by others) that are personal to them.

By ‘transacting’ with others you're able to compensate for your personal limitations in achieving your goals: the fact that you'll never be able to acquire all the required knowledge yourself, nor do all work as skilfully as the few things you can become particularly capable at.

Integrate the low-level into the high-level

Most of this post has been about pushing values down into actions, which implies that people doing low-construal-level work should merely follow instructions from above. Although it's indeed useful for those people to use prioritisation advice to decide where to do work, they also fulfil the essential function of feeding back information that can be used to update overarching models.

We face a major risk of ideological rust in our community. This is where people who are working out high-level decisions either don't receive enough information from below or no longer respond to it. As a result, their models drift away from reality and their prioritisation advice becomes misguided. To illustrate this…

At a Strategies level, you find that much of AI alignment research is built on paradigms like ‘the intelligence explosion’ and ‘utility functions’ that arose from pioneering work done by the Future of Humanity Institute and the Machine Intelligence Research Institute. Fortunately, leaders within the community are aware of the information cascades this can lead to, but the question remains whether they're integrating insights on machine learning progress fast enough into their organisations' strategies.

At a Causes level, a significant proportion of the EA community champions work on AI safety. But then there's the question: how many others are doing specialised research on the risks of pandemics, nanotechnology, and so on? And how much of this gets integrated into new cause rankings?

At a Values level, it is crazy how one person's worldview leads them to work on safeguarding the existence of future generations, another's on preventing their suffering, and another's on neither. This reflects actual moral uncertainty – to build up a worldview, you basically need to integrate most of your life experiences into a workable model. Having philosophers and researchers explore diverse worldviews and exchange arguments is essential in ensuring that we don't rust into our current conjectures.

Now extend it to organisational structure:
We should also use the principle of decentralised experimentation, exchange and integration of information more in how we structure EA organisations. There has been a tendency to concentrate resources (financial, human and social capital) within a few organisations like the Open Philanthropy Project and the Centre for Effective Altruism, which then set the agenda for the rest of the community (i.e. push decisions down their chains).

This seems somewhat misguided. Larger organisations do have less redundancy and can divide up tasks better internally. But a team of 24 staff members is still at a clear cognitive disadvantage at gathering and processing low-level data compared to a decentralised exchange between 1,000 committed EAs. By themselves, they can't zoom in closely on enough details to update their high-level decision models appropriately. In other words, concentrated decision-making leads to fragile decision-making – just as it has done for central planning.

Granted, it is hard to find people you can trust to delegate work to. OpenPhil and CEA are making headway in allocating funding to specialised experts (e.g. OpenPhil's allocation to CEA, which in turn allocated to EA Grants) and collaborating with organisations that gather and analyse more detailed data (e.g. CEA's individual outreach team working with the Local Effective Altruism Network). My worry is that they're not delegating enough.

Given the uncertainty they are facing, most of OpenPhil's charity recommendations and CEA's community-building policies should be overturned or radically altered in the next few decades. That is, if they actually discover their mistakes.
This means it's crucial for them to encourage more people to do local, contained experiments and then integrate their results into more accurate models.

EDIT: see these comments on where they could create better systems to facilitate this:

Private donors who have the time and aptitude to research specialised problem niches and feed up their findings to bigger funders should do so (likewise, CEA should actively discourage these specific donors from donating to EA Funds). Local community builders should test out different event formats and feed the outcomes up to LEAN. And so on.

However, if the bigger organisations would hardly use this information, most of the exploration value would get lost. Understandably, this outpouring of data is too overwhelming for any human (tribe) to process manually.

We therefore need to build vertically integrated data analysis platforms that separate the signal from the noise, update higher-level models, and then share those models with relevant people. On these platforms, people can upload data, use rigorous data mining techniques, and share the results through targeted channels like the PriorityWiki.

I introduced this decision chain in a previous post, which I revised after chatting with thoughtful people at two CEA retreats. Thanks to Kit Harris for his incisive feedback on the categorisation, Siebe Rozendal and Fokel Ellen for their insightful text corrections, and Max Dalton, Victor Sint Nicolaas and all the other people who bore with me last year when I was still forming these ideas.

If you found this analysis valuable, consider making a donation or emailing me at remmelt@effectiefaltruisme.nl (also if you want to avoid transaction costs by making a bank transfer). My student loan payments are stopping in August, so you're close to funding my work at peak marginal returns (note: I have also recently applied to EA Grants to extend my financial runway).

Also, if you're at EAGx Netherlands, feel free to grill me at my workshop. :-)
Cross-posted on LessWrong.