The Values-to-Actions Decision Chain: a rough model

I’m curious to hear your thoughts on the rough model described below. The text is taken from our one-pager on organising the EAGx Netherlands conference in late June.

EDIT: By implying below that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and the professor can learn more about organisational processes and personal effectiveness), I don’t mean to say that they should both become generalists. Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.
In a similar vein, I think it makes sense for CEA’s Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events, and for the Local Effective Altruism Network (LEAN) to help local groups get active and provide them with ICT support. However, I can think of six past instances where it seems that either CEA or LEAN could potentially have avoided a mistake by incorporating the thinking of the other party at decision levels where it was stronger.

EA Netherlands’ focus in 2018 is on building up a tight-knit and active core group of individuals who are exceptionally capable at doing good. We assume that ‘capacity to do good’ is roughly log-normally distributed, and that an individual’s position on this curve results from effective traits acting as multipliers. We’ve found this ‘values-to-actions chain’ useful for decomposing it:

capacity = values × epistemology × causes × strategies × systems × actions

That is, capacity to do good increases with the rigour of chained decisions, from higher meta-levels (e.g. on moral uncertainty and crucial considerations) down to getting things done on the ground.
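To make the multiplicative assumption concrete, here is a minimal simulation sketch (in Python, using made-up multiplier distributions rather than anything we have measured) of why independent multipliers at each decision level would produce a right-skewed, roughly log-normal distribution of capacity:

```python
import numpy as np

# Minimal sketch (hypothetical numbers, not measured data): if each decision level
# contributes an independent positive multiplier, then capacity (the product of
# the multipliers) ends up approximately log-normally distributed across people.

rng = np.random.default_rng(0)
levels = ["values", "epistemology", "causes", "strategies", "systems", "actions"]

n_people = 100_000
# Illustrative assumption: each person's multiplier at each level is an
# independent log-normal(0, 0.5) draw.
multipliers = rng.lognormal(mean=0.0, sigma=0.5, size=(n_people, len(levels)))

# log(capacity) is a sum of independent terms, so it is approximately normal,
# which makes capacity itself approximately log-normal and right-skewed.
capacity = multipliers.prod(axis=1)

print(f"median capacity:  {np.median(capacity):.2f}")
print(f"mean capacity:    {capacity.mean():.2f}")   # mean exceeds the median
print(f"top 1% threshold: {np.quantile(capacity, 0.99):.2f}")
```

The exact parameters don’t matter for the point; any set of independent positive multipliers gives the same qualitative shape, which is why we expect a heavy right tail in capacity to do good.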

However, the capabilities of individuals in the Dutch EA community, and the social networks they are embedded in, tend to be unevenly distributed across these levels. On one end, people at LessWrong meetups seem relatively strong at deliberating which cause areas are promising to them (AI alignment, mental health, etc.), but this often results in intellectual banter rather than next actions to take. The academics and student EA group leaders that we’re in touch with face similar problems. On the other end, some of the young professionals in our community (graduates from university colleges, social entrepreneurs, etc.), as well as philanthropists and business leaders (through Effective Giving), have impressive track records in scaling organisations, but haven’t yet deliberated much on which domain they should focus their entrepreneurial efforts on.

Our tentative opinion is that the individuals who build their capacity to do good the fastest are those most capable of rationally correcting and propagating their decisions through a broad set of levels (since a person’s corrigibility at a given level of abstraction determines how quickly they update their beliefs there in response to feedback).