The Values-to-Actions Decision Chain: a rough model

I’m curious to hear your thoughts on the rough model described below. The text is taken from our one-pager on organising the EAGx Netherlands conference in late June.

EDIT: By implying below that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase their impact (and that the professor can in turn learn more about organisational processes and personal effectiveness), I don’t mean to say that they should both become generalists. Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.
In a similar vein, I think it makes sense for CEA’s Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events, and for the Local Effective Altruism Network to specialise in helping local groups get active and providing them with ICT support. However, I can think of six past instances where it seems that either CEA or LEAN could have avoided a mistake by incorporating the thinking of the other party at decision levels where that party was stronger.

EA Netherlands’ focus in 2018 is on building up a tight-knit and active core group of individuals who are exceptionally capable at doing good. We assume that ‘capacity to do good’ is roughly log-normally distributed, and that an individual’s position on this curve results from effective traits acting as multipliers. We’ve found this ‘values-to-actions chain’ useful for decomposing that capacity:

capacity = values x epistemology x causes x strategies x systems x actions

That is, capacity to do good increases with the rigour of chained decisions – from higher meta-levels (e.g. on moral uncertainty and crucial considerations) to getting things done on the ground.
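The multiplicative assumption above can be illustrated with a toy simulation: if each decision level contributes an independent multiplier to capacity, then the log of capacity is a sum of independent terms, which by the central limit theorem is approximately normal, so capacity itself is approximately log-normal. The multiplier range below is made up purely for illustration, not taken from any data.

```python
import math
import random
import statistics

random.seed(0)

LEVELS = ["values", "epistemology", "causes", "strategies", "systems", "actions"]

def sample_capacity():
    # Each decision level contributes an independent multiplier;
    # the uniform range here is an arbitrary illustrative choice.
    return math.prod(random.uniform(0.5, 2.0) for _ in LEVELS)

capacities = [sample_capacity() for _ in range(100_000)]

# log(capacity) is a sum of independent terms, so it should look
# roughly normal, while capacity itself is right-skewed (log-normal-ish):
logs = [math.log(c) for c in capacities]
print("mean of log-capacity:", statistics.mean(logs))
print("median capacity:", statistics.median(capacities))
print("mean capacity:  ", statistics.mean(capacities))
```

The right-skew (mean above median) is the practical point of the model: a small number of people whose multipliers are strong across *all* levels account for a disproportionate share of total capacity.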

However, the capabilities of individuals in the Dutch EA community – and the social networks they are embedded in – tend to be unevenly distributed across these levels. On one end, people at LessWrong meetups seem relatively strong at deliberating about which cause areas are promising to them (AI alignment, mental health, etc.), but this often results in intellectual banter rather than concrete next actions. The academics and student EA group leaders we’re in touch with face similar problems. On the other end, some of the young professionals in our community (graduates from university colleges, social entrepreneurs, etc.), as well as philanthropists and business leaders (through Effective Giving), have impressive track records in scaling organisations, but haven’t yet deliberated much on which domain to focus their entrepreneurial efforts on.

Our tentative opinion is that the individuals who build their capacity to do good the fastest are those most capable of rationally correcting decisions at one level and propagating those corrections through a broad set of levels (since a person’s corrigibility at a given level of abstraction determines how quickly they update their beliefs there in response to feedback).