The Values-to-Actions Decision Chain: a rough model
I’m curious to hear your thoughts on the rough model described below. The text is taken from our one-pager on organising the EAGx Netherlands conference in late June.
EDIT: By implying below that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and the professor can learn more about organisational processes and personal effectiveness), I don’t mean to say that they should both become generalists. Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.
In a similar vein, I think it makes sense for CEA’s Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events and for Local Effective Altruism Network to help local groups get active and provide them with ICT support. However, I can think of 6 past instances where it seems that either CEA or LEAN could have avoided a mistake by incorporating the thinking of the other party at decision levels where it was stronger.
“EA Netherlands’ focus in 2018 is on building up a tight-knit and active core group of individuals that are exceptionally capable at doing good. We assume that ‘capacity to do good’ is roughly log-normally distributed, and that an individual’s position on this curve results from effective traits acting as multipliers. We’ve found this ‘values-to-actions chain’ useful for decomposing it:
capacity ∝ values × epistemology × causes × strategies × systems × actions
That is, capacity to do good increases with the rigour of chained decisions – from higher meta-levels (e.g. on moral uncertainty and crucial considerations) to getting things done on the ground.
However, the capabilities of individuals in the Dutch EA community – and the social networks they are embedded in – tend to be unevenly distributed across these levels. On one end, people at LessWrong meetups seem relatively strong at deliberating on which cause areas look promising to them (AI alignment, mental health, etc.), but this often results in intellectual banter rather than concrete next actions. The academics and student EA group leaders that we’re in touch with face similar problems. On the other end, some of the young professionals in our community (graduates from university colleges, social entrepreneurs, etc.) as well as philanthropists and business leaders (through Effective Giving) have impressive track records in scaling organisations, but haven’t yet deliberated much on which domain to focus their entrepreneurial efforts on.
Our tentative opinion is that the individuals who build their capacity to do good the fastest are those most capable of rationally correcting and propagating their decisions through a broad set of levels (since a person’s corrigibility at a given level of abstraction determines how fast they update their beliefs there with feedback).”
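The log-normal assumption above can be sanity-checked numerically. Below is a minimal sketch (my own illustration with made-up trait distributions, not taken from the one-pager): multiplying six independent positive trait factors yields a heavy-tailed distribution whose logarithm looks roughly normal, since the log of a product is a sum of independent terms.

```python
# Minimal sketch (assumed, not from the one-pager): if capacity is a product
# of independent positive trait multipliers, then log(capacity) is a sum of
# independent terms, so capacity comes out approximately log-normal.
import numpy as np

rng = np.random.default_rng(seed=0)
n_people, n_levels = 100_000, 6

# Hypothetical trait multipliers, e.g. each uniform on [0.5, 2.0].
traits = rng.uniform(0.5, 2.0, size=(n_people, n_levels))
capacity = traits.prod(axis=1)

def skewness(x):
    """Sample skewness: roughly 0 for a normal distribution."""
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

print(f"skew(capacity)     = {skewness(capacity):.2f}")          # heavy right tail
print(f"skew(log capacity) = {skewness(np.log(capacity)):.2f}")  # near zero
```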
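The corrigibility claim can be made concrete in a similar spirit. A hedged sketch, under the simplifying assumption that updating at a given level moves a belief a fixed fraction of the way toward each round of feedback: that fraction acts as a corrigibility parameter, and the speed of convergence scales directly with it.

```python
# Hedged sketch: corrigibility modelled as a learning rate for belief updates,
#   belief_{t+1} = belief_t + corrigibility * (feedback_t - belief_t)
# Higher corrigibility means the residual error shrinks faster per round.

def rounds_to_converge(corrigibility: float, target: float = 1.0,
                       start: float = 0.0, tol: float = 0.01) -> int:
    """Feedback rounds until the belief is within tol of the target."""
    belief, rounds = start, 0
    while abs(target - belief) > tol:
        belief += corrigibility * (target - belief)
        rounds += 1
    return rounds

for c in (0.1, 0.3, 0.7):
    print(f"corrigibility={c}: {rounds_to_converge(c)} rounds")
# corrigibility=0.1: 44 rounds, 0.3: 13, 0.7: 4 (geometric convergence)
```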
Could you be a little more specific about the levels/traits you name? I’m interpreting them roughly as follows:
Values: “how close are their values to the moral truth or our current understanding of it” (replace moral truth with whatever you want values to approximate).
Epistemology: how well do people respond to new and relevant information?
Causes: how effective are the causes in comparison to other causes?
Strategies: how well are strategies chosen within those causes?
Systems: how well are the actors embedded in a supportive and complementary system?
Actions: how well are the strategies executed?
I think a rough categorisation of these 6 traits would be Prioritisation (Values, Epistemology, Causes) & Execution (Strategies, Systems, Actions), and I suppose you’d expect a stronger correlation within these two branches than between?
Yeah, I more or less agree with your interpretations.
The number (as well as the scope) of the decision levels is somewhat arbitrary, because each level can be split further. For example:
Values: meta-ethics, normative ethics
Epistemology: defining knowledge, approaches to acquiring it (Bayes, Occam’s razor...), applications (scientific method, crucial considerations...)
Causes: the domains can be made as narrow or wide as seems useful for prioritising
Strategies: career path, business plan, theory of change...
Systems: organisational structure, workflow, to-do list...
Actions: execute intention (“talk with Jane”), actuate (“twitch vocal cords”)
(Also, there are weird interdependencies here. E.g. if you change the cause area you work on, the career skills you acquired before might not be as effective there, so the corresponding multiplier changes. I’m assuming that skills tend to be transferable enough for the model to still be useful.)
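To illustrate that interdependency (hypothetical numbers, and a made-up skill_transfer parameter of my own, purely for illustration): the strategies multiplier can be discounted by how well existing skills carry over to a newly chosen cause.

```python
# Hypothetical sketch of the interdependency: switching cause areas
# discounts the strategies multiplier by a skill-transfer factor.

def capacity(values, epistemology, causes, strategies, systems, actions,
             skill_transfer=1.0):
    # Strategy-level skills only partially carry over to a new cause area.
    return (values * epistemology * causes *
            strategies * skill_transfer * systems * actions)

# Staying in the original cause: full skill transfer.
before = capacity(1.0, 1.0, 1.0, 2.0, 1.0, 1.0, skill_transfer=1.0)
# Switching to a 3x-more-effective cause, but only 50% of skills transfer.
after = capacity(1.0, 1.0, 3.0, 2.0, 1.0, 1.0, skill_transfer=0.5)
print(before, after)  # 2.0 vs 3.0 -> the switch still wins in this toy case
```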
Your two categories of Prioritisation and Execution seem fitting. Perhaps some people lean more towards wanting to see concrete results, and others more towards wanting to know what results they want to get?
Does anyone disagree with the hypothesis that individuals – especially newcomers – in the international EA community tend to lean one way or the other in terms of attention spent and the rigour with which they make decisions?
To clarify: by implying that, for example, a social entrepreneur should learn about population ethics from an Oxford professor to increase impact (and the professor can learn more about organisational processes and personal effectiveness), I don’t mean to say that they should both become generalists.
Rather, I mean to convey that the EA network enables people here to divide labour at particular decision levels and then pass on tasks and learned information to each other through collaborations, reciprocal favours and payments.
In a similar vein, I think it makes sense for CEA’s Community Team to specialise in engaging existing community members on high-level EA concepts at weekend events and for Local Effective Altruism Network to help local groups get active and provide them with ICT support.
However, I can think of 6 past instances where it seems that either CEA or LEAN could have avoided a mistake by incorporating the thinking of the other party at decision levels where it was stronger.
I think it would be better to include this in the OP.
Will do!