Complexity and the Search for Leverage

For me, two of the core elements of effective altruism are:

  1. Do the most good

  2. Have good reasons for believing it is the most good

This was fairly straightforward to strive for when I was comparing the cost-effectiveness of charities. It got harder when I decided to found a new not-for-profit to work on challenges not yet fully explored. As I see more and more of the interconnected, complex world, these two elements feel increasingly at odds with each other.

Building a spreadsheet works for comparing charities, but it breaks down with circular-reference errors if you try to model a complex system.
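The spreadsheet metaphor can be made concrete. The sketch below (all names and numbers invented for illustration) resolves "cells" in dependency order, which works fine for the acyclic arithmetic of charity comparison but fails with the familiar circular-reference error the moment the model contains a feedback loop:

```python
# Hypothetical sketch: a spreadsheet-style evaluator. Acyclic models
# evaluate cleanly; a feedback loop (the hallmark of a complex system)
# triggers the spreadsheet's "circular reference" error.

def evaluate(cells, deps):
    """cells: name -> function of its resolved dependency values.
    deps: name -> list of names that cell depends on."""
    resolved = {}

    def resolve(name, stack=()):
        if name in resolved:
            return resolved[name]
        if name in stack:
            raise ValueError("Circular reference: " + " -> ".join(stack + (name,)))
        values = [resolve(d, stack + (name,)) for d in deps.get(name, [])]
        resolved[name] = cells[name](*values)
        return resolved[name]

    for name in cells:
        resolve(name)
    return resolved

# Comparing charities is acyclic: cost and outputs feed cost-per-output.
simple = evaluate(
    {"cost": lambda: 100_000, "outputs": lambda: 5_000,
     "cost_per_output": lambda c, o: c / o},
    {"cost_per_output": ["cost", "outputs"]},
)
assert simple["cost_per_output"] == 20.0

# A complex system feeds back on itself: funding depends on enrolment,
# enrolment depends on funding. One-pass evaluation cannot resolve it.
try:
    evaluate(
        {"funding": lambda e: 10 * e, "enrolment": lambda f: 0.1 * f},
        {"funding": ["enrolment"], "enrolment": ["funding"]},
    )
except ValueError as err:
    print(err)
```

Real complex systems need iterative or dynamical approaches (simulation, system dynamics) rather than one-pass arithmetic, which is exactly the gap this post is gesturing at.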


Dave Snowden has developed the Cynefin (ke-nev-in) framework (article explainer, 5-min video), which offers a lens on how we might make sense of different systems:

Creating a cost-benefit breakdown of various interventions might be a complicated task—we can see the inputs and outputs of a given charity, make some expected-value assumptions, and figure out what to do. Designing a new intervention to transform the education system in a country is more complex—there are so many variables that can greatly affect the impact of any individual project that it quickly becomes combinatorially explosive to assess how each variable interacts with the others, and how each relationship is affected by other contextual factors over time.
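To make the 'complicated' case concrete, here is a minimal sketch of the kind of expected-value comparison described above. The charities, the numbers, and the discount-by-confidence heuristic are all invented for illustration; real cost-effectiveness analyses are far richer:

```python
# Illustrative only: names and figures are made up. In the 'complicated'
# domain, comparing charities reduces to expected-value arithmetic over
# visible inputs and outputs.

charities = {
    # name: (annual cost in $, outcomes delivered,
    #        probability the outcome estimate holds up)
    "bednets":   (250_000, 60_000, 0.90),
    "deworming": (180_000, 90_000, 0.50),
    "cash":      (500_000, 70_000, 0.95),
}

def expected_cost_per_outcome(cost, outcomes, p):
    """Cost per expected outcome, discounted by how likely
    the outcome estimate is to be correct."""
    return cost / (outcomes * p)

ranked = sorted(charities.items(),
                key=lambda kv: expected_cost_per_outcome(*kv[1]))
for name, (cost, outcomes, p) in ranked:
    cpo = expected_cost_per_outcome(cost, outcomes, p)
    print(f"{name:10s} ${cpo:.2f} per expected outcome")
```

The point of the surrounding argument is that this one-pass arithmetic is precisely what stops working in complex domains, where the variables feed back on one another.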

In my time in the rationality and EA communities, I learned a lot about working on problems through a complicated lens, and built an effective analytical toolkit. But in my pursuit to do the most good, I found this toolkit was not always up to the task.


In which quadrant might we expect to be able to ‘do the most good’? I don’t believe the Cynefin framework is a good tool to answer this question. How we experience the qualities of ‘complex’ vs ‘complicated’ depends on our own ability to understand systems and processes. A young child may [subjectively] experience much of the world as largely chaotic, while an engineer may experience much of it as merely complicated.

Let’s instead explore Michael Commons’ Model of Hierarchical Complexity. Paraphrased from Wikipedia:

The model of hierarchical complexity (MHC) is a formal theory and a mathematical psychology framework for scoring how complex a behavior is. It quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. Its forerunner was the general stage model.

Behaviors that may be scored include those of individual humans or their social groupings (e.g., organizations, governments, societies), animals, or machines. It enables scoring the hierarchical complexity of task accomplishment in any domain. It is based on the very simple notions that higher order task actions:

  1. are defined in terms of the next lower ones (creating hierarchy);

  2. organize the next lower actions;

  3. organize lower actions in a non-arbitrary way (differentiating them from simple chains of behavior).

Without getting into the mathematical specifics, I’ll illustrate with a simple example of someone learning language, from less complex to more complex:

  • A child learns words—she ties each word to an object and can communicate simply

  • The child starts to combine words into sentences—the order of the words is important to the meaning being communicated

  • Sentences are strung into paragraphs—multiple sentences communicating something richer than can be shared in just one sentence

  • In school, she learns to put paragraphs into stories or essays, and again—the order of the paragraphs matters. Each paragraph is organized by (and contributes to) the story or essay

  • In university, she starts to conduct literature reviews, identifying themes across multiple essays and journals—seeing how each contributes to a greater paradigm of thought

  • Early in her career, she encounters paradigms that seem to contradict each other, creating a tension she feels until she starts to see a broader system that includes both paradigms

  • Later, she starts to integrate multiple paradigms into a new field. This field is able to make sense of multiple ways of seeing, and gives new purpose to the work being done across several disciplines, universities, and thousands of people
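The language example above can be mimicked with a toy data structure: each level is built from, and organizes, units of the level below, and its hierarchical order is one more than the deepest thing it contains. This is only an illustration of the nesting idea, not the formal MHC scoring procedure:

```python
# Toy sketch of MHC's core notion: higher-order actions are defined in
# terms of, and organize, the actions one level below. Names and levels
# are illustrative, not a formal MHC analysis.

def order(task):
    """Hierarchical order: one more than the deepest sub-task."""
    if not task.get("subtasks"):
        return 0
    return 1 + max(order(t) for t in task["subtasks"])

def word(w):
    return {"name": w}                              # order 0

sentence = {"name": "sentence",                     # order 1: organizes words
            "subtasks": [word("she"), word("reads")]}
paragraph = {"name": "paragraph",                   # order 2: organizes sentences
             "subtasks": [sentence, sentence]}
essay = {"name": "essay",                           # order 3: organizes paragraphs
         "subtasks": [paragraph]}

print(order(word("she")), order(sentence), order(paragraph), order(essay))
```

The non-arbitrary ordering is what the nesting enforces: an essay cannot be scored at order 3 unless it genuinely organizes paragraphs, which in turn organize sentences.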

We could use this model to assess the [more objective] complexity of various goals/​tasks involved in ‘doing the most good’. We could also use this model to assess our own ability to understand and accomplish these goals.

For example, we might categorize a hierarchy of goals in the education field [from more complex to less]:

  1. Shift humanity from a competitive/rivalrous platform to one based on cooperation (a very complex goal, requiring coordinated shifts in both economic systems and cultural narratives)

  2. In service to the more complex goal, we may work on a sub-goal of shifting cultural narrative through re-imagining the role of education in raising our next generation. This may involve coordinating change in parenting, media, and formal education

  3. Looking at just the formal education space, how might we coordinate a movement of change throughout various areas of the educational system? A network may be appropriate (who do we invite, and how do they work together?)

  4. Spotting a gap in our network, we might fund a new not-for-profit company to tackle an important challenge. The creation of this NFP is given purpose because it is organized from a higher level of systemic understanding

  5. What goals do we assign the individuals working together in this NFP? Each must be coordinated to achieve the goals of the NFP

  6. How do I personally schedule out my week? Each of these components is in service to my support of this NFP, and in turn, the network, the movement, and even the broader shift in humanity

Which of these levels is ‘doing the most good’? They are all necessary, though not everyone is suited to managing each level. I believe it is in our collective ability to navigate these nested systems that much of our opportunity lies.

  • There is lost value in the person who feels their work is meaningless and boring—because they don’t see the bigger picture of how their work contributes

  • There is missed opportunity in the local charity that did groundbreaking work in one community, but never shared what they learned with the hundreds of other charities attempting similar things

  • It would be a shame if huge swaths of humanity were spending their hours on things that contributed to our own extinction, unable to connect the dots between their individual actions and the larger systemic effects


I have a sense that EA as a movement often tends towards the lower levels of complexity—finding great opportunities in complicated spaces, where our ability to predict outcomes gives us many good reasons to follow the paths we choose. At higher levels of complexity, we have less ‘evidence’ to believe that any of our work will directly contribute to a better world, yet these spaces are necessary for effective coordination in service of higher-leverage goals.

There is currently an EA Systems Change group on Facebook, with ~900 members. Activity there is sporadic, far from anything that could be called coordinated. What might be possible if this group started to coordinate? The groups that offer grant money to EA orgs—what level of system do they hold as they distribute their funding? In which ways might we improve our collective ability to create a whole that is greater than the sum of its parts?

I suspect this is an important growth edge for our community: entering into dialogue with the network-thinking groups, the complexity wizards, and the systems-change communities. I’ve encountered many of them in my own quest to do the most good: the person mapping networks to create collective awareness, the person who connects two community leaders for a dialogue over brunch, the person who asks a key question that shifts the mission of a charity… These people exist and operate from a different playbook, doing good in ways that are largely invisible, yet very high leverage.

One of the most beautiful things I see in the EA community is its curiosity and willingness to learn and grow. I’m hoping this piece may act as a nudge towards vertical learning, building off the rich horizontal learning that the community does so well. I’d love to hear what you make of this, and I’m happy to connect with anyone interested in further conversations around complexity, leverage, and systems change!