Complexity science is a field which aims to understand systems composed of many interacting parts. I believe it is relevant to a number of EA cause areas, and has several features that could help in realising the goals of effective altruism:
A set of useful concepts and mental models for approaching broad and challenging systemic problems.
Tools such as computer simulation for understanding and analysing complex systems.
An example of building a successful interdisciplinary intellectual movement.
A note about me
I am not an academic expert in complexity science, but I do have several years of experience building computer simulations to model complex systems. As such, this post focuses disproportionately on computer simulation as a tool; it is still intended, however, as an introduction to complexity science more broadly. I strongly believe that complex systems simulation is a powerful tool for institutional decision making. I also have a weaker belief that complexity science could benefit many other EA cause areas. My aim with this post is to introduce the field to more people in EA, in order to get feedback and ideas on where it could be applied within EA.
What is complexity science?
What do ant colonies, the immune system, the economy, the energy grid and the internet all have in common? According to the field of complexity science these are all examples of complex systems: systems composed of many interacting components, which communicate and coordinate with each other, adapt to changing circumstances and can process information in a decentralised manner. These can be contrasted with systems that are merely complicated, but not complex. Take, for example, complicated human-engineered systems such as airplanes or microprocessors: these are impressively intricate, but they have been designed in a top-down, controlled way and their operation is fairly predictable. It is straightforward to connect the high-level and low-level phenomena observed in these systems. Conversely, you can have apparently simple complex systems, such as cellular automata, which are governed by very simple behavioural rules at the micro level, yet whose components interact and combine in ways that are hard to understand and predict at the overall system level. Complexity science is an ambitious endeavour to understand and explain such phenomena.
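To make the cellular automaton example concrete, here is a minimal sketch of an elementary cellular automaton (Rule 30, a standard textbook example; the grid width and step count are arbitrary choices for display purposes). The micro-level rule fits in a few lines, yet the resulting macro-level pattern is famously hard to predict.

```python
# Elementary cellular automaton (Rule 30): each cell updates from its three-cell
# neighbourhood via a fixed 8-entry lookup table encoded in the rule number,
# yet the global pattern that unfolds is hard to predict from the rule alone.
RULE = 30

def step(cells: list[int]) -> list[int]:
    """Apply the rule to every cell, with wrap-around boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # neighbourhood as 0..7
        out.append((RULE >> index) & 1)
    return out

# Start from a single "on" cell and watch structure unfold.
width, steps = 31, 15
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Swapping `RULE` for other values (e.g. 110) gives qualitatively different behaviour from the same micro-level machinery, which is part of what makes these systems a useful laboratory for complexity.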
I have not seen much prior discussion of complexity science within EA, but I think there are several parallels and areas of overlap. One similarity is that much like effective altruism it is highly interdisciplinary; it draws from physics, biology, computer science, artificial intelligence and social science. In this post I aim to give a quick introduction to the world of complexity science and why I believe it holds a lot of value for effective altruism.
The website Complexity Explained contains a short introduction to the core ideas of complexity science, with interactive examples. One key concept is the idea of interactions: a complex system cannot be understood just by aggregating its component parts, since you also have to understand the interactions between those parts. This leads to system characteristics such as nonlinearity, feedback loops and tipping points, meaning a small input to the system can result in wildly different behaviour.
To illustrate this concept, imagine something non-complex such as a bag of marbles. If you want to know the weight of the whole bag then you can simply sum the weights of all the individual marbles. In contrast, if you take a complex system such as a financial market composed of many individual traders, you can’t get the macro-level properties of the market simply by summing the actions of all the traders, since those traders will react to and interact with each other. Another way of thinking about this is “more is different”: if you add more components or agents to a system, you get qualitatively different effects that can’t be predicted just from the sum of those parts.
A follow-on concept is the idea of emergence: some complex systems exhibit surprisingly complex behaviour at the macro level as a result of relatively simple behaviour at the individual level. Ant colonies are a classic example of this. Each individual ant follows very simple rules and has no understanding of the overall intent of the colony, yet ant colonies are able to coordinate to build intricate nests and cooperate on a large scale to forage for food, among other sophisticated behaviours. This shows how complex information processing can emerge from very simple individual rules. Murmurations of starlings are another example of how beautiful and complex-seeming patterns can arise from simple local behaviour. Each bird can only perceive a few of its neighbours, and simply adjusts itself to match their direction and speed; the combined action of all these birds following simple rules results in complex patterns with no central coordination. Knowing that a system exhibits emergence does not necessarily mean we can use this to predict its behaviour; it is, however, a useful way to classify systems in which it is hard to link the micro and macro behaviour.
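The starling example can be sketched as a minimal alignment-only flocking model, a stripped-down cousin of Reynolds’ “boids” (the agent count, neighbourhood size and turn rate below are illustrative choices, not calibrated to real birds). Each agent repeatedly turns part-way towards the average heading of its nearest neighbours; no agent knows the global state, yet a shared direction tends to emerge.

```python
import math
import random

def simulate_alignment(n=50, neighbours=5, steps=200, seed=0):
    """Each agent nudges its heading towards the mean heading of its
    nearest neighbours (by position); returns the final order parameter."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    heading = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # nearest neighbours by squared Euclidean distance (skip self)
            near = sorted(range(n), key=lambda j: (pos[i][0] - pos[j][0]) ** 2
                          + (pos[i][1] - pos[j][1]) ** 2)[1:neighbours + 1]
            # circular mean of neighbour headings
            sx = sum(math.cos(heading[j]) for j in near)
            sy = sum(math.sin(heading[j]) for j in near)
            target = math.atan2(sy, sx)
            # turn 30% of the way towards the local average, wrapping angles
            delta = math.atan2(math.sin(target - heading[i]),
                               math.cos(target - heading[i]))
            new.append(heading[i] + 0.3 * delta)
        heading = new
    # order parameter: 1.0 means perfectly aligned, near 0 means random
    rx = sum(math.cos(h) for h in heading) / n
    ry = sum(math.sin(h) for h in heading) / n
    return math.hypot(rx, ry)

print(f"alignment order parameter: {simulate_alignment():.2f}")
```

With these settings the order parameter usually ends up close to 1 (global alignment), though if the neighbour graph happens to split into disconnected clusters the flock can settle into several competing directions instead.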
Systems which exhibit complexity and emergence cannot be understood by breaking them down via analytical reductionism, but neither are they random, so we cannot rely solely on statistics to understand their macro-level properties. In general such systems prove intractable for analytical mathematical tools which rely on aggregation and simplification (for a more technical introduction to this idea see this textbook). As a partial solution to this problem, complexity science provides helpful concepts for thinking about these systems, such as emergence, hierarchy and self-organization. This isn’t as precise as analytical reductionism, but for many complex systems no more precise approach is available. It requires approaching complex systems holistically; it can be helpful to think “from the bottom up”, modelling the individual components as well as how they combine together.
An example of this can be seen in economics, by contrasting classical economic methods, e.g. in macroeconomics, with a newer approach inspired by complexity science, known as complexity economics. Traditional economic models rely on strong assumptions such as homogeneity (all agents are the same) and rationality (agents act to maximise their own self-interest and have perfect knowledge). These models solve for a global optimum which represents an equilibrium state; however, complex systems such as the economy rarely reach equilibrium in reality, and are better described as non-equilibrium dynamical systems. Complexity economics takes a different approach, focusing on bottom-up modelling of agents. This includes agent-based models (ABMs): computer simulations with heterogeneous agents that interact with each other and follow heuristic behavioural rules under limited knowledge, rather than global optimisation. One way of thinking about this is modelling “verbs” rather than “nouns”, as explained in a recent paper by W. Brian Arthur: capturing dynamic processes rather than static quantities. I think the idea of working with non-equilibrium systems is particularly relevant to EA, since many of the real-world systems EAs are trying to impact, such as global politics or national healthcare systems, are constantly changing and never settle into a static equilibrium.
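As an illustration of this bottom-up style, here is a toy market with two heterogeneous agent types (the agent counts, coefficients and noise levels are all illustrative, not a calibrated model): “fundamentalist” traders push the price towards a fixed fundamental value, while “chartist” traders extrapolate the latest price move. Their interaction keeps the price fluctuating around the fundamental rather than settling at a neat static equilibrium.

```python
import random

def simulate_market(n_fund=60, n_chart=40, steps=300, seed=42):
    """Price forms from the combined demand of two heterogeneous agent types."""
    rng = random.Random(seed)
    fundamental = 100.0
    prices = [100.0, 101.0]          # two starting points so a trend exists
    for _ in range(steps):
        p, prev = prices[-1], prices[-2]
        # fundamentalists trade towards the fundamental value (plus noise)
        fund_demand = sum(0.01 * (fundamental - p) + rng.gauss(0, 0.05)
                          for _ in range(n_fund))
        # chartists extrapolate the most recent price move (plus noise)
        chart_demand = sum(0.05 * (p - prev) + rng.gauss(0, 0.05)
                           for _ in range(n_chart))
        # simple price impact: aggregate excess demand moves the price
        prices.append(p + 0.01 * (fund_demand + chart_demand))
    return prices

prices = simulate_market()
print(f"price range over the run: {min(prices):.1f} to {max(prices):.1f}")
```

Tilting the population towards chartists strengthens the trend-following feedback loop, which in richer models of this kind is one route to bubbles and crashes.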
Complexity economics is still far from the mainstream of economics; however, it has been gaining acceptance in recent years, particularly since the 2008 financial crisis, which drew a lot of criticism of mainstream macroeconomics for failing to foresee the crisis (the financial system wasn’t even included in most macroeconomic models before then). Complexity economics was recently featured in two Nature Reviews articles here and here.
Complex systems tend to be characterised by “fat tail” distributions which include extreme events, caused by feedback loops and cascade effects arising from interactions between components and systems. This can take the form of “cascading failures”. A classic example is the major power blackout in the Northeastern US and Canada in 2003. This was initially triggered by short circuits in transmission lines in Ohio due to overgrown trees; the outage was then compounded by a software bug in the alarm system. Due to insufficient coordination and containment strategies, the blackout spread, overloading the power grid across much of the Northeastern and Midwestern US as well as the Canadian province of Ontario. This cascading failure was due to the interaction of multiple systems: a forest ecological network, a power grid made up of physical cables and software systems, and a human coordination network.
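The cascade mechanism can be sketched with a toy load-redistribution model on a random network (a much-simplified cousin of capacity models like Motter and Lai’s; the network size, degree and headroom values are illustrative). When a node fails, its load is shed onto surviving neighbours, which can push them past capacity in turn, so a single trip can fell a large part of the network when spare capacity is thin.

```python
import random

def cascade(n=100, degree=4, headroom=1.2, seed=3):
    """Fail node 0, redistribute load to neighbours, count total failures."""
    rng = random.Random(seed)
    # build a random network: each node linked to at least `degree` others
    neigh = {i: set() for i in range(n)}
    for i in range(n):
        while len(neigh[i]) < degree:
            j = rng.randrange(n)
            if j != i:
                neigh[i].add(j)
                neigh[j].add(i)
    load = {i: 1.0 for i in range(n)}
    capacity = {i: headroom * load[i] for i in range(n)}
    failed = {0}                      # initial trigger: node 0 trips
    frontier = {0}
    while frontier:
        nxt = set()
        for f in frontier:
            alive = [j for j in neigh[f] if j not in failed]
            for j in alive:
                load[j] += load[f] / len(alive)   # shed load to survivors
                if load[j] > capacity[j]:
                    nxt.add(j)                    # overloaded: fails next
        failed |= nxt
        frontier = nxt
    return len(failed)

print(f"{cascade()} of 100 nodes failed after a single trip")
```

Raising `headroom` (spare capacity per node) contains the failure at the trigger node, which mirrors the real-world trade-off between grid efficiency and resilience.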
The ongoing Covid-19 pandemic is another example of interacting complex systems, for example disease propagation amongst interacting agents, international travel patterns and the economy.
Of particular relevance to EA is how this may apply to analysing potential global catastrophic risks (GCRs) or existential risks. Most of the plausible scenarios for such risks involve multiple systemic problems interacting with each other, with hard to predict feedback loops and cascading failures.
Complexity science includes many more areas which I don’t have space to cover in depth here: information theory, evolution, genetic algorithms, fractals, chaotic systems and network theory, for example. All of these exist as academic fields in their own right, but complexity science weaves a thread through them all. It can come across as overly broad when viewed through the existing ontology used in academia to demarcate separate areas of study (physics, chemistry, biology, economics, etc.). However, it is focused on a particular set of ideas and phenomena that recur across many of these areas, for example interactions and micro-macro scale relationships.
The term “Complexity Science” as an umbrella for this interdisciplinary set of ideas is only a few decades old; however, these ideas have a long lineage in other areas, particularly parts of physics such as dynamical systems and chaos theory, as well as graph theory and network science. Many people use this lens to study such systems even if they don’t use the term complexity science. The process of connecting all these different domains and ideas is still underway, for example formalising the notion of complexity, which arguably remains an informal concept. However, I would propose that there is a set of strong underlying themes, ideas and tools which could be beneficial for effective altruism.
A particularly promising tool from complexity science is computer simulation, including agent-based models. Since this is the specific area I have the most experience in, I will spend much of the rest of this post exploring how computer simulation could be used within effective altruism. It’s important to note, though, that computer simulation is just one tool from complexity science, and not all complexity scientists would place the same emphasis on it that I do. There are other aspects of complexity science which could be valuable for EA; for example, simply being able to recognise that you are dealing with a complex, emergent system is useful, because it is likely to change which assumptions are appropriate to make, your priors on what behaviour you expect to observe, and the viability of any potential solutions.
Applications within Effective Altruism
When I think about the biggest challenges in effective altruism, many of them involve intervening in complex systems. Take health interventions in the developing world: these can have second- and third-order consequences which are hard to reason about, and hard to test with RCTs, given the complex interactions between different systems. A lot of the time the straightforward “non-complex” interventions have already been tried, and all that is left are the thornier, more complex challenges.
I think there is an opportunity for effective altruism minded people to work on building and advocating for simulations of complex social systems, and using these to explore the effects of policy interventions. This has a lot of overlap with the area of Improving Institutional Decision Making (IIDM). Our world is more interconnected than ever before, and many of the most pressing challenges of the 21st century involve complex systems, for example pandemics, misinformation in social networks, and the critical infrastructure which enables modern life, such as the transport network, the internet and the power grid.
There is also a clear link to longtermism. Parts of the complexity science community are exploring foundational questions such as the origin of life. The Interplanetary Project at the Santa Fe Institute (a well known complexity science organisation), touches on many themes which would be familiar to anyone interested in longtermism within EA.
Below I have sketched out a few examples of where I think complexity science and simulation are relevant to EA cause areas. As a warning: these are just initial ideas to prompt discussion and are not necessarily fully thought through.
Cause Areas
Biosecurity
Complex systems simulation has already had a large impact on decision making during the Covid-19 pandemic (discussed in more detail later in this post). In my view there are many untapped opportunities for better simulations to help handle such events. These could explore the 2nd order unanticipated effects of pandemics and the interactions between systems such as the disease itself, the economy and the fragility or resilience of global supply chains. Governments could use simulations to test out interventions and prevention policies, such as the move to remote work. Simulations could also model hypothetical scenarios such as an engineered pathogen with a range of different possible disease parameters.
Governance and institutional decision making
IIDM is perhaps the most obvious cause area in which to apply simulations, as they can be effective tools for decision making, promoting collaboration and a shared understanding of a situation. Simulations allow decision makers to test the effects of novel policies. In a more meta way, ABMs can also simulate the mechanisms of decision making and voting themselves. There is also the closely related area of “Operations Research”, which focuses on improving decision making and planning, and has already made use of tools such as ABMs and systems thinking.
Economics
The rapidly growing field of complexity economics provides additional capabilities beyond the limitations of traditional economic modelling. Such techniques could allow a better understanding of the robustness of the economy and financial system, and help to understand the potential harm of black swan events such as the 2008 financial crisis. They could help to distinguish between short term variation and noise in the economy and financial markets, versus longer term structural trends. OpenPhil already mentions improving macroeconomic policy as a focus area.
A speculative application of complexity economics and ABMs is building simulations of hypothetical future worlds, which could help to understand what aspects of our current economic theory might still hold true in a vastly different context.
Climate change
Climate change is a promising area of application for complex systems modelling, in particular complexity economics. Simulations can help us understand the complex interactions and feedback loops between the economy and natural systems. Existing climate models are already complex systems simulations which factor in multiple geological and climate dynamics and feedback effects; however, at present there are few serious efforts to combine these with economic models and simulations of social systems. This paper by Doyne Farmer, one of the pioneers of complexity economics, argues that complexity economics and ABMs can address the shortcomings of existing economic models of climate change. An economic agent-based model such as the EUROMOD tax model could be used to understand the impact of policies such as a carbon tax.
Nuclear weapons
Agent-based models could be built to simulate hypothetical nuclear weapons proliferation scenarios, modelling incentives and behaviours at the agent level, with agents representing different countries or other actors. This could highlight which policies may have counterintuitive or unexpectedly negative effects. It would be a form of dynamic game theory simulation, and could help test the robustness of different equilibria and the sensitivity to different parameters and assumptions. Related work was conducted during the Cold War, which could be expanded upon with more data and computational power.
AI Safety
It is plausible that the arrival of powerful AI, whether gradual or sudden, could be modelled as a complex system of many interacting humans and AI agents, similar to the scenarios discussed in “What failure looks like” by Paul Christiano. This could look like an “AI ecosystem” of powerful but narrow AI agents. The ARCHES paper by Andrew Critch and David Krueger sets out many such scenarios, including the challenges of “multi-multi” delegation and control between multiple humans and AI agents. Viewing these multi-agent dynamics as a complex system seems like a natural way to think about this.
I am not completely sure that we understand these hypothetical scenarios well enough right now to build a useful simulation, however even the process of trying to build such a simulation could promote new ways of thinking about the problem. Similarly to simulating nuclear weapon scenarios, simulations may illuminate some of the power dynamics involved with developing military AI, for example race dynamics and potential destabilising effects.
Complexity science also overlaps directly with the fields of AI and cognitive science. For example this article explores the links between deep learning and complexity / chaos. I am optimistic that there are some interesting links to AI safety here, however this requires more investigation to flesh out.
Building more complex and realistic simulation environments for AI training is an active area of AI research, which clearly overlaps with complex systems modelling. Multi-agent simulations are particularly relevant here. The development of increasingly sophisticated training simulations has many implications for AI safety, and this could potentially increase the risk of poorly aligned AI unless approached in a careful way.
Additionally, it strikes me that in order to avoid a worst-case scenario when transformative AI does arrive, we want to ensure that it does not destroy what we might describe as “complex life”, yet another example of a complex system. So perhaps there are deep links between measuring complexity and ensuring that AI agents preserve the existing complex life in our world.
Longtermism
A big opportunity I see for complexity science applied to longtermism is in understanding the dynamics of societies which are distant from us in time, either in the past or the future. Simulations can be used to encode assumptions and hypothetical scenarios even with little or no hard data. Simulations have already been used in archaeology to study past societies, for example the influential Artificial Anasazi agent-based model, which simulates an ancient Native American civilisation. Importantly, simulations could be used to understand long-term historical trends and previous societal collapses (e.g. this paper), which seems very relevant for understanding how to avoid such collapses in future.
You could also imagine modelling hypothetical future worlds, for example a simulation incorporating a high amount of detail about a future society, similar to Robin Hanson’s Age of Em, but encoded in an agent-based model. This would impose a degree of rigour, and would test the internal consistency of any hypotheses about what such a world would look like. There is even a nascent project to create ABMs of worlds from science fiction, to provide interactive and dynamic explorations of these worlds.
Tools—Simulation and computational experiments
I have explored several possible applications for complex systems modelling and simulation within EA cause areas; it is worth digging into why these are appropriate tools. Complex systems are often unsuited to standard mathematical modelling tools which rely on aggregation and simplification, so we need a different way of modelling them. Computer simulations are an alternative approach: agents and components of a system are represented individually, and the interactions between them are simulated directly, capturing nonlinear effects such as feedback. A sub-field of this type of simulation is agent-based modelling, which has been employed in fields such as biology, epidemiology, social science and economics. Computer simulation can be viewed as a new way of doing science, alongside experiment and theory. It can help us study emergent effects, since we can recreate in the (virtual) lab how complex macro-behaviours result from micro-level behaviour at the agent level.
There are many benefits to using computer simulations for decision making, and indeed many of these benefits are shared with traditional mathematical models. In an article called Why Model?, Joshua Epstein, one of the pioneers of agent-based models for social science, sets out 16 separate reasons why computer models can be useful beyond simply predicting the future, which is often assumed to be the sole purpose of modelling. One major advantage is that modelling forces us to formalise our understanding of a system. As Epstein points out, without explicitly defined models we typically have to rely on informally specified mental models of complex systems, so there is value in attempting to formalise our mental models, exposing any contradictory assumptions and explicitly combining those assumptions with observed data. These arguments apply to all mathematical modelling, but I think they are particularly applicable to ABMs, where we may have some approximate understanding of the behaviour of individual agents, consisting of simple rules, and we want to understand the implied consequences of those rules at the macro level. Epstein is a major proponent of using bottom-up models to understand social systems, a process he calls generative social science, whose motto is “If you can’t grow it, then you don’t understand it”.
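A canonical example of this generative style is Schelling’s segregation model, which Epstein himself often cites. Here is a compact sketch (grid size, agent counts and tolerance threshold are the usual illustrative choices): agents who are happy with as few as 30% same-type neighbours relocate when unhappy, and strong segregation is “grown” from this mild individual preference.

```python
import random

SIZE, THRESHOLD = 20, 0.3   # 20x20 torus; agents want >= 30% like neighbours

def like_share(grid, x, y):
    """Fraction of a cell's occupied neighbours that match its own type."""
    me, same, occupied = grid[x][y], 0, 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            v = grid[(x + dx) % SIZE][(y + dy) % SIZE]
            if v:
                occupied += 1
                same += v == me
    return same / occupied if occupied else 1.0

def run(sweeps=40, seed=7):
    rng = random.Random(seed)
    cells = [1] * 180 + [2] * 180 + [0] * 40   # two agent types + empty cells
    rng.shuffle(cells)
    grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]
    for _ in range(sweeps):
        empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
                   if not grid[x][y]]
        unhappy = [(x, y) for x in range(SIZE) for y in range(SIZE)
                   if grid[x][y] and like_share(grid, x, y) < THRESHOLD]
        rng.shuffle(unhappy)
        for x, y in unhappy:              # unhappy agents move to a random empty cell
            ex, ey = empties.pop(rng.randrange(len(empties)))
            grid[ex][ey], grid[x][y] = grid[x][y], 0
            empties.append((x, y))
    occupied = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y]]
    return sum(like_share(grid, x, y) for x, y in occupied) / len(occupied)

print(f"average like-neighbour share: {run():.2f}")
```

The final average like-neighbour share typically ends up well above the 30% that any individual agent actually demands, which is exactly the kind of counterintuitive macro-level result that “growing” a system can expose.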
Computer simulations allow us to test interventions in a system. In particular, we can test interventions at a more granular level, which may affect different agents in different ways. This allows governments to test policies in a safe simulated environment before implementing them in the real world. For example, with Covid-19 a detailed simulation of a country’s population could be used to test interventions such as lockdowns, encoding the assumption that a certain fraction of people won’t adhere to restrictions. We can also model heterogeneous populations with varying behaviours and different vulnerabilities to the virus. This is particularly important with pandemics such as Covid-19, since often the only data we have access to are macro variables such as total caseload, number of deaths and reproduction rate. These aggregate variables are, however, driven by social networks and the behaviour of agents within them at the micro level, which is in turn affected by policy. Connecting aggregate data to the individual behaviour of populations is something that agent-based models are well suited to.
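A stripped-down sketch of this idea follows (random mixing rather than a realistic contact network, and every parameter value is illustrative, not a calibrated Covid model): each agent is Susceptible, Infected or Recovered, and an `adherence` parameter encodes the assumption that only some fraction of agents comply with a contact-reducing intervention.

```python
import random

def epidemic(n=2000, contacts=8, p_transmit=0.05, p_recover=0.1,
             adherence=0.0, reduced_contacts=2, steps=200, seed=11):
    """Agent-based S-I-R epidemic with partial adherence to an intervention."""
    rng = random.Random(seed)
    state = ["S"] * n
    for i in rng.sample(range(n), 10):            # seed ten initial infections
        state[i] = "I"
    complies = [rng.random() < adherence for _ in range(n)]
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] != "I":
                continue
            # compliant agents meet far fewer people while infectious
            k = reduced_contacts if complies[i] else contacts
            for j in rng.sample(range(n), k):     # random mixing each step
                if state[j] == "S" and rng.random() < p_transmit:
                    new[j] = "I"
            if rng.random() < p_recover:
                new[i] = "R"
        state = new
    return state.count("I") + state.count("R")    # total ever infected

print("no intervention:", epidemic(adherence=0.0), "of 2000 ever infected")
print("80% adherence  :", epidemic(adherence=0.8), "of 2000 ever infected")
```

Because compliance is an agent-level property, extensions such as age-dependent vulnerability or clustered non-adherence are straightforward to add, which is exactly where aggregate compartmental models struggle.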
A further advantage of this type of model is that they tend to be interpretable, and the agent-level logic can correspond neatly to our intuitive understanding of the mechanisms of the system we want to model. This contrasts with other methods of predictive modelling, such as aggregated analytical mathematical models, or black-box machine learning methods such as deep neural networks, where it can be difficult to understand the internal workings of the model, or map this to our own mental models.
Computer simulations can be used to aid decision making through the process of wargaming, where hypothetical scenarios are tested collaboratively in a simulated environment. This can foster mutual understanding of a problem between multiple parties. When such simulations are combined with interactive visualisations and user interfaces, they allow decision makers to try out many different scenarios and build intuitions about the system being modelled and how it may respond to certain actions. Most models carry a lot of uncertainty and require assumptions to be made; however, the right user interfaces can expose these assumptions alongside unknown parameters, allowing decision makers to tweak them and put in their own assumptions and parameter estimates.
Nowadays increasingly large and complex simulations of the real world are being built for such planning and training activities, often referred to as “digital twins”, which combine large amounts of data on real world systems with simulation models.
Covid-19 provides an informative example of how such simulations can be employed to improve critical decision making. Early in the pandemic, agent-based models such as the one built by Neil Ferguson and his team at Imperial College heavily influenced the UK government’s response, and were credited as one of the factors that eventually pushed it to go ahead with lockdown, due to the large number of fatalities the model predicted. Admittedly, several criticisms were levelled against this model, such as that it didn’t take into account how people would adjust their behaviour in the absence of government intervention, and that the model code was over-complicated and not well tested. These criticisms are accurate to some extent, but they apply only to the specific model in question and not to ABMs more generally. Addressing them by building improved ABMs is a huge opportunity for tackling future pandemics.
There is a lot of potential value in combining models from different domains to understand the connections between real-world systems. ABMs such as the Imperial model only take into account disease propagation between abstract agents; future ABMs could incorporate detailed behavioural models of how people respond to infection levels. Many more dynamics could be added, such as realistic population movement based on data from the UK population. I recently worked on a Covid-19 ABM which combines census and population-movement data with a disease model, as part of the Royal Society’s RAMP initiative. If the models used by governments had taken into account factors such as population movement and behavioural response, this might have narrowed the uncertainty in the forecasts of disease spread.
I should sound a note of caution here: there will always be uncertainty in models, both in the data and in whether the model logic accurately reflects real-world dynamics. Some sources of uncertainty may dominate others; for example, in the early days of the pandemic the lack of knowledge of disease parameters (e.g. R0) was perhaps the largest source of uncertainty, so there would have been little value in adding more detail and refining other aspects of the model until this had been narrowed down. However, models themselves can help to investigate which sources of uncertainty matter most, using techniques such as sensitivity analysis.
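A minimal sketch of one-at-a-time sensitivity analysis on a toy deterministic SIR model illustrates the idea (the parameter ranges below are illustrative, not estimates for any real disease): vary each parameter across its plausible range while holding the others at baseline, and compare how much the output swings.

```python
# One-at-a-time sensitivity analysis on a toy compartmental SIR model.
# The output of interest is the final epidemic size (fraction ever infected).
def final_size(r0, recovery_days, steps=1000):
    s, i, r = 0.999, 0.001, 0.0
    beta, gamma = r0 / recovery_days, 1.0 / recovery_days
    for _ in range(steps):                 # simple daily-step dynamics
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

baseline = {"r0": 2.5, "recovery_days": 7.0}
ranges = {"r0": (1.5, 4.0), "recovery_days": (5.0, 10.0)}

for name, (lo, hi) in ranges.items():
    # evaluate the model at the ends of each parameter's plausible range
    outputs = [final_size(**dict(baseline, **{name: value}))
               for value in (lo, hi)]
    spread = max(outputs) - min(outputs)
    print(f"{name:14s} swings final epidemic size by {spread:.2f}")
```

In this toy model the final size is driven almost entirely by R0, while the recovery time mostly rescales the epidemic in time, so narrowing down R0 would deserve the effort first. Real sensitivity analyses use more systematic designs (e.g. Sobol or Latin hypercube sampling), but the logic is the same.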
Neil Ferguson and his team did commendable scientific work with the tools they had available; however, their Covid model was cobbled together in a short amount of time by repurposing code from a decade-old flu model. Investing in scalable and flexible agent-based models ahead of time would give us sophisticated models that would be a major asset in future pandemics or other unforeseen emergencies requiring government intervention in complex systems.
An even larger opportunity is in combining disease dynamics with an agent-based model of the economy. The real challenge of Covid-19 policy was making difficult tradeoffs between public health and the economy over both the short and long term. Combining disease dynamics and economics in the same simulation could have helped to forecast and quantify the true consequences of government action or lack thereof.
Creating modern user interfaces could further increase the value of such simulations. These could allow decision makers to interact with simulations directly, rather than relying on a slow feedback loop of asking scientists to re-run the model with different parameters. This would let people inject their own assumptions and priorities into a model, and build up an intuition for how a given system behaves in different scenarios.
In my view, the value of ABMs lies not in perfectly predicting the future, but in forcing people to face up to the stark reality of what their current data and assumptions are telling them. Computational models of complex systems are rarely able to produce accurate point predictions of future events, due to uncertainty and chaotic effects; however, they can generate useful distributions of plausible outcomes.
Complex systems scientists led the way in advocating for a well-informed Covid-19 response from the very early days of the pandemic. Yaneer Bar-Yam, a complexity scientist at the New England Complex Systems Institute, was one of the earliest and most prominent advocates of a “zero covid” strategy, through his website endcoronavirus.org. He was strongly recommending controls on international movement as early as January 2020.
Reflections on modelling and simulation
From a technical point of view, what I am recommending are large-scale simulations which incorporate multiple different domains. These techniques are still relatively new; it’s only recently that enough computational power and data have become available to build simulations of sufficient scale to be realistic. They have a lot of promise, but it is still early days.
Successfully building and deploying simulations to improve decision making involves overcoming some daunting scientific and engineering challenges. It’s not just a case of writing more code and running with more agents and more data. Most academic research with ABMs sticks to simplified “idea models”, which are easier to reason about and validate; in many ways the art of modelling is about simplifying a problem down to only the most important details. Most complexity scientists appreciate the value of simple agent-based models, and some would caution against adding too much detail to simulations, since that could detract from generalisable insights. There are advantages to the increased realism that comes from larger datasets and more detailed simulations, although it can be tricky to know when the additional detail is beneficial, as it introduces more degrees of freedom and more potential for error. I would argue that any model at all is an improvement over relying on implicit mental models, with the caveat that it is not always obvious how to integrate the model output into your overall understanding of the problem, particularly if you have multiple models and sources of data which conflict with each other. If this is handled properly then larger, more detailed models can add to scientific insight rather than detract from it.
Another challenge is that simulations are less portable between problems than alternative techniques such as deep learning. For a deep learning model you can develop the learning algorithm once, then apply it to problems in different areas by training on new data. Writing a simulation, however, requires encoding domain knowledge into the model, so typically this has to be done separately for each new domain. A related problem is that the software tools for building ABMs have not received nearly as much investment as those for deep learning, so it remains difficult to build ABMs that scale to large numbers of agents, i.e. the millions of agents needed to represent the population of a whole country. The domain experts who have the knowledge to build an accurate model may not have the software engineering skills to implement that knowledge in a scalable simulation. This lack of tooling is one of the factors holding back wider adoption of simulations for decision making.
In my view simulation models are one of the best ways to formally combine insights and data sources from multiple domains, showing the implied consequences of our assumptions and our limited understanding of a real world system. They can also help non-technical decision makers leverage knowledge from multiple domain experts. However they are not a crystal ball, and there are clear limits to their predictive abilities. This is partly due to incomplete understanding of the system dynamics, but even well defined models are subject to the phenomenon of chaos, popularly known as the butterfly effect. If we don't have perfect knowledge of the starting conditions of a complex system then the simulated trajectory will quickly diverge from reality, another reason why distributions are favoured over point predictions. This is why weather forecasts are only accurate about one week out, even though the underlying physical equations are very well understood. Yet while weather is unpredictable, climate is broadly predictable for many decades out (or at least the direction of change is), due to structural dynamics that are fairly well understood. So I believe there is a lot of valuable scientific work to be done in understanding what is analogous to weather (unpredictable fluctuations) versus climate (structured and predictable) in complex social systems such as the economy. For example, could the 2008 financial crisis have been predicted ahead of time? It seems to have been caused by deep structural features of the financial system in the run-up to the crisis, but perhaps it only looks inevitable in hindsight.
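The divergence that chaos causes is easy to demonstrate. The sketch below (an illustrative example of mine, using the logistic map as a stand-in for a weather model) runs two simulations whose starting conditions differ by one part in a billion:

```python
# Two runs of the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4),
# started a billionth apart. The perturbation grows roughly exponentially
# until the two trajectories decorrelate completely.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)  # initial condition perturbed by 1e-9

early = abs(a[5] - b[5])                                # still tiny
late = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # order of the attractor's width
print(early, late)
```

The two runs agree closely for the first few steps and then become unrelated, which is why ensembles of runs, summarised as distributions over outcomes, are more informative than any single trajectory for chaotic systems.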
While simulations may have some predictive power, it is often better to think of them as tools for exploring and generating intuition about a system.
A model of movement building
In the Nature Reviews article on complexity economics, W. Brian Arthur explains that he considers complexity science to be more of a "movement within science" than a science itself. In many ways this reminds me of effective altruism, although complexity science has been around for longer, over three decades. I think there is a lot we can learn from complexity science as an example of building a pioneering interdisciplinary intellectual movement from scratch, one which has achieved considerable success and influence.
A key part of the complexity science movement is the Santa Fe Institute (SFI), an independent research organisation founded in the 1980s by physicists from the Los Alamos National Laboratory. It is arguably the first and most prominent complexity science organisation, and it can take a lot of credit for developing and popularising the ideas behind complexity science. SFI has effectively created a new interdisciplinary identity from scratch, similar to EA. They have also been very influential with key decision makers; many prominent academics, business people and politicians have visited SFI over the years. This flow of visitors, as well as of new postdocs, is a key component of their success, allowing a continuous transfer of ideas in and out.
Something I admire about SFI is their willingness to tackle ambitious, broad scientific problems and make tangible progress on them. This has resulted in notable scientific achievements such as the discovery of scaling laws in biology. It is all the more impressive that they have built a successful scientific research organisation outside the confines of the traditional university system, with minimal bureaucracy, liberated from many academic incentives so that people can focus on their research. This includes not having tenure, with a more flexible evaluation of what success looks like. Researchers are empowered to organise their own programs and events, and there are no departmental boundaries or labelled disciplines, which promotes interdisciplinary work.
A particular strength of the Santa Fe Institute, in my opinion, is PR and outreach. They have a highly polished website with great media content, and they offer many learning resources and introductory courses. They have also fostered close connections to arts and culture, including music, which is perhaps unusual given their identity as a theoretical research institute. David Krakauer, the current SFI president, speaks in an almost poetic manner, yet he conveys a real sense of excitement and wonder, and I think this mode of communication can be inspirational to a lot of people. Clearly there is a balance to be struck in knowing when to be rigorous and cautious versus poetic and metaphorical. Whatever SFI is doing, it seems to be working: they appear to be well funded and have been running successful research programs for decades. Part of this success may be attributed to intentionally maintaining a small size of around 30 resident researchers, which has allowed them to stay agile. SFI relies heavily on private donations, supplemented by government funding, which gives flexibility but requires a prestigious reputation, although this is a similar situation to many EA organisations.
What can effective altruism learn from this success? In fact, Stefan Torges' EA forum post on "Ingredients for creating disruptive research teams" mentions SFI and some of the lessons that can be drawn for EA research teams. Perhaps replicating this level of success and prominence is simply a matter of time: building up a reputation and a field over several decades. Either way, it seems instructive to look at organisations like the Santa Fe Institute and the field of complexity science more broadly.
Conclusion
I have tried to give a taste of what complexity science is, how complex emergent phenomena can arise from simple interactions, how systemic problems can be approached in a holistic and bottom-up way, and how tools such as agent-based modelling might be useful and relevant to EA causes. Although I have focused more on complex systems modelling and simulation as a tool, I believe there are many aspects of complexity science which may be of value to EA.
Beyond just a set of tools, I think complexity science provides an interesting example of an intellectual movement which has grown successfully over the last few decades. It has created an interdisciplinary identity from scratch, and it provides a valuable philosophy and approach for tackling large, foundational scientific questions. The Santa Fe Institute is a great example of this, but there are other well regarded independent research institutions, such as the New England Complex Systems Institute and the Complexity Science Hub Vienna, and many universities around the world now have their own complex systems research groups. Of particular relevance to EA is the recently launched Simon Institute, an EA organisation applying many ideas from complexity science, among other approaches, to improving longtermist governance.
I am very keen to receive any comments or feedback on the ideas in this post. If anyone has ideas for how complexity science could be applied to EA causes, or is interested in collaborating on related projects in this area, please get in touch.
I believe working to improve complex systems modelling and simulation, either by building tools for modellers or applying simulations directly, could be a very high impact career path. If you are interested in exploring this, regardless of your background or existing skill set, then I am happy to discuss and advise.
Many thanks to all the people who reviewed drafts of this post: Max Stauffer, Konrad Seifert, Nora Ammann, Tamara Borine, Adam Bricknell, Michelle Hutchinson and Vicky Yang.
In particular thank you to Max for prompting me to write this post in the first place.
I have included some links and resources for further reading below.
What complexity science and simulation have to offer effective altruism
Summary
Complexity science is a field which aims to understand systems composed of many interacting parts. I believe it is relevant to a number of EA cause areas, and has several features that could help in realising the goals of effective altruism:
A set of useful concepts and mental models for approaching broad and challenging systemic problems.
Tools such as computer simulation for understanding and analysing complex systems.
An example of building a successful interdisciplinary intellectual movement.
A note about me
I am not an academic expert in complexity science, however I have several years of experience building computer simulations to model complex systems. As such, this post focuses disproportionately on computer simulations as a tool, however it is still intended as an introduction to complexity science more broadly. I strongly believe that complex systems simulation is a powerful tool for institutional decision making. I also have a weaker belief that complexity science could benefit many other EA cause areas. My aim with this post is to introduce this field to more EA people in order to get feedback and ideas on where this could be applied within EA.
What is complexity science?
What do ant colonies, the immune system, the economy, the energy grid and the internet all have in common? According to the field of complexity science these are all examples of complex systems, composed of many interacting components, which communicate and coordinate with each other, adapt to changing circumstances and can process information in a decentralised manner. These can be contrasted with systems that are merely complicated, but not complex. Take for example complicated human-engineered systems such as airplanes or microprocessors, these are impressively intricate but they have been designed in a top down, controlled way and their operation is fairly predictable. It is straightforward to connect the high level and low level phenomena observed in these systems. Conversely you can have apparently simple complex systems, such as cellular automata, which are made up of very simple behavioural rules at the micro level, but the individual components interact and combine in a way that is hard to understand and predict at the overall system level. Complexity science is an ambitious endeavour to understand and explain such phenomena.
I have not seen much prior discussion of complexity science within EA, but I think there are several parallels and areas of overlap. One similarity is that much like effective altruism it is highly interdisciplinary; it draws from physics, biology, computer science, artificial intelligence and social science. In this post I aim to give a quick introduction to the world of complexity science and why I believe it holds a lot of value for effective altruism.
The website Complexity Explained contains a short introduction to the core ideas of complexity science, with interactive examples. One key concept is the idea of interactions, a complex system cannot be understood just by aggregating its component parts, since you also have to understand the interactions between these parts. This leads to system characteristics such as nonlinearity, feedback loops and tipping points, meaning a small input to the system can result in wildly different behaviour.
To illustrate this concept imagine something non-complex such as a bag of marbles. If you want to know the weight of the whole bag then you can simply sum the weight of all the individual marbles. In contrast, if you take a complex system such as a financial market composed of many individual traders, you can’t get the macro level properties of this market simply by summing the actions of all the traders, since these traders are going to react and interact with each other. Another way of thinking about this is “more is different”, if you add more components or agents to a system then you get qualitatively different effects that can’t be predicted just by the sum of those parts.
A follow-on concept is the idea of emergence; some complex systems exhibit surprisingly complex behaviour at the macro level as a result of relatively simple behaviour at the individual level. Ant colonies are a classic example of this. Each individual ant is following very simple rules and has no understanding of the overall intent of the colony, however ant colonies are able to coordinate to build intricate nests and cooperate on a large scale to forage for food, among other sophisticated behaviours. This shows how complex information processing can emerge from very simple individual rules. Murmurations of starlings are another example of how beautiful and complex-seeming patterns can arise from simple local behaviour. Each bird can only perceive a few of its neighbours, and will simply adjust itself to match their direction and speed, however the combined action of all these birds following simple rules results in complex patterns with no central coordination. Knowing that a system exhibits emergence does not necessarily mean we can use this to predict its behaviour, however it is a useful way to classify systems in which it is hard to link the micro and macro behaviour.
Systems which exhibit complexity and emergence cannot be understood by breaking them down via analytical reductionism, but neither are they random systems, so we cannot rely solely on statistics to understand their macro-level properties. In general such systems prove intractable for analytical mathematical tools which rely on aggregation and simplification, for a more technical introduction to this idea see this textbook. As a partial solution to this problem complexity science provides us with helpful concepts for thinking about these systems, such as emergence, hierarchy and self organization. This isn’t as precise as analytical reductionism, but no other approaches will work. This requires approaching complex systems in a holistic way, it can be helpful to think “from the bottom up”; modelling the individual components as well as how they combine together.
An example of this can be seen in economics, by contrasting classical economic methods, e.g. in macroeconomics, with a new approach inspired by complexity science, known as complexity economics. Traditional economic models rely on strong assumptions such as homogeneity (all agents are the same) and rationality, agents act to maximise their own self interest and have perfect knowledge. These models solve for a global optimum which represents an equilibrium state, however complex systems such as the economy rarely reach equilibrium in reality, they are examples of non-equilibrium dynamical systems. Complexity economics takes a different approach, which focuses on bottom-up modelling of agents. This includes agent-based models (ABMs), which are computer simulation models with heterogeneous agents that can interact with each other and follow heuristic behavioural rules, with limited knowledge rather than global optimisation. A way of thinking about this is modelling “verbs” rather than “nouns”, as explained in a recent paper by W. Brian Arthur, which means capturing dynamic processes rather than static quantities. I think the idea of working with non-equilibrium systems is particularly relevant to EA, since many of the real world systems EAs are trying to impact, such as global politics or national healthcare systems, are constantly changing and never settle down into a static equilibrium.
Complexity economics is still far from the mainstream of economics, however it has been gaining acceptance in recent years, particularly since the 2008 financial crisis, which resulted in a lot of criticism for mainstream macroeconomics for failing to foresee the crisis (the finance system wasn’t even included in most macroeconomic models before then). Complexity Economics was recently featured in two Nature Reviews articles here and here.
Complex systems tend to be characterised by “fat tail” distributions which include extreme events, caused by feedback loops and cascade effects due to interactions between components and systems. This can take the form of “cascading failures”. A classic example of this is the major power blackout in the Northeastern US and Canada in 2003. This was initially triggered by short circuits in transmission lines in Ohio due to overgrown trees; the outage was then compounded by a software bug in the alarm system. Due to insufficient coordination and containment strategies the blackout spread to overload the power grid across much of the Northeastern and Midwestern US as well as the Canadian province of Ontario. This cascading failure was due to the interaction of multiple systems; a forest ecological network, the power grid made up of physical cables and software systems as well as a human coordination network.
The ongoing Covid-19 pandemic is another example of interacting complex systems, for example disease propagation amongst interacting agents, international travel patterns and the economy.
Of particular relevance to EA is how this may apply to analysing potential global catastrophic risks (GCRs) or existential risks. Most of the plausible scenarios for such risks involve multiple systemic problems interacting with each other, with hard to predict feedback loops and cascading failures.
Complexity science includes many more areas which I don’t have space to cover in depth here, for example information theory, evolution, genetic algorithms, fractals, chaotic systems and network theory. All of these areas exist as academic fields in their own right, however complexity science weaves a thread through them all. It can come across as overly broad when viewed through the existing ontology used in academia to demarcate separate areas of study such as physics, chemistry, biology, economics etc. However it is focused on a particular set of ideas and phenomena that can be found in many of these different areas, for example interactions and micro-macro scale relationships.
The term “Complexity Science” as an umbrella for this interdisciplinary set of ideas is only a few decades old, however these ideas have a long lineage from other areas, particularly in areas of physics such as dynamical systems and chaos theory, as well as graph theory and network science. There are many people who use this lens to study such systems even if they don’t use the term complexity science. The process of connecting all these different domains and ideas is still underway, for example formalising the notion of complexity, which arguably remains an informal concept. However I would propose that there is a set of strong underlying themes, ideas and tools which could be beneficial for effective altruism.
A particularly promising tool from complexity science is computer simulation, including agent-based models. Since this is the specific area I have the most experience in, I will spend much of the rest of this post exploring how computer simulation could be used within effective altruism. Although it’s important to note that computer simulation is just one tool from complexity science, and not all complexity scientists would place the same emphasis on simulation that I do. There are other aspects of complexity science which could be valuable for EA, for example simply being able to recognise that you are dealing with a complex, emergent system is useful because it is likely to change what assumptions are appropriate to make, your priors on what behaviour you expect to observe and the viability of any potential solutions.
Applications within Effective Altruism
When I think about the biggest challenges in effective altruism, many of them involve intervening in complex systems. For example health interventions in the developing world, often these can have 2nd and 3rd order consequences which are hard to reason about, and are hard to test with RCTs, given the complex interactions between different systems. A lot of the time the straightforward “non-complex” interventions have already been tried, and all that is left are the thornier, more complex challenges.
I think there is an opportunity for effective altruism minded people to work on building and advocating for simulations of complex social systems, and using these to explore the effects of policy interventions. This has a lot of overlap with the area of Improving Institutional Decision Making (IIDM). Our world is more interconnected than ever before, and many of the most pressing challenges of the 21st century involve complex systems, for example pandemics, misinformation in social networks, and the critical infrastructure which enables modern life, such as the transport network, the internet and the power grid.
There is also a clear link to longtermism. Parts of the complexity science community are exploring foundational questions such as the origin of life. The Interplanetary Project at the Santa Fe Institute (a well known complexity science organisation), touches on many themes which would be familiar to anyone interested in longtermism within EA.
Below I have sketched out a few examples of where I think complexity science and simulation are relevant to EA cause areas. As a warning; these are just initial ideas to prompt discussion and are not necessarily fully thought through.
Cause Areas
Biosecurity
Complex systems simulation has already had a large impact on decision making during the Covid-19 pandemic (discussed in more detail later in this post). In my view there are many untapped opportunities for better simulations to help handle such events. These could explore the 2nd order unanticipated effects of pandemics and the interactions between systems such as the disease itself, the economy and the fragility or resilience of global supply chains. Governments could use simulations to test out interventions and prevention policies, such as the move to remote work. Simulations could also model hypothetical scenarios such as an engineered pathogen with a range of different possible disease parameters.
Governance and institutional decision making
IIDM is perhaps the most obvious cause area in which to apply simulations, as they can be effective tools for decision making, by promoting collaboration and shared understanding of a situation. Simulations allow decision makers to test the effects of novel policies. In a more meta way ABMs can also simulate the mechanisms of decision making and voting themselves. There is also the closely related area of “Operations Research”, which focuses on improving decision making and planning, and has already made use of tools such as ABMs and systems thinking.
Economics
The rapidly growing field of complexity economics provides additional capabilities beyond the limitations of traditional economic modelling. Such techniques could allow a better understanding of the robustness of the economy and financial system, and help to understand the potential harm of black swan events such as the 2008 financial crisis. They could help to distinguish between short term variation and noise in the economy and financial markets, versus longer term structural trends. OpenPhil already mentions improving macroeconomic policy as a focus area.
A speculative application of complexity economics and ABMs is building simulations of hypothetical future worlds, which could help to understand what aspects of our current economic theory might still hold true in a vastly different context.
Climate change
Climate change is a promising area of application for complex systems modelling, in particular complexity economics. Simulations can help to understand the complex interactions and feedback loops between the economy and natural systems. Existing climate models are already complex system simulations which factor in multiple different geological and climate dynamics and feedback effects, however at present there are not many serious efforts to combine these with economic models and simulations of social systems. This paper by Doyne Farmer, one of the pioneers of Complexity Economics, argues that complexity economics and ABMs can address the shortcomings of existing economic models of climate change. An economic agent-based model such as the EUROMOD tax model could be used to understand the impact of policies such as a carbon tax.
Nuclear weapons
Agent-based models could be built to simulate hypothetical nuclear weapons proliferation scenarios, modelling the incentives and behaviours at the agent level, with agents representing different countries or other actors. This could highlight which policies may have counterintuitive or unexpectedly negative effects. This would be a form of dynamic game theory simulation, and could help test the robustness of different equilibria and sensitivity to different parameters and assumptions. Related work has been conducted during the Cold War which could be expanded upon with more data and computational power.
AI Safety
It is plausible that the arrival of powerful AI, whether gradual or sudden, could be modelled as a complex system of many interacting humans and AI agents, similar to the scenarios discussed in “What failure looks like” by Paul Christiano. This could look like an “AI ecosystem” of powerful but narrow AI agents. The ARCHES paper by Andrew Critch and David Krueger sets out many such scenarios, including the challenges of “multi-multi” delegation and control between multiple humans and AI agents. Viewing these multi-agent dynamics as a complex system seems like a natural way to think about this.
I am not completely sure that we understand these hypothetical scenarios well enough right now to build a useful simulation, however even the process of trying to build such a simulation could promote new ways of thinking about the problem. Similarly to simulating nuclear weapon scenarios, simulations may illuminate some of the power dynamics involved with developing military AI, for example race dynamics and potential destabilising effects.
Complexity science also overlaps directly with the fields of AI and cognitive science. For example this article explores the links between deep learning and complexity / chaos. I am optimistic that there are some interesting links to AI safety here, however this requires more investigation to flesh out.
Building more complex and realistic simulation environments for AI training is an active area of AI research, which clearly overlaps with complex systems modelling. Multi-agent simulations are particularly relevant here. The development of increasingly sophisticated training simulations has many implications for AI safety, and this could potentially increase the risk of poorly aligned AI unless approached in a careful way.
Additionally it strikes me that in order to avoid a worst case scenario when transformative AI does arise we want to ensure that it does not destroy what we might describe as “complex life”, yet another example of a complex system. So perhaps there are deep links between measuring complexity and ensuring that AI agents preserve the existing complex life in our world.
Longtermism
A big opportunity I see for complexity science applied to longtermism is in understanding the dynamics of societies which are distant to us in time, either in the past or future. Simulations can be used to encode assumptions and hypothetical scenarios even with little or no hard data. Simulations have already been used in archaeology to study past societies, for example the influential artificial anasazi agent-based model, which simulates an ancient native american civilisation. Importantly simulations could be used to understand long-term historical trends and previous societal collapses (e.g. this paper). This seems very relevant for understanding how to avoid such societal collapses in future.
You could also imagine modelling hypothetical future worlds, for example a simulation incorporating a high amount of detail about a future society, similar to Robin Hanson’s Age of Em, but encoded in an agent-based model. This would impose a degree of rigour, and would test the internal consistency of any hypotheses about what such a world would look like. There is even a nascent project to create ABMs of worlds from science fiction, to provide interactive and dynamic explorations of these worlds.
Tools—Simulation and computational experiments
I have explored several possible applications for complex systems modelling and simulation within EA cause areas, however it is worth digging into why these are appropriate tools. Complex systems are often unsuited to standard mathematical modelling tools which rely on aggregation and simplification, so we need a different way of modelling these systems. Computer simulations are an alternative approach, enabling agents and components of a system to be represented individually and simulating the interactions between them, capturing nonlinear effects such as feedback. A sub-field of this type of simulation is agent-based modelling, which has been employed in various fields such as biology, epidemiology, social science and economics. Computer simulation can be viewed as a new way of doing science, in addition to experiment and theory. This can help us study emergent effects, as we can recreate in the lab how complex macro-behaviours result from micro-level behaviour at the agent level.
There are many benefits to using computer simulations for decision making, and indeed many of these benefits are shared with traditional mathematical models. In an article called Why Model? Joshua Epstein, one of the pioneers of agent-based models for social science, sets out 16 separate reasons for why computer models can be useful, in addition to simply predicting the future, which is often assumed to be the sole purpose of modelling. One major advantage is that it forces us to formalise our understanding of a system, as Epstein points out in Why Model? without explicitly defined models we typically have to rely on informally specified mental models of complex systems, so there is value in attempting to formalise our mental models to expose any contradictory assumptions and explicitly combine these assumptions with observed data. These arguments apply to all mathematical modelling, however I think they are particularly applicable in the context of ABMs, where we may have some approximate understanding of the behaviour of individual agents, consisting of simple rules, and we want to understand the implied consequences of those rules at the macro-level. Epstein is a major proponent of using bottom-up models in order to understand social systems, a process he refers to as generative social science, the motto of which is “If you can’t grow it, then you don’t understand it”.
Computer simulations allow us to test interventions in a system. In particular we can test interventions at a more granular level, which may affect different agents in different ways. This allows governments to test policies in a safe simulated environment before implementing them in the real world. For example with Covid-19 a detailed simulation of a country’s population could be used to test interventions such as lockdowns, this would allow us to encode the assumption that a certain fraction of people won’t adhere to restrictions. We can also model heterogeneous populations that have varying behaviours and different vulnerabilities to the virus. This is particularly important with pandemics such as COVID-19, since often the only data we have access to are macro variables such as total caseload, number of deaths and reproduction rate. However these aggregate variables are driven by social networks and the behaviour of agents within them at the micro level, which is in turn affected by policy. Connecting aggregate data to individual behaviour of populations is something that agent-based models are well suited to.
A further advantage of this type of model is that they tend to be interpretable, and the agent-level logic can correspond neatly to our intuitive understanding of the mechanisms of the system we want to model. This contrasts with other methods of predictive modelling, such as aggregated analytical mathematical models, or black-box machine learning methods such as deep neural networks, where it can be difficult to understand the internal workings of the model, or map this to our own mental models.
Computer simulations can also aid decision making through wargaming, where hypothetical scenarios are tested collaboratively in a simulated environment. This can foster mutual understanding of a problem between multiple parties. When such simulations are combined with interactive visualisations and user interfaces, they allow decision makers to try out many different scenarios and build intuitions about the system being modelled and how it may respond to certain actions. Most models involve substantial uncertainty and require assumptions to be made; the right user interfaces can expose these assumptions alongside unknown parameters, allowing decision makers to tweak them and substitute their own estimates.
Nowadays increasingly large and complex simulations of the real world are being built for such planning and training activities, often referred to as “digital twins”, which combine large amounts of data on real world systems with simulation models.
Covid-19 provides an informative example of how such simulations can be employed to improve critical decision making. Early in the pandemic, agent-based models such as the one built by Neil Ferguson and his team at Imperial College heavily influenced the UK government’s response, and were credited as one of the factors that eventually pushed the government to implement a lockdown, due to the large number of fatalities the model predicted. Admittedly, several criticisms were levelled against this model in particular: that it didn’t take into account how people would adjust their behaviour in the absence of government intervention, and that the model code was over-complicated and poorly tested. These criticisms are accurate to some extent, but they apply only to the specific model in question, not to ABMs more generally. Addressing them by building improved ABMs is a huge opportunity for tackling future pandemics.
There is a lot of potential value in combining models from different domains to understand the connections between real-world systems. ABMs such as the Imperial model only capture disease propagation between abstract agents; future ABMs could incorporate detailed behavioural models of how people respond to infection levels, along with many more dynamics, such as realistic population movement based on data from the UK population. I recently worked on a Covid-19 ABM which combines census and population-movement data with a disease model, as part of the Royal Society’s RAMP initiative. If the models used by governments had accounted for factors such as population movement and behavioural response, this might have narrowed the uncertainty in their forecasts of disease spread.
I should sound a note of caution here: there will always be uncertainty in models, both in the data and in whether the model logic accurately reflects real-world dynamics. Some sources of uncertainty may dominate others. For example, in the early days of the pandemic the lack of knowledge of disease parameters (e.g. R0) was perhaps the largest source of uncertainty, so there would have been little value in refining other aspects of the model until this had been narrowed down. However, models themselves can help identify the main sources of uncertainty, using techniques such as sensitivity analysis.
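As a minimal sketch of one-at-a-time sensitivity analysis, we can sweep each input of a toy SIR model over a plausible range while holding the others at baseline values, and compare how much the output moves. The parameter ranges here are invented for illustration.

```python
def final_size(r0=2.5, recovery_days=7, pop=1_000_000, seed_cases=100, days=365):
    """Deterministic discrete-time SIR; returns cumulative infections."""
    gamma = 1 / recovery_days  # daily recovery rate
    beta = r0 * gamma          # daily transmission rate
    s, i = pop - seed_cases, seed_cases
    for _ in range(days):
        new = beta * s * i / pop
        s, i = s - new, i + new - gamma * i
    return pop - s

# one-at-a-time sensitivity: vary each parameter over a plausible range
# (ranges are illustrative) while holding the other at its baseline
ranges = {"r0": (1.5, 4.0), "recovery_days": (5, 10)}
for name, (low, high) in ranges.items():
    spread = abs(final_size(**{name: high}) - final_size(**{name: low}))
    print(f"{name}: output spread of {spread:,.0f} infections")
```

Here the spread induced by R0 dwarfs that induced by the recovery time, which would suggest prioritising efforts to pin down R0 before refining other parts of the model. Real studies would use more rigorous global methods (e.g. Sobol indices), but the logic is the same.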
Neil Ferguson and his team did commendable scientific work with the tools they had available, however their Covid model was cobbled together in a short amount of time by repurposing code from a decade-old flu model. Investing in scalable and flexible agent-based models ahead of time would produce sophisticated models that are a major asset for future pandemics and other unforeseen emergencies requiring government intervention in complex systems.
An even larger opportunity is in combining disease dynamics with an agent-based model of the economy. The real challenge of Covid-19 policy was making difficult tradeoffs between public health and the economy over both the short and long term. Combining disease dynamics and economics in the same simulation could have helped to forecast and quantify the true consequences of government action or lack thereof.
Creating modern user interfaces could further increase the value of such simulations, allowing decision makers to interact with them directly rather than relying on a slow feedback loop of asking scientists to re-run the model with different parameters. This would let people inject their own assumptions and priorities into a model, and build up an intuition for how a given system behaves in different scenarios.
In my view a valuable aspect of ABMs is not in perfectly predicting the future, but in forcing people to face up to the stark reality of what their current data and assumptions are telling them. Computational models of complex systems are rarely able to produce accurate point predictions of future events, due to uncertainty and chaotic effects, but they can generate useful distributions of plausible outcomes.
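A sketch of what that looks like in practice: a stochastic branching-process model of early outbreak growth, run many times to produce a distribution of outcomes rather than a single forecast. R0 and the case cap are illustrative values.

```python
import math
import random

def outbreak_size(r0=1.5, cap=10_000, seed=None):
    """Stochastic branching-process sketch of early epidemic growth:
    each case infects Poisson(r0) new cases. Returns the total outbreak
    size, capped at `cap`. R0 and the cap are illustrative choices."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method, fine for small lam
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    active, total = 1, 1
    while active and total < cap:
        active = sum(poisson(r0) for _ in range(active))
        total += active
    return min(total, cap)

# an ensemble of runs yields a distribution of outcomes, not a point forecast
runs = sorted(outbreak_size(seed=s) for s in range(200))
print("smallest:", runs[0], "median:", runs[100], "largest:", runs[-1])
```

The distribution is strongly bimodal: many runs fizzle out after a handful of cases while others explode, so any single point prediction would badly misrepresent what the model actually says.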
Complex systems scientists led the way in advocating for a well-informed Covid-19 response from the very early days of the pandemic. Yaneer Bar-Yam, a complexity scientist at the New England Complex Systems Institute, was one of the earliest and most prominent advocates of a “zero covid” strategy, through his website endcoronavirus.org. He was strongly recommending controls on international movement as early as January 2020.
Reflections on modelling and simulation
From a technical point of view, what I am recommending are large-scale simulations that incorporate multiple domains. These techniques are still relatively new: only recently has enough computational power and data become available to build simulations of sufficient scale to be realistic. They have a lot of promise, but it is still early days.
Successfully building and deploying simulations to improve decision making involves overcoming some daunting scientific and engineering challenges. It’s not just a case of writing more code and running with more agents and more data. Most academic research with ABMs tends to stick to simplified “idea models”, which are easier to reason about and validate; in many ways the art of modelling is simplifying a problem down to only the most important details. Most complexity scientists appreciate the value of simple agent-based models, and some would caution against adding too much detail to these simulations, since that can detract from generalisable insights. There are advantages to the increased realism of larger datasets and more detailed simulations, although it can be tricky to know when additional detail is beneficial: it risks introducing more degrees of freedom and more potential for error. I would argue that any model at all is an improvement over relying on implicit mental models, with the caveat that it is not always obvious how to integrate model output into your overall understanding of a problem, particularly when you have multiple models and data sources that conflict with each other. Handled properly, larger and more detailed models can add to scientific insight rather than detract from it.
Another challenge is that simulations are less portable between problems than techniques such as deep learning. A deep learning algorithm can be developed once and then applied to problems in different areas by training on new data, whereas writing a simulation requires encoding domain knowledge into the model, which typically has to be done separately for each new domain. A related problem is that the software tools for building ABMs have received far less investment than those for deep learning, so it remains difficult to build ABMs that scale to large numbers of agents, i.e. the millions needed to represent the population of a whole country. The domain experts with the knowledge to build an accurate model may not have the software engineering skills to implement it as a scalable simulation. This lack of tooling is one of the factors holding back wider adoption of simulations for decision making.
In my view simulation models are one of the best ways to formally combine insights and data sources from multiple domains, showing the implied consequences of our assumptions and limited understanding of a real-world system. They can also help non-technical decision makers leverage knowledge from multiple domain experts. However, they are not a crystal ball, and there are clear limits to their predictive abilities. This is partly due to incomplete understanding of the system dynamics, but even well-defined models are subject to the phenomenon of chaos, popularly known as the butterfly effect. Without perfect knowledge of a complex system’s starting conditions, the simulated trajectory will quickly diverge from reality, which is another reason why distributions are favoured over point predictions. This is why weather forecasts are only accurate about a week out, even though the underlying physical equations are very well understood. Yet while weather is unpredictable, climate is broadly predictable many decades out (or at least its direction of change is), due to structural dynamics that are fairly well understood. So I believe there is valuable scientific work to be done in understanding what is analogous to weather (unpredictable fluctuations) versus climate (structured and predictable) in complex social systems such as the economy. For example, could the 2008 financial crisis have been predicted ahead of time? It seems to have been caused by deep structural features of the financial system in the run-up to the crisis, but perhaps it only looks inevitable in hindsight.
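The logistic map gives a compact demonstration of this sensitivity to initial conditions: two trajectories starting one part in a million apart become completely different within a few dozen steps, even though the dynamics are perfectly deterministic.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, fully chaotic at r = 4."""
    return r * x * (1 - x)

# two trajectories whose starting points differ by one part in a million
x, y = 0.400000, 0.400001
diverged_at = None
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:  # trajectories now bear no resemblance
        diverged_at = step
        break
print("trajectories diverged at step:", diverged_at)
```

Any measurement error in the starting state of a chaotic system is amplified exponentially, which is exactly why ensembles of runs with perturbed initial conditions, as used in weather forecasting, are more honest than a single trajectory.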
While simulations may have some predictive power, it is often better to think of them as tools for exploring and generating intuition about a system.
A model of movement building
In the Nature Reviews Physics article on complexity economics, W. Brian Arthur explains that he considers complexity science to be more of a “movement within science” than a science itself. In many ways this reminds me of effective altruism, though complexity science has been around for longer, over three decades. I think there is a lot we can learn from complexity science as an example of building a pioneering interdisciplinary intellectual movement from scratch, one which has achieved considerable success and influence.
A key part of the complexity science movement is the Santa Fe Institute (SFI), an independent research organisation founded in the 1980s by physicists from the Los Alamos National Laboratory. Arguably the first and most prominent complexity science organisation, it can take a lot of credit for developing and popularising the ideas behind complexity science. SFI has effectively created a new interdisciplinary identity from scratch, much as EA has. It has also been very influential on key decision makers; many prominent academics, business people and politicians have visited SFI over the years. This flow of visitors, along with new postdocs, is a key component of its success, allowing a continuous transfer of ideas in and out.
Something I admire about SFI is its willingness to tackle ambitious and broad scientific problems and make tangible progress on them. This has resulted in notable scientific achievements such as scaling laws in biology, to name just one example. It is all the more impressive that they have built a successful research organisation outside the confines of the traditional university system, with minimal bureaucracy and freedom from many academic incentives, allowing people to focus on their research. This includes not having tenure, with a more flexible evaluation of what success looks like. Researchers are empowered to organise their own programs and events, and there are no departmental boundaries or labelled disciplines, which promotes interdisciplinary work.
A particular strength of the Santa Fe Institute, in my opinion, is PR and outreach. They have a highly polished website with great media content, and they offer many learning resources and introductory courses. They have also fostered close connections to arts and culture, including music, which is perhaps unusual for a theoretical research institute. David Krakauer, the current SFI president, speaks in an almost poetic manner, yet he conveys a real sense of excitement and wonder, and I think this mode of communication can be inspirational to a lot of people. Clearly there is a balance to be struck in knowing when to be rigorous and cautious versus poetic and metaphorical. Whatever SFI is doing seems to be working: they appear to be well funded and have run successful research programs for decades. Part of this success may be attributed to intentionally staying small, at around 30 resident researchers, which has allowed them to remain agile. SFI relies heavily on private donations, supplemented by government funding, which gives flexibility but requires a prestigious reputation, a situation similar to that of many EA organisations.
What can effective altruism learn from this success? Stefan Torges’ EA Forum post on “Ingredients for creating disruptive research teams” mentions SFI and some of the lessons it holds for EA research teams. Perhaps replicating this level of success and prominence is simply a matter of time, building up a reputation and a field over several decades. It certainly seems instructive to look at organisations like the Santa Fe Institute and the field of complexity science more broadly.
Conclusion
I have tried to give a taste of what complexity science is, how complex emergent phenomena can arise from simple interactions, how systemic problems can be approached in a holistic and bottom-up way, and how tools such as agent-based modelling might be useful and relevant to EA causes. Although I have focused more on complex systems modelling and simulation as a tool, I believe there are many aspects of complexity science which may be of value to EA.
Beyond a set of tools, I think complexity science provides an interesting example of an intellectual movement that has grown successfully over the last few decades. It has created an interdisciplinary identity from scratch, and provides a valuable philosophy and approach for tackling large, foundational scientific questions. The Santa Fe Institute is a great example of this, but there are other well-regarded independent research institutions such as the New England Complex Systems Institute and the Complexity Science Hub Vienna, and many universities around the world now have their own complex systems research groups. Of particular relevance is the recently launched Simon Institute, an EA organisation applying many ideas from complexity science, among other approaches, to improving longtermist governance.
I am very keen to receive any comments or feedback on the ideas in this post. If anyone has ideas of how complexity science could be applied to EA causes, or is interested in collaborating on related projects in this area then please get in touch.
I believe working to improve complex systems modelling and simulation, either by building tools for modellers or applying simulations directly, could be a very high impact career path. If you are interested in exploring this, regardless of your background or existing skill set, then I am happy to discuss and advise.
Many thanks to all the people who reviewed drafts of this post: Max Stauffer, Konrad Seifert, Nora Ammann, Tamara Borine, Adam Bricknell, Michelle Hutchinson and Vicky Yang.
In particular thank you to Max for prompting me to write this post in the first place.
I have included some links and resources for further reading below.
Further Reading
Books
Complexity: A Guided Tour, by Melanie Mitchell—an accessible intro to complexity science.
Complexity: The Emerging Science at the Edge of Order and Chaos, by M. Mitchell Waldrop—an earlier introduction to complexity science.
Worlds Hidden in Plain Sight: The Evolving Idea of Complexity at the Santa Fe Institute—a collection of essays from the Santa Fe Institute on the idea of complexity.
Generative Social Science: Studies in Agent-Based Computational Modeling, by Joshua Epstein
Harnessing Computational Simulations to Design and Engineer Policy—Geneva Science Policy Interface
The Santa Fe Institute has an educational project called Complexity Explorer, which includes many courses and resources. A good starting point would be their Introduction to Complexity course and their Introduction to Agent-Based Modeling.
Complexity Economics:
Institute for New Economic Thinking at Oxford University
Complexity Economics at the Santa Fe Institute
Computational Social Science
JASSS journal
CoMSES—a collection of resources for agent-based models, including a large library of existing model implementations.