Complexity Science, Economics and the Art of Zen.
“Economics can be harder than rocket science: the Soviet Union was great at rocket science”
EAG is a brilliant conference, and like many other examples throughout history it shows the value of gathering large numbers of smart, driven people together from many different fields. You’re never quite sure what the chaos will produce, but it will always be interesting.
For me, one of the more fascinating talks at the conference happened off the agenda, when a group gathered to discuss complexity science and its implications for EA.
Far too much was discussed there for a detailed breakdown, and many people are already incorporating complexity science and its theories into their work, so this post may be of only partial value to some readers. However, the implications of complexity for longtermism in particular are serious, and if we are working on systems as complicated as humanity as a whole, we ignore them at our peril.
Complexity science covers many forms of analysis, broadly tied to a central thesis: “The whole is greater than the sum of its parts”. It applies to animals, mathematics, the movement of planets, the functioning of politics, and any other system where the sum of seemingly linear systems and logical correlations suddenly produces emergent behaviour, often in fascinating and counterintuitive ways.
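One classic toy illustration of this flavour of behaviour, and of the chaos theory mathematics mentioned just below, is the logistic map. This is my own illustrative example rather than anything from the discussion: a one-line, fully deterministic rule whose long-run behaviour is effectively unpredictable, because trajectories that start a millionth apart soon bear no resemblance to each other. A minimal sketch:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# A one-line deterministic rule, yet for r around 4 it is chaotic:
# tiny differences in the starting value are amplified until the
# two trajectories bear no resemblance to one another.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # differs by one part in a million

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}")
# By roughly step 30 the two runs have completely decoupled, despite
# the rule itself being simple, deterministic and known exactly.
```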
However, I am an economist, and while I have no great insight into the mathematics of chaos theory, we do think about these issues of complexity in our own way, which is sometimes very different from that of other fields.
There was a classic challenge put to economics: name something true and non-trivial that the discipline has provided. Samuelson’s answer was Ricardo’s theory of trade, which was deeply counterintuitive to many, especially at the time, and which profoundly changed the world. It is also an example where letting go of control and planning, in favour of chaos, led to vast gains in human material prosperity.
The original argument is elegantly simple. The United Kingdom can produce wool, which it is well suited to, and it can make wine, though with low yields and poor taste thanks to our unique climate. Meanwhile, Portugal can make excellent wine but is unsuited to sheep varieties with heavy wool.
From the point of view of the United Kingdom there are two ways to make wine. First, you can divert labour and capital away from sheep farming, at great cost and with limited results. Second, you can load wool onto a ship, which returns a month later loaded with far more wine than you could ever produce from the same resources. It may as well be a factory for turning wool into wine for all the United Kingdom cares, and importantly this process works even if Portugal is better at making both wine and wool.
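To make that last point concrete, here is a minimal numeric sketch. The labour costs and the trade price are my own illustrative assumptions, not figures from the original argument; they are chosen so that Portugal is better at producing both goods, yet trade still pays for both countries.

```python
# Illustrative labour costs in hours per unit -- hypothetical numbers,
# chosen so that Portugal holds an absolute advantage in BOTH goods.
hours_per_unit = {
    "UK":       {"wool": 100, "wine": 120},
    "Portugal": {"wool": 90,  "wine": 80},
}

def wine_cost_in_wool(country):
    """Opportunity cost of one unit of wine, measured in wool forgone."""
    c = hours_per_unit[country]
    return c["wine"] / c["wool"]

print(wine_cost_in_wool("UK"))        # 1.20 units of wool per unit of wine
print(wine_cost_in_wool("Portugal"))  # ~0.89 units of wool per unit of wine

# Any trade price between those two opportunity costs benefits both sides.
price = 1.0  # assumed price: one unit of wool per unit of wine

# The UK's "factory for turning wool into wine":
# 120 hours making wine directly yields 1.0 unit of wine, but the same
# 120 hours spent on wool yields 1.2 units, which trade for 1.2 units of wine.
direct_wine = 120 / hours_per_unit["UK"]["wine"]            # 1.0
traded_wine = (120 / hours_per_unit["UK"]["wool"]) / price  # 1.2
print(direct_wine, traded_wine)

# Portugal gains too: selling one unit of wine brings in 1.0 unit of wool,
# whereas spending the same 80 hours on wool at home yields only ~0.89 units.
```

The exact numbers are made up; the point is only that the two countries’ opportunity costs differ, and any trade price between them leaves both better off, regardless of who holds the absolute advantage.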
Governments today still struggle with this, attempting to micromanage the chaos of trade by adjusting tariffs and setting complicated quotas on thousands of line items. I once contracted on a project for a government in Sub-Saharan Africa that was doing exactly this. The country had set an import quota for sugar, managed by its state trading company, in an attempt to improve its balance of payments. When this led to shortages and high prices, they set price limits; when that led to hoarding and smuggling, they took control of sugar retailing and distribution. Smuggling profits rose even higher, to the point where sugar smuggling now funds terrorist operations and ordinary people experience rationing, at prices well above world market rates. Unfortunately, the game of policy Twister is still underway, with an expansion of state-owned sugar production to bridge the deficit. This is not going well. It almost certainly did not even improve the balance of payments, the original goal in some distantly remembered past.
The concept of letting go in order to achieve more is deeply counterintuitive, and it runs against our instincts, but it is foundational to core parts of classical economics. Chaos there is not just a force to be fought, but a force to be supported and gently guided in the service of social goals. As a result, you end up in an almost zen-like state, doing less to achieve more, and finding a way to let the fury of the river do the work for you rather than damming it into a serene canal.
There is a great tendency in the world today to see a problem, then sit down, almost in a dark room, and reason out how to solve it scientifically via policy and commands. This can drive great progress, and EA was founded on this to a degree. However, the failure modes of this thinking can be catastrophic, especially with radical and ambitious plans: many in the past have sat down with honest intentions of adding order to chaos for the betterment of humanity, but ended up doing the precise opposite. These ideas seemed obvious at the time, promising better lives through science or higher agricultural output. The reality did not conform to their expectations: they were plans designed for robots, not for the humans and the natural world standing before them in their vast complexity.
However, while chaos cannot and should not be contained completely, and while we should not presume to understand it fully, we also simply cannot leave it alone. This is true of economics and its failures, past and present; it is utterly foundational to the concerns around existential risks. Plans must be made for when the market fails: your social supports, antitrust laws, policies on externalities and systems to prevent financial crashes and bank runs must be in place. These economic policies were almost all built upon the pain of past experience, with scientific thinking, evidence and rationality as key elements of their design. Existential risks raise an even larger problem, as we cannot work from past experience: by then it will simply be too late, and so grand plans are required here.
So where does this leave us? Are we stuck with chaos being uncontrollable, and often useful, right up until the point where it levels our world?
I would argue not. While I’m perhaps stretching the zen analogy here, I believe this is a case where we need both the yin and the yang of chaos and order.
We need to be modest about what we know, understand that the world is not a chessboard, and recognise that making use of chaos can be helpful: plans should be able to adjust, systems should be designed to make use of knowledge no matter where it comes from, and we should be humble about our own understanding and constantly question our positions. A physicist may not be able to predict the outcome of a football match, but they may be able to predict what happens when the ball is kicked, and whether it will leave the stadium. That can still be useful.
At the same time, we need to keep working towards the large goals attached to the movement, and accept that we must impose order on chaos, sometimes in situations where we have only reason to guide us. Here we must still look at the mistakes of the past in order not to repeat them; we may only have one opportunity to do so.
This is a fairly chaotic post, rather fittingly. I do not have all the answers, or even that many. I am working on one small part of EA, looking at food systems, where producing enough food is a necessary but not sufficient step in feeding everyone. It would be great to start a conversation on complexity across the movement: how we deal with it well, how we deal with it poorly, and where the example of others can help.
If you have any thoughts, please post them below. We should not stop having grand ambitions, or even ivory tower thinking.
At the same time, however, I would suggest embracing the following thoughts wherever possible:
Design systems that scale from a small base. If you have a grand plan that requires a complete system to function, but no way of getting there, you do not have a plan.
Systems must fail gracefully. We will be wrong many times, and must build that into our thinking.
Look at history and other fields of study: people have done surprisingly weird things in the past which may be relevant for you today. Want to see how a society with a judiciary and a legislature but no executive functioned? Want to know what happens when we lose a good chunk of our sunlight? Of course the world has changed, but history and its examples can be a very interesting starting point for our thinking, something Will MacAskill highlighted in his recent EAG speech.
The world is not a chessboard. We do not know all of the rules and probably never can; we cannot assume we can move the pieces at will; and I would be deeply concerned by a system that allowed that level of control.