Thanks for writing this. In my opinion, the field of complex systems provides a useful and under-explored perspective and set of tools for AI safety. I particularly like the insights you provide in the “Complex Systems for AI Safety” section, for example that ideas in complex systems foreshadowed inner alignment / mesa-optimisation.
I’d be interested in your thoughts on modelling AGI governance as a complex system, for example race dynamics.
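To make the race-dynamics idea concrete, here is a minimal, hypothetical sketch of the kind of agent-based model I have in mind: a handful of labs repeatedly allocate effort between capabilities and safety, with labs that fall behind the leader cutting more safety effort. All behavioural rules, parameters, and the toy accident-probability calculation are my own illustrative assumptions, not anything from your post.

```python
# Toy agent-based sketch of AI race dynamics as a complex system.
# Assumptions (hypothetical): labs that trail the leader cut more safety
# effort, which speeds up their capability progress but adds accident risk.
import random

NUM_LABS = 5
NUM_STEPS = 100
RISK_PER_UNIT_CUT = 0.002  # assumed mishap risk per unit of safety effort skipped

random.seed(0)
labs = [{"progress": 0.0, "safety_cut": 0.0} for _ in range(NUM_LABS)]

for step in range(NUM_STEPS):
    leader = max(lab["progress"] for lab in labs)
    for lab in labs:
        # Behavioural rule (assumed): the further behind a lab is, the more safety it cuts.
        gap = leader - lab["progress"]
        lab["safety_cut"] = min(1.0, 0.1 + 0.05 * gap)
        # Capability progress is faster when more safety effort is cut.
        lab["progress"] += random.random() * (1.0 + lab["safety_cut"])

# Toy aggregate accident probability: one minus the chance that no lab has a mishap.
no_accident = 1.0
for lab in labs:
    no_accident *= 1.0 - RISK_PER_UNIT_CUT * lab["safety_cut"] * NUM_STEPS
accident_prob = 1.0 - no_accident

print("Final progress:", [round(lab["progress"], 1) for lab in labs])
print(f"Accident probability under these toy assumptions: {accident_prob:.2%}")
```

Even a sketch this simple exhibits the feedback loop that makes race dynamics interesting from a complex-systems angle: lagging labs cut safety, which compresses the gap, which in turn changes the leader's incentives on the next step.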
I previously wrote a forum post on how complex systems and simulation could be useful tools in EA for improving institutional decision making, among other things: https://forum.effectivealtruism.org/posts/kWsRthSf6DCaqTaLS/what-complexity-science-and-simulation-have-to-offer