Thanks for sharing your thoughts! I think you are onto an interesting angle here that could be worth exploring if you are so inclined.
One line of work that you do not seem to be considering at the moment, but which could be relevant, is the research done in the “metacrisis” (or polycrisis) space. See this presentation for an overview, though I recommend diving deeper than that to get a better sense of the field. This perspective tries to understand and address the underlying patterns that create the wicked situation we find ourselves in. It works a lot with concepts like “Moloch” (i.e., multi-polar traps in coordination games), the risk-accelerating role of AI, and different types of civilizational failure modes (e.g., dystopia vs. catastrophe) that we should guard against.
You may also find interesting a working paper I am writing with ALLFED, in which we look at the digital transformation as a driver of systemic catastrophic risks. We start from a simulation model of specific scenarios and then generalize to a framework suggesting that the key features that make digital systems valuable also make them an inherent driver of what we call “the risk of digital fragility”. Our work does not yet elaborate on the role of AI, only on the pervasive use of digital systems and services in general. My next steps are to work out the role of AI more clearly and see if/how our digital fragility framework can be put to use to better understand how AI could contribute to systemic catastrophic risks. Feel free to reach out via PM if you are interested in having a chat about this.