I just stumbled upon the 2013 MIRI post How well will policy-makers handle AGI? (initial findings), found it interesting, and was reminded of this post.
Essentially, the similarity is that that post was also about AI governance/strategy, involved thinking about what criteria would make a historical case study relevant to the AI governance/strategy question under consideration, and involved a set of relatively shallow investigations of possibly relevant case studies. This seems like a cool method, and I'd be excited to see more things like this. (Though of course our collective research hours are limited, and I'm not necessarily saying we should deprioritise other things to prioritise this sort of work.)
Here's the opening section of that post:
MIRI's mission is "to ensure that the creation of smarter-than-human intelligence has a positive impact."
One policy-relevant question is: How well should we expect policy-makers to handle the invention of AGI, and what does this imply about how much effort to put into AGI risk mitigation vs. other concerns?
To investigate these questions, we asked Jonah Sinick to examine how well policy-makers handled past events analogous in some ways to the future invention of AGI, and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as with our project on how well we can plan for future decades. The post below is a summary of findings from our full email exchange (.pdf) so far.
As with our investigation of how well we can plan for future decades, we decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren't yet able to draw any confident conclusions about our core questions.
The most significant results from this project so far are:
We came up with a preliminary list of 6 seemingly-important ways in which a historical case could be analogous to the future invention of AGI, and evaluated several historical cases on these criteria.
Climate change risk seems sufficiently disanalogous to AI risk that studying climate change mitigation efforts probably gives limited insight into how well policy-makers will deal with AGI risk: the expected damage of climate change appears to be very small relative to the expected damage due to AI risk, especially when one looks at expected damage to policy-makers.
The 2008 financial crisis appears, after a shallow investigation, to be sufficiently analogous to AGI risk that it should give us some small reason to be concerned that policy-makers will not manage the invention of AGI wisely.
The risks to critical infrastructure from geomagnetic storms are far too small to be in the same reference class with risks from AGI.
The eradication of smallpox is only somewhat analogous to the invention of AGI.
Jonah performed very shallow investigations of how policy-makers have handled risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis, but these cases need more study before even "initial thoughts" can be given.
We identified additional historical cases that could be investigated in the future.