Hi Nate!
Daniel Dewey at FHI outlined some strategies to mitigate existential risk from a fast take-off scenario here: http://www.danieldewey.net/fast-takeoff-strategies.pdf
I expect you agree with his exponential decay model; if not, why not?
I would also like your opinion on his four strategic categories, namely:
International coordination
Sovereign AI
AI-empowered project
Other decisive technological advantage
Thanks for your attention!
I mostly agree with Daniel’s paper :-)
That was my guess :) To be more specific: do you (or does MIRI) have any preferences for which strategy to pursue, or is it too early to say? I get the sense from MIRI and FHI that aligned sovereign AI is the end goal. Thanks again for doing the AMA!
I am not Nate, but my view (and my interpretation of some median FHI view) is that we should keep options open about those strategies and as-yet unknown other strategies instead of fixating on one at the moment. There’s a lot of uncertainty, and all of the strategies look really hard to achieve. In short, no strongly favored strategy.
FWIW, I also think that most current work in this area, including MIRI’s, promotes the first three of those goals pretty well.
Follow-up: this comment suggests that Nate weakly favors strategies 2 and/or 3 over 1.