That was my guess :) To be more specific: do you (or does MIRI) have any preferences for which strategy to pursue, or is it too early to say? I get the sense from MIRI and FHI that aligned sovereign AI is the end goal. Thanks again for doing the AMA!
I am not Nate, but my view (and my interpretation of some median FHI view) is that we should keep our options open among those strategies, and other as-yet-unknown strategies, rather than fixating on one at the moment. There's a lot of uncertainty, and all of the strategies look really hard to achieve. In short: no strongly favored strategy.
FWIW, I also think that most current work in this area, including MIRI’s, promotes the first three of those goals pretty well.
I mostly agree with Daniel’s paper :-)
Follow-up: this comment suggests that Nate weakly favors strategies 2 and/or 3 over strategy 1.