That is high-value work. Holden Karnofsky’s list of “important, actionable research questions” about AI alignment and strategy includes one about figuring out what should be done in the deployment of advanced AI and in the period leading up to it (1):
How do we hope an AI lab—or government—would handle various hypothetical situations in which they are nearing the development of transformative AI, and what does that mean for what they should be doing today?
Luke Muehlhauser and I sometimes refer to this general sort of question as the “AI deployment problem”: the question of how and when to build and deploy powerful AI systems, under conditions of uncertainty about how safe they are and how close others are to deploying powerful AI of their own.
My guess is that thinking through questions in this category can shed light on important, non-obvious actions that both AI labs and governments should be taking to make these sorts of future scenarios less daunting. This could, in turn, unlock interventions to encourage these actions.