How relevant do you find work that aims to figure out what to do in deployment scenarios, what values we should have, what we should do next, etc.?

That is high-value work. Holden Karnofsky’s list of “important, actionable research questions” about AI alignment and strategy includes one about figuring out what should be done in the deployment of advanced AI and in the lead-up to it (1):

How do we hope an AI lab—or government—would handle various hypothetical situations in which they are nearing the development of transformative AI, and what does that mean for what they should be doing today?

Luke Muehlhauser and I sometimes refer to this general sort of question as the “AI deployment problem”: the question of how and when to build and deploy powerful AI systems, under conditions of uncertainty about how safe they are and how close others are to deploying powerful AI of their own.

My guess is that thinking through questions in this category can shed light on important, non-obvious actions that both AI labs and governments should be taking to make these sorts of future scenarios less daunting. This could, in turn, unlock interventions to encourage these actions.
I think it’s extremely relevant.
To be honest, I think that if someone without a technical background wanted to contribute, looking into these things would be one of the best default opportunities, because:
1) The points you mention are blind spots of the AI alignment community, because the typical member of that community doesn’t really care about all this political stuff. Questions about values, and about how exactly those who become 1000x more powerful than everyone else would magically refrain from ruling the entire world with their aligned AI, are very relevant IMO.
2) I think the fact that AI safety arguments are still so conceptual is a big weakness of the field. Increasing the concreteness of “how it will happen” and of what the concrete problems will be is a great way both to give ourselves clearer scenarios to reason about and to increase the number of people who take these risks seriously.
To be clear, when I say that you should work on AI, I absolutely include people whose views are very different from those of the AI alignment field. For instance, I really like the fact that Jacy is thinking about this with animals in mind (I think animal-focused people should do that more) and is uncertain about the value of the long-term future if it’s driven by human values.