I’d mostly put OpenAI in the same category as DeepMind: primarily an AI capabilities organization, but one that’s unusually interested in long-term safety issues. OpenAI is young, so it’s a bit early to say much about them, but we view them as collaborators and are really happy with “Concrete Problems in AI Safety” (joint work by people at OpenAI, Google Brain, and Stanford). We helped lead a discussion about AI safety at their recent unconference, contributed to some OpenAI Gym environments, and are on good terms with a lot of people there.
Some ways OpenAI’s existence adjusts our strategy (so far):
1) OpenAI is in a better position than MIRI to spread basic ideas like ‘long-run AI risk is a serious issue.’ This increases our confidence in our plan to scale back outreach, especially outreach to more skeptical audiences that OpenAI can probably communicate with more effectively.
2) A larger number of leading AI research organizations means more opportunities for conflict and arms races, which is a serious risk. As a result, more of our outreach time goes toward encouraging collaboration between the big players.
3) On the other hand, OpenAI is a nonprofit with a strong stated interest in encouraging inter-organization collaboration. This suggests OpenAI might be a useful mediator or staging ground for future coordination between leading research groups.
4) The increased interest in long-run safety issues from ML researchers at OpenAI and Google increases the value of building bridges between the alignment and ML communities. This was one factor going into our “Alignment for Advanced ML Systems” agenda.
5) Another important factor is that more dollars going into cutting-edge AI research shortens timelines to AGI, so we put incrementally more attention into research that’s more likely to be useful if AGI is developed soon.