I have recently found myself in the situation of having to explain AI Safety to someone who has never heard of it before. I think I have polished a simple 3-sentence explanation that got me a couple of "wow, that seems important indeed". The basic idea is:
It is quite a cool scientific aspiration to build Artificial General Intelligence because it could be useful for so many things. But in the same way that it is not feasible to specify each individual action the system might take, we cannot specify a single objective for the system to pursue either. AI Safety is about making AI systems that learn what we want (and carry it out), and I want to work on it because there are very few people trying to understand this important problem.
Although it is a somewhat restrictive definition, note that you don't need to use "existential risk" or any fancy wording.