(For policy makers and tech executives. If this is too long, shorten it by ending it after the I. J. Good quote.)
The British mathematician I. J. Good, who worked with Alan Turing on Allied code-breaking during World War II, is remembered for this important insight from a 1965 paper:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.
Good worried that we might not be able to keep such a superintelligent machine under our control, and he recognized that this concern was worth taking seriously even though, at the time, it was discussed mainly in science fiction. History has proven him right: today far more people take this concern seriously. For example, Shane Legg, co-founder of DeepMind, recently remarked:
If you go back 10-12 years ago the whole notion of Artificial General Intelligence was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [But] every year [the number of people who roll their eyes] becomes less.
(Alternatively, if it is not too long but just needs to be one paragraph, use this version:)
The British mathematician I. J. Good, who worked with Alan Turing on Allied code-breaking during World War II, is remembered for this important insight from a 1965 paper: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.” Today far more people are taking this concern seriously. For example, Shane Legg, co-founder of DeepMind, recently remarked: “If you go back 10-12 years ago the whole notion of Artificial General Intelligence was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [But] every year [the number of people who roll their eyes] becomes less.”