I looked to see what the writers of The Terminator actually think about AGI x-risk. James Cameron’s takes are pretty disappointing. He expresses worries about loss of privacy and deepfakes, saying this is what a real Skynet would use to bring about our downfall.
More interesting is Gale Anne Hurd, who suggests that AI developers should take a Hippocratic Oath with explicit mention of unintended consequences:
“The one thing that they don’t teach in engineering schools and biotech is ethics and thinking about not only consequences, but unintended consequences. If you go to medical school, there’s the Hippocratic Oath, first do no harm. I think we really need that in all of these new technologies.”
She also says:
“Stephen Hawking only came up with the idea that we need to worry about A.I. and robots about two and a half years before he passed away. I remember saying to Jim, ‘If he’d only watched The Terminator.’”
Which jibes with the start of the conclusion of the OP:
It would be terrible if AI destroys humanity. It would also be very embarrassing. The Terminator came out nearly 40 years ago; we will not be able to claim we did not see the threat coming.
Ok, now Cameron is saying: “I think the weaponization of AI is the biggest danger. I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it’ll escalate.”
Titanic filmmaker James Cameron agrees with experts that “AI is the biggest danger” to humanity today and claims he warned the world about the issue way back in 1984 in his movie The Terminator. This comes as the so-called ‘three godfathers of AI’ have recently issued warnings about the need to regulate the quickly evolving technology.
In an interview with CTV News Chief Vassy Kapelos, Cameron said: “I absolutely share their concern,” adding: “I warned you guys in 1984, and you didn’t listen.”
William Wisher discussed the issue at Comic-Con 2017, but I haven’t been able to find a video or transcript.
Harlan Ellison sued the producers of The Terminator for plagiarism over a story about a time-travelling robotic soldier that he wrote in 1957. That story doesn’t appear to contain any equivalent of Skynet. But Ellison did write a very influential story about superintelligence gone wrong, I Have No Mouth, and I Must Scream. I couldn’t find any comments of his relating specifically to AGI x-risk. In 2013 he said: “I mean, we’re a fairly young species, but we don’t show a lot of promise.”