I appreciate the post, although I’m still worried that comparisons between AI risk and The Terminator are more harmful than helpful.
One major reservation I have is with the whole framing of the argument, which is about “AI risk”. I guess you’re implicitly talking about AI catastrophic risk, which IMO is much more specific than AI risk in general. I would be very uncomfortable saying that near-term AI risks (e.g. due to algorithmic bias) are “like The Terminator”.
Even if we solely consider catastrophic risks due to AI, I think catastrophes don’t necessarily need to look anything like The Terminator. What about risks from AI-enabled mass surveillance? Or the difficulty of navigating the transition to a world where transformative AI plays a large role in the global economy?
If we restrict ourselves to AI existential risks (arguably some of the previous examples fall into this category), I’m still hesitant to compare these risks to The Terminator. This depends on what exactly we mean by “like The Terminator”, because some aspects of the two are similar (as you point out), and many are not.
In general, I worry that too much is being shoved into the term “AI risk”, which could really mean a whole host of different things, and I feel that drawing an analogy to The Terminator for these risks is harmful conflation.
1. We may eventually create artificial intelligence more powerful than human beings; and
2. That artificial intelligence may not necessarily share our goals.
Those two statements are obviously at least plausible, which is why there are so many popular stories about rogue AI.
I don’t think it’s immediately obvious to a person who hasn’t heard AI safety arguments why these should be plausible. In my experience, a common reaction to (1) is “Seriously? We don’t even have reliable self-driving cars!”, and to (2) is “Why would anybody build such a thing?”. I doubt that the Terminator movies answer these questions appropriately.
“People think the plot of Terminator is silly in large part because it involves an AI exterminating humanity.”
I feel that this is too superficial—if you then ask people why they think AI-induced human extinction is unlikely, I expect the answer would be along the lines of “we would never do something so silly”. So I claim that a bigger reason people think the plot is silly is that they don’t find it plausible, not the fact that it involves “an AI exterminating humanity” per se. To me, establishing this plausibility is a very large part of AI safety arguments, and it is left completely unaddressed by the Terminator movies.
Maybe comparing AI risk and the Terminator movies can convince people who are already more sympathetic to thoughts that are “out there”, but I think this would have a negative effect on most other people. Generally, I suspect this comparison underestimates the significance of broader public acceptance, or credibility within government.
Perhaps it might make sense to say “certain AI existential risk scenarios and The Terminator are superficially similar, in the sense that they both involve superintelligent AI that may not be beneficial by default”. At least currently, I’m much more hesitant to say “AI risk is like The Terminator”.
(Edited because the above no longer matches my views or experiences)
Yes, I think saying “AGI x-risk” is much more accurate than “AI risk”, in terms of what we are actually referring to. Also worth saying that The Terminator films have the right premise:
Defense network computers. New… powerful… hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.
[Fast takeoff, convergent instrumental goal of self-preservation]. But everything after this [GIF of Skynet nuking humanity], involving killer robots, is very unrealistic (more realistic: everyone simultaneously dropping dead from poisoning with botulinum toxin delivered by undetectable nanodrones; and yes, even using the nukes would probably not happen).