Isn’t a key difference that in Terminator the AI seems incredibly incompetent at wiping us out? Surely we’d be destroyed in no time — to start with it could just manufacture a poison like dioxin and coat the world (or something much smarter). Going around with tanks and guns as depicted in the film is entirely unnecessary.
I feel like this is a pretty insignificant objection, because it implies someone might go around thinking, “Don’t worry, AI risk is just like Terminator! All we’ll have to do is bring humanity back from the brink of extinction, fighting amongst the rubble of civilization after a nuclear holocaust.” Surely if people think the threat is only as bad as Terminator, that’s plenty to get them to care.
I interpreted them not as saying that Terminator underplays the issue but rather that it misrepresents what a real AI would be able to do (in a way that probably makes the problem seem far easier to solve). But that may be me suffering from the curse of knowledge.
I don’t think this is a good characterization of e.g. Kelsey’s preference for her Philip Morris analogy over the Terminator analogy—does rogue Philip Morris sound like a far harder problem to solve than rogue Skynet? Not to me, which is why it seems to me much more motivated by not wanting to sound science-fiction-y. Same as Dylan’s piece; it doesn’t seem to be saying “AI risk is a much harder problem than implied by the Terminator films”, except insofar as it misrepresents the Terminator films as involving evil humans intentionally making evil AI.
It seems to me like the proper explanatory path is “Like Terminator?” → “Basically” → “So why not just not give AI nuclear launch codes?” → “There are a lot of other ways AI could take over”.
“Like Terminator?” → “No, like Philip Morris” seems liable to confuse the audience about the very basic details of the issue, because Philip Morris didn’t take over the world.