I don’t think this is a good characterization of e.g. Kelsey’s preference for her Philip Morris analogy over the Terminator analogy. Does rogue Philip Morris sound like a far harder problem to solve than rogue Skynet? Not to me, which is why her preference seems to me much more motivated by not wanting to sound science-fiction-y. Same with Dylan’s piece; it doesn’t seem to be saying “AI risk is a much harder problem than implied by the Terminator films”, except insofar as it misrepresents the Terminator films as involving evil humans intentionally making evil AI.
It seems to me like the proper explanatory path is “Like Terminator?” → “Basically” → “So why not just not give AI nuclear launch codes?” → “There are a lot of other ways AI could take over”.
“Like Terminator?” → “No, like Philip Morris” seems liable to confuse the audience about the very basic details of the issue, because Philip Morris didn’t take over the world.