It does look like AI and deep learning will by default push toward greater surveillance and greater power for intelligence agencies. They could supercharge passive surveillance of online activity, enable prediction of future crime, and make lie detection reliable.
But here’s the catch. Year on year, AI and synthetic biology become more powerful and accessible. As the Yudkowsky-Moore law of mad science puts it: “Every 18 months, the minimum IQ necessary to destroy the world drops by one point.” How could we possibly expect to be headed toward a stably secure civilization, given that the destructive power of technologies is increasing more quickly than we can adapt our institutions and ourselves to deal with it? An obvious answer is that in a world where many people can engineer a pandemic in their basement, we’ll need greater online surveillance to flag when someone is ordering a concerning combination of lab equipment (a toy sketch of what such flagging might look like follows below), or to more sensitively detect homicidal motives.
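To make the flagging idea concrete, here is a minimal sketch of combination-based order screening. Everything in it is hypothetical: the item names, the watchlist, and the function are invented for illustration, and any real system would presumably use statistical models over far richer signals rather than a hard-coded list.

```python
# Purely illustrative sketch: flag purchase histories that cover a
# watchlisted "concerning combination" of lab equipment. All item names
# and watchlist entries below are invented for illustration.
from typing import FrozenSet, List, Set

# Hypothetical watchlist: each entry is a set of items that is only
# concerning when acquired together, not individually.
WATCHLIST: List[FrozenSet[str]] = [
    frozenset({"dna_synthesizer", "bsl_2_hood", "viral_vector_kit"}),
    frozenset({"centrifuge", "pcr_machine", "select_agent_reagent"}),
]

def flag_order_history(purchased: Set[str]) -> List[FrozenSet[str]]:
    """Return every watchlisted combination fully covered by the purchases."""
    return [combo for combo in WATCHLIST if combo <= purchased]

if __name__ == "__main__":
    history = {"pcr_machine", "centrifuge", "select_agent_reagent", "gloves"}
    for combo in flag_order_history(history):
        print("flagged combination:", sorted(combo))
```

The design point the sketch is meant to capture is that no single item is suspicious on its own; it is the co-occurrence of items that trips the flag, mirroring the “concerning combination” framing above.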
On this view, the issue of ideological engineering by governments that are not acting in the service of their people is one we’re just going to have to deal with...
Another thought is that there will be huge effects from AI (as from the internet in general) that come from corporations rather than governments. Interacting with apps aggressively tuned for profit (e.g. a supercharged version of the vision described in the Time Well Spent video: http://www.timewellspent.io/) could, I don’t know, increase the docility of the populace or have some other wild effects.