Artificial intelligence is very difficult to control. Even in relatively simple applications, top AI experts struggle to make it behave. This becomes increasingly dangerous as AI grows more powerful. Many experts fear that if a sufficiently advanced AI were to escape our control, it could extinguish all life on Earth. Because an AI pursues whatever goals we give it with no regard for other consequences, it would stop at nothing – even human extinction – to maximize its reward.
We can’t know exactly how this would happen—but to make it less abstract, let’s imagine some possibilities. Any AI with internet access might save millions of copies of itself on unsecured computers all over the world, each ready to wake up if another were destroyed. This alone would make it virtually indestructible unless humans destroyed the internet and every computer on Earth. Doing so would be politically difficult in the best case—but especially so if the AI were also using millions of convincing disinformation bots to distract people, conceal the truth, or persuade humans not to act. The AI might also conduct brilliant cyberattacks to seize control of critical infrastructure like power stations, hospitals, or water treatment facilities. It could hack into weapons of mass destruction, or invent its own. And what it couldn’t do itself, it could blackmail humans into doing, or bribe them with cash seized from online bank accounts.
For these reasons, most AI experts think advanced AI is much likelier to wipe out human life than climate change. Even if you think this is unlikely, the stakes are high enough to warrant caution.
most AI experts think advanced AI is much likelier to wipe out human life than climate change
I’m not sure this is true, unless you use a very restrictive definition of “AI expert”. I would be surprised if most AI researchers saw AI as a greater threat than climate change.
I took that from a Kelsey Piper writeup here, assuming she was summarizing some study:
“Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.”
The hyperlink goes to an FHI paper that appears just to summarize various risks, so it’s unclear what her source was for the “most.” I’d be curious to know as well. She does stress the greater variance of outcomes and uncertainty surrounding AI (“Our predictions about climate change are more confident, both for better and for worse”), so maybe my distillation should admit that too.